WorldWideScience

Sample records for soft error rate

  1. Evaluation of soft errors rate in a commercial memory EEPROM

    International Nuclear Information System (INIS)

    Claro, Luiz H.; Silva, A.A.; Santos, Jose A.

    2011-01-01

    Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. The soft error susceptibility differs between memory technologies; hence, experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented together with the results for SER in a commercial electrically erasable and programmable read-only memory (EEPROM). (author)

  2. Accelerated testing for cosmic soft-error rate

    International Nuclear Information System (INIS)

    Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; O'Gorman, T.J.; Ross, J.M.

    1996-01-01

    This paper describes the experimental techniques which have been developed at IBM to determine the sensitivity of electronic circuits to cosmic rays at sea level. It relates IBM circuit design and modeling, chip manufacture with process variations, and chip testing for SER sensitivity. This vertical integration from design to final test and with feedback to design allows a complete picture of LSI sensitivity to cosmic rays. Since advanced computers are designed with LSI chips long before the chips have been fabricated, and the system architecture is fully formed before the first chips are functional, it is essential to establish the chip reliability as early as possible. This paper establishes techniques to test chips that are only partly functional (e.g., only 1Mb of a 16Mb memory may be working) and can establish chip soft-error upset rates before final chip manufacturing begins. Simple relationships derived from measurement of more than 80 different chips manufactured over 20 years allow total cosmic soft-error rate (SER) to be estimated after only limited testing. Comparisons between these accelerated test results and similar tests determined by "field testing" (which may require a year or more of testing after manufacturing begins) show that the experimental techniques are accurate to a factor of 2.

  3. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
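
    As a rough illustration of the approach summarized above (fit a Gaussian mixture to soft observations with EM, then evaluate the BER analytically from the fitted parameters), the Python sketch below uses scikit-learn's GaussianMixture on simulated BPSK soft samples. The fixed model order, SNR, and zero decision threshold are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: analytical BER from a Gaussian-mixture fit to soft samples,
# assuming BPSK where bit "1" maps to +1 and an error occurs when the soft
# observation falls below zero.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated soft observations for transmitted "+1" bits at an assumed SNR.
snr_db = 6.0
sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))
samples = 1.0 + sigma * rng.standard_normal(20000)

# Fit a Gaussian mixture with EM; the model order would normally be chosen by
# a mutual-information (or similar) criterion rather than fixed here.
gm = GaussianMixture(n_components=2, random_state=0).fit(samples.reshape(-1, 1))

# Analytical BER: probability mass of the fitted pdf below the decision
# threshold (0), summed over mixture components.
weights = gm.weights_
means = gm.means_.ravel()
stds = np.sqrt(gm.covariances_.ravel())
ber_gm = np.sum(weights * norm.cdf((0.0 - means) / stds))

# Monte Carlo reference for comparison.
ber_mc = np.mean(samples < 0.0)
print(f"GM-based BER estimate: {ber_gm:.2e}, Monte Carlo: {ber_mc:.2e}")
```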

  4. Soft error rate analysis methodology of multi-pulse-single-event transients

    International Nuclear Information System (INIS)

    Zhou Bin; Huo Mingxue; Xiao Liyi

    2012-01-01

    As transistor feature sizes scale down, soft errors in combinational logic caused by high-energy particle radiation are gaining more and more attention. In this paper, a combinational logic soft error analysis methodology considering multi-pulse-single-event transients (MPSETs) and re-convergence with multiple transient pulses is proposed. In the proposed approach, the voltage pulse produced at the standard cell output is approximated by a triangle waveform and characterized by three parameters: pulse width, the transition time of the first edge, and the transition time of the second edge. For pulses whose amplitude is smaller than the supply voltage, an edge extension technique is proposed. Moreover, an efficient electrical masking model that comprehensively considers transition time, delay, width and amplitude is proposed, along with an approach that uses the transition times of the two edges and the pulse width to compute the pulse amplitude. Finally, our proposed firstly-independently-propagating-secondly-mutually-interacting (FIP-SMI) approach is used to handle the more practical case of re-convergent gates with multiple transient pulses. For MPSETs, a random generation model is also proposed. Compared to the estimates obtained using circuit-level simulations with HSpice, our proposed soft error rate analysis algorithm shows about 10% error in SER estimation with a speedup of 300 when the single-pulse-single-event transient (SPSET) is considered. We have also demonstrated that the runtime and SER decrease with increasing P0 using designs from the ISCAS-85 benchmarks. (authors)
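
    The record above does not give the underlying equations. As a loosely related illustration of representing a single-event transient by a triangular pulse (width plus two edge transition times) and applying a simple electrical-masking rule during propagation, the toy Python sketch below uses an assumed attenuation rule (a pulse no wider than a gate's delay is masked); it is not the paper's model.

```python
# Toy illustration only: a triangular SET pulse characterized by its width and
# two edge transition times, propagated through a gate chain with a crude
# electrical-masking rule (assumed here, not taken from the paper).
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrianglePulse:
    width: float      # pulse width (ps)
    t_rise: float     # transition time of the first edge (ps)
    t_fall: float     # transition time of the second edge (ps)
    amplitude: float  # peak voltage (V)

def propagate(pulse: TrianglePulse, gate_delay: float, vdd: float) -> Optional[TrianglePulse]:
    """Apply a simple electrical-masking rule at one gate: a pulse no wider than
    the gate delay is fully masked; otherwise it narrows by the gate delay and
    its amplitude shrinks proportionally, clipped to the supply voltage."""
    if pulse.width <= gate_delay:
        return None  # electrically masked
    new_width = pulse.width - gate_delay
    new_amplitude = min(vdd, pulse.amplitude * new_width / pulse.width)
    return TrianglePulse(new_width, pulse.t_rise, pulse.t_fall, new_amplitude)

# Propagate a 120 ps pulse through five identical gates (30 ps delay each).
pulse = TrianglePulse(width=120.0, t_rise=20.0, t_fall=25.0, amplitude=1.0)
for stage in range(5):
    pulse = propagate(pulse, gate_delay=30.0, vdd=1.2)
    if pulse is None:
        print(f"pulse electrically masked at stage {stage}")
        break
    print(f"stage {stage}: width={pulse.width:.0f} ps, amplitude={pulse.amplitude:.2f} V")
```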

  5. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    Science.gov (United States)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space will double every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  6. Modeling the cosmic-ray-induced soft-error rate in integrated circuits: An overview

    International Nuclear Information System (INIS)

    Srinivasan, G.R.

    1996-01-01

    This paper is an overview of the concepts and methodologies used to predict soft-error rates (SER) due to cosmic and high-energy particle radiation in integrated circuit chips. The paper emphasizes the need for the SER simulation using the actual chip circuit model which includes device, process, and technology parameters as opposed to using either the discrete device simulation or generic circuit simulation that is commonly employed in SER modeling. Concepts such as funneling, event-by-event simulation, nuclear history files, critical charge, and charge sharing are examined. Also discussed are the relative importance of elastic and inelastic nuclear collisions, rare event statistics, and device vs. circuit simulations. The semi-empirical methodologies used in the aerospace community to arrive at SERs [also referred to as single-event upset (SEU) rates] in integrated circuit chips are reviewed. This paper is one of four in this special issue relating to SER modeling. Together, they provide a comprehensive account of this modeling effort, which has resulted in a unique modeling tool called the Soft-Error Monte Carlo Model, or SEMM

  7. Calculation of the soft error rate of submicron CMOS logic circuits

    International Nuclear Information System (INIS)

    Juhnke, T.; Klar, H.

    1995-01-01

    A method to calculate the soft error rate (SER) of CMOS logic circuits with dynamic pipeline registers is described. This method takes into account charge collection by drift and diffusion. The method is verified by comparison of calculated SERs to measurement results. Using this method, the SER of a highly pipelined multiplier is calculated as a function of supply voltage for a 0.6 μm, 0.3 μm, and 0.12 μm technology, respectively. It has been found that the SER of such highly pipelined submicron CMOS circuits may become too high, so that countermeasures have to be taken. Since the SER greatly increases with decreasing supply voltage, low-power/low-voltage circuits may show more than eight times the SER at half the normal supply voltage as compared to conventional designs.

  8. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  9. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    International Nuclear Information System (INIS)

    Wirthlin, M J; Harding, A; Takai, H

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory.
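
    The per-device figures quoted above are consistent with multiplying the per-bit upset rates by the number of configuration and block RAM bits in the device. The short sketch below shows that arithmetic; the implied bit counts are back-solved from the quoted rates and are not vendor-published values.

```python
# Minimal arithmetic sketch: relate the per-bit upset rates quoted above to the
# per-device rates.  The implied bit counts are back-solved from the abstract's
# numbers; they are not vendor-published figures.
cfg_rate_per_bit = 1.1e-10     # configuration memory upsets/bit/s
bram_rate_per_bit = 9.06e-11   # block RAM upsets/bit/s
cfg_rate_per_dev = 6.85e-3     # upsets/device/s (Kintex 7K325)
bram_rate_per_dev = 1.49e-3    # upsets/device/s

cfg_bits = cfg_rate_per_dev / cfg_rate_per_bit      # ~6.2e7 bits implied
bram_bits = bram_rate_per_dev / bram_rate_per_bit   # ~1.6e7 bits implied

# Mean time between upsets for the whole device (both memories combined),
# in this particular radiation environment.
total_rate = cfg_rate_per_dev + bram_rate_per_dev   # upsets/s
mtbu_seconds = 1.0 / total_rate
print(f"implied configuration bits: {cfg_bits:.2e}")
print(f"implied block RAM bits:     {bram_bits:.2e}")
print(f"mean time between upsets:   {mtbu_seconds:.0f} s")
```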

  10. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...

  11. Terrestrial neutron-induced soft errors in advanced memory devices

    CERN Document Server

    Nakamura, Takashi; Ibe, Eishi; Yahagi, Yasuo; Kameyama, Hideaki

    2008-01-01

    Terrestrial neutron-induced soft errors in semiconductor memory devices are currently a major concern in reliability issues. Understanding the mechanism and quantifying soft-error rates are primarily crucial for the design and quality assurance of semiconductor memory devices. This book covers the relevant up-to-date topics in terrestrial neutron-induced soft errors, and aims to provide succinct knowledge on neutron-induced soft errors to the readers by presenting several valuable and unique features.

  12. Soft errors in modern electronic systems

    CERN Document Server

    Nicolaidis, Michael

    2010-01-01

    This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating the soft errors effect in advanced electronics, including the fundamental physical mechanisms of radiation induced soft errors, the various steps that lead to a system failure, the modelling and simulation of soft error at various levels (including physical, electrical, netlist, event driven, RTL, and system level modelling and simulation), hardware fault injection, accelerated radiation testing and natural environment testing, s

  13. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    Science.gov (United States)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by the daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.

  14. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    International Nuclear Information System (INIS)

    Wang Zhongming; Lu Min; Yao Zhibin; Guo Hongxia

    2011-01-01

    SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanisms in an FPGA's configuration memory differ from those in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet this requirement. In this paper, we propose a new methodology to analyze the soft errors in SRAM-based FPGAs. This method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed and routed netlist, search for critical logic nodes and paths that may destroy the circuit topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to obtain the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach. (semiconductor integrated circuits)

  15. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  16. Field testing for cosmic ray soft errors in semiconductor memories

    International Nuclear Information System (INIS)

    O'Gorman, T.J.; Ross, J.M.; Taber, A.H.; Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; Walsh, J.L.

    1996-01-01

    This paper presents a review of experiments performed by IBM to investigate the causes of soft errors in semiconductor memory chips under field test conditions. The effects of alpha-particles and cosmic rays are separated by comparing multiple measurements of the soft-error rate (SER) of samples of memory chips deep underground and at various altitudes above the earth. The results of case studies on four different memory chips show that cosmic rays are an important source of the ionizing radiation that causes soft errors. The results of field testing are used to confirm the accuracy of the modeling and the accelerated testing of chips
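
    A minimal sketch of the separation logic described above: an underground measurement (cosmic rays shielded out) isolates the alpha-particle component of the SER, and subtracting it from a sea-level measurement leaves the cosmic-ray component, which then scales with the local neutron flux at altitude. The FIT values and flux ratio below are made-up placeholders, not IBM data.

```python
# Illustrative sketch of separating alpha and cosmic SER components from
# underground and sea-level field measurements.  All numbers are placeholders.
ser_sea_level = 1200.0    # FIT per device, measured at sea level (placeholder)
ser_underground = 400.0   # FIT per device, measured deep underground (placeholder)

ser_alpha = ser_underground                 # cosmic flux is negligible underground
ser_cosmic = ser_sea_level - ser_underground

# At altitude the cosmic component scales with the local neutron flux, while
# the alpha component is unchanged (it comes from the package itself).
flux_ratio_at_altitude = 4.0                # assumed relative neutron flux
ser_at_altitude = ser_alpha + ser_cosmic * flux_ratio_at_altitude
print(f"alpha: {ser_alpha:.0f} FIT, cosmic at sea level: {ser_cosmic:.0f} FIT, "
      f"predicted at altitude: {ser_at_altitude:.0f} FIT")
```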

  17. Modelling and mitigation of soft-errors in CMOS processors

    NARCIS (Netherlands)

    Rohani, A.

    2014-01-01

    This thesis addresses soft errors in digital systems. Different aspects of soft errors are covered here, including an accurate simulation model to emulate soft errors in a gate-level netlist, a simulation framework to study the impact of soft errors in a VHDL design and an

  18. An Investigation into Soft Error Detection Efficiency at Operating System Level

    OpenAIRE

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and soft...

  19. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    Science.gov (United States)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.

  20. Soft errors from particles to circuits

    CERN Document Server

    Autran, Jean-Luc

    2015-01-01

    ""Soft Errors: From Particles to Circuits covers all aspects of the design, use, application, performance, and testing of parts, devices, and systems and addresses every perspective from an engineering, scientific, or physical point of view. … Many good texts have been written on similar subjects, but none as thorough, as clear, and as complete as this volume. … [The authors] have mastered the past, absorbed the present, and captured the trends of the future in one of the most important technologies of our time. … An extremely useful text that has succeeded in presenting wit

  1. Alpha particle induced soft errors in NMOS RAMs: a review

    International Nuclear Information System (INIS)

    Carter, P.M.; Wilkins, B.R.

    1987-01-01

    The paper aims to explain the alpha particle induced soft error phenomenon using the NMOS dynamic random access memory (RAM) as a model. It discusses some of the many techniques experimented with by manufacturers to overcome the problem, and gives a review of the literature covering most aspects of soft errors in dynamic RAMs. Finally, the soft error performance of current dynamic RAM and static RAM products from several manufacturers is compared. (author)

  2. Soft error evaluation in SRAM using α sources

    International Nuclear Information System (INIS)

    He Chaohui; Chu Jun; Ren Xueming; Xia Chunmei; Yang Xiupei; Zhang Weiwei; Wang Hongquan; Xiao Jiangbo; Li Xiaolin

    2006-01-01

    Soft errors in memories directly influence the reliability of products. To compare the resilience of three different memories against soft errors in alpha-particle irradiation experiments, the numbers of soft errors are measured for three different SRAMs, and the single event upset (SEU) cross sections and failures in time (FIT) are calculated. According to the SEU cross sections, A166M has the best immunity to soft errors, followed by B166M and then B200M. The average FIT of B166M is smaller than that of B200M, and that of A166M is the largest among them. (authors)
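
    For context, the sketch below shows the standard bookkeeping behind the quantities reported in this record: an SEU cross section derived from an irradiation run and the corresponding failures-in-time (FIT) rate for an assumed field flux. All numbers are illustrative placeholders, not the measured values for these SRAMs.

```python
# Hedged sketch: SEU cross section from an irradiation run and the resulting
# FIT rate for an assumed field flux.  All numbers are placeholders.
upsets = 150             # soft errors counted during the exposure (placeholder)
fluence = 1.0e7          # particles/cm^2 delivered by the source (placeholder)
bits = 4 * 1024 ** 2     # capacity of the device under test, in bits (placeholder)

sigma_device = upsets / fluence      # SEU cross section per device, cm^2
sigma_bit = sigma_device / bits      # SEU cross section per bit, cm^2

# FIT = expected failures per 10^9 device-hours for an assumed field flux.
field_flux = 13.0                    # particles/cm^2/h in the field (assumed)
fit = sigma_device * field_flux * 1.0e9
print(f"sigma/device = {sigma_device:.2e} cm^2, sigma/bit = {sigma_bit:.2e} cm^2, "
      f"FIT = {fit:.2e}")
```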

  3. Neutron-induced soft errors in CMOS circuits

    International Nuclear Information System (INIS)

    Hazucha, P.

    1999-01-01

    The subject of this thesis is a systematic study of soft errors occurring in CMOS integrated circuits when exposed to radiation. The vast majority of commercial circuits operate in the natural environment, ranging from sea level to aircraft flight altitudes (less than 20 km), where the errors are caused mainly by the interaction of atmospheric neutrons with silicon. Initially, the soft error rate (SER) of a static memory was measured for supply voltages from 2 V to 5 V when irradiated by 14 MeV and 100 MeV neutrons. The increased error rate due to decreased supply voltage has been identified as a potential hazard for the operation of future low-voltage circuits. A novel methodology was proposed for accurate SER characterization of a manufacturing process, and it was validated by measurements on a 0.6 μm process with 100 MeV neutrons. The methodology can be applied to the prediction of SER in the natural environment

  4. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  5. A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience

    Science.gov (United States)

    Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.

    2017-06-01

    A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro with further enhanced soft-error resilience obtained by integrating the guard-gate technique. The proposed design, as well as reference Quatro and regular flip-flops, was implemented and manufactured in a 65-nm CMOS bulk technology. Experimental characterization of their alpha and heavy-ion soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.

  6. Alpha-particle-induced soft errors in high speed bipolar RAM

    International Nuclear Information System (INIS)

    Mitsusada, Kazumichi; Kato, Yukio; Yamaguchi, Kunihiko; Inadachi, Masaaki

    1980-01-01

    As bipolar RAM (Random Access Memory) has evolved into a fast and highly integrated device, problems that were negligible in the past can no longer be ignored. Alpha particles emitted from radioactive substances in semiconductor package materials deserve particular attention because they cause soft errors. The authors fabricated a special 1 kbit bipolar RAM to investigate its soft errors. The package used was the standard 16-pin dual in-line type, with which a practical system mounting test and an alpha-particle irradiation test were performed. The results showed the occurrence of soft errors at an average rate of about 1 bit per 700 device-hours. It is concluded that the cause was the alpha particles emitted from the package materials, and it was also found that the soft error rate could be greatly reduced by shielding against alpha particles. The error rate increased significantly with decreasing stand-by current of the memory cells and with the accumulated charge determined by the time constant. The mechanism of soft errors was also investigated; an approximate model that estimates the error rate from the effective noise charge due to alpha particles and the amount of reversible charge of the memory cells is presented and compared with the experimental results. (Wakatsuki, Y.)

  7. Detecting Soft Errors in Stencil based Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
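
    A minimal sketch (not the SORREL library) of the underlying idea: fit a cheap linear-regression model that predicts each interior point of a stencil result from its neighbors, then flag points whose residual exceeds a tolerance as suspected soft errors. The stencil, fault model, and tolerance below are assumptions for illustration.

```python
# Minimal sketch (not SORREL): a linear-regression detector for a 1-D stencil.
import numpy as np

rng = np.random.default_rng(1)

def jacobi_step(u):
    """One 1-D Jacobi relaxation sweep (simple 3-point stencil)."""
    v = u.copy()
    v[1:-1] = 0.5 * (u[:-2] + u[2:])
    return v

# Training data from a fault-free run: (left, right) neighbours -> centre value.
u = np.sin(np.linspace(0.0, 3.0 * np.pi, 256)) + 0.01 * rng.standard_normal(256)
v = jacobi_step(u)
features = np.c_[u[:-2], u[2:], np.ones(u.size - 2)]
coef, *_ = np.linalg.lstsq(features, v[1:-1], rcond=None)

# Detection: inject a bit-flip-like corruption and look for large residuals.
v_faulty = v.copy()
v_faulty[100] += 0.5                       # injected "soft error"
residual = np.abs(v_faulty[1:-1] - features @ coef)
tolerance = 1e-3                           # assumed absolute tolerance for this toy
suspects = np.flatnonzero(residual > tolerance) + 1
print("suspected corrupted indices:", suspects)   # expected: [100]
```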

  8. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee the correctness of the analysis because they utilize approximate real number representations and pseudo-random numbers, and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order logic theorem proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties, and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  9. An investigation into soft error detection efficiency at operating system level.

    Science.gov (United States)

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system is reliable, and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  10. An Investigation into Soft Error Detection Efficiency at Operating System Level

    Directory of Open Access Journals (Sweden)

    Seyyed Amir Asghari

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system is reliable, and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  11. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2015-01-01

    Various HRA methods have been developed in relation to NPP maintenance and operation, such as the Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H). Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been considerably changed by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features in advanced MCRs, soft controls are an important feature because the operation actions in NPP advanced MCRs are performed by soft controls. Consequently, those conventional methods may not sufficiently consider the features of soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human errors is suggested, based on a soft control task analysis and a literature review of widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human errors in advanced MCRs is developed. First, the factors which an HRA method for advanced MCRs should encompass are derived based on the literature review and the soft control task analysis. Based on the derived factors, an execution HRA framework for advanced MCRs is developed, focusing mainly on the features of soft controls. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an HRA database is developed through lab-scale simulation

  12. When soft controls get slippery: User interfaces and human error

    International Nuclear Information System (INIS)

    Stubler, W.F.; O'Hara, J.M.

    1998-01-01

    Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety

  13. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan, E-mail: liushuhuan@mail.xjtu.edu.cn; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-21

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested using an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as the failure rate, unavailability and mean time to failure (MTTF). Based on the fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.

  14. Neutron detection using soft errors in dynamic random access memories

    International Nuclear Information System (INIS)

    Darambara, D.G.; Spyrou, N.M.

    1992-01-01

    The fact that energetic alpha particles have been observed to be capable of inducing single-event upsets in integrated circuit memories has become a topic of considerable interest in the past few years. One recognized difficulty with dynamic random access memory devices (dRAMs) is that the alpha-particle 'contamination' present within the dRAM encapsulating material interacts sufficiently to corrupt stored data. The authors utilized the fact that these corruptions may be induced in dRAMs by the interaction of charged particles with the chip of the dRAM itself as the basis of a hardware system for neutron detection, with a view to applications in neutron imaging and elemental analysis. The design incorporates a bank of dRAMs on which the particles are incident. Initially, these particles were alpha particles from an appropriate alpha-emitting source, employed to assess system parameters. The sensitivity of the device to logic state upsets by ionizing radiation is a function of design and technology parameters, including storage node area, node capacitance, operating voltage, minority carrier lifetime, the electric field pattern in the bulk silicon, and specific device geometry. The soft error rate of the device in a given package depends on the flux of alphas, the energy spectrum, the distribution of incident angles, the target area, the total stored charge, the collection efficiency, the cell geometry, the supply voltage, the cycle and refresh time, and the noise margin

  15. Neutron detection using soft errors in dynamic Random Access Memories

    International Nuclear Information System (INIS)

    Darambara, D.G.; Spyrou, N.M.

    1994-01-01

    The purpose of this paper is to present results from experiments that have been performed to show the memory cycle time dependence of the soft errors produced by the interaction of alpha particles with dynamic random access memory devices, with a view to using these as position-sensitive detectors. Furthermore, a preliminary feasibility study indicates that dynamic RAMs can be used as neutron detectors by utilizing (n, α) capture reactions in a Li converter placed on top of the active area of the memory chip. ((orig.))

  16. Soft errors in dynamic random access memories - a basis for dosimetry

    International Nuclear Information System (INIS)

    Haque, A.K.M.M.; Yates, J.; Stevens, D.

    1986-01-01

    The soft error rates of a number of 64k and 256k dRAMs from several manufacturers have been measured, employing an MC 68000 microprocessor. For this 'accelerated test' procedure, a 37 kBq (1 μCi) ²⁴¹Am alpha-emitting source was used. Both 64k and 256k devices exhibited widely differing error rates. It was generally observed that the spread of errors over a particular device/manufacturer was much smaller than the differences between device families and manufacturers. Bit line errors formed a significant part of the total for 64k dRAMs, whereas in 256k dRAMs cell errors dominated; the latter also showed an enhanced sensitivity to integrated dose leading to total failure, and a time-dependent recovery. Although several theoretical models explain soft error mechanisms and predict responses which are compatible with our experimental results, it is considered that microdosimetric and track structure methods should be applied to the problem for its better appreciation. Finally, attention is drawn to the need for further studies of dRAMs, with a view to their use as digital dosemeters. (author)

  17. An empirical study on the basic human error probabilities for NPP advanced main control room operation using soft control

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Harbi, Mohamed Ali Salem Al; Lee, Seung Jun; Kang, Hyun Gook; Seong, Poong Hyun

    2013-01-01

    Highlights: ► The operation environment of MCRs in NPPs has changed by adopting new HSIs. ► The operation action in NPP Advanced MCRs is performed by soft control. ► Different basic human error probabilities (BHEPs) should be considered. ► BHEPs in a soft control operation environment are investigated empirically. ► This work will be helpful to verify if soft control has positive or negative effects. -- Abstract: By adopting new human–system interfaces that are based on computer-based technologies, the operation environment of main control rooms (MCRs) in nuclear power plants (NPPs) has changed. The MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, soft controls, and so on, are called Advanced MCRs. Among the many features in Advanced MCRs, soft controls are an important feature because the operation actions in NPP Advanced MCRs are performed by soft controls. Using soft controls such as mouse control, touch screens, and so on, operators can select a specific screen, then choose the controller, and finally manipulate the devices. However, because of the different interfaces between soft control and hardwired conventional type control, different basic human error probabilities (BHEPs) should be considered in the Human Reliability Analysis (HRA) for advanced MCRs. Although there are many HRA methods to assess human reliabilities, such as the Technique for Human Error Rate Prediction (THERP), Accident Sequence Evaluation Program (ASEP), Human Error Assessment and Reduction Technique (HEART), Human Event Repository and Analysis (HERA), Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR), Cognitive Reliability and Error Analysis Method (CREAM), and so on, these methods have been applied to conventional MCRs, and they do not consider the new features of advanced MCRs such as soft controls. As a result, there is an insufficient database for assessing human reliabilities in advanced MCRs

  18. An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol

    2011-01-01

    In this work, a method was proposed for quantifying human errors that may occur during operation executions using soft control. Soft controls of advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests a modified K-HRA method for quantifying error probability.

  19. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Wei He

    2016-01-01

    Aiming to estimate the SEE soft error performance of complex electronic systems, a soft error propagation model based on a cellular automaton is proposed, and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main parameters for the raw soft error vulnerability of the modules and the coupling factors. Results indicate that the proposed method is feasible.

  20. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20-65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and applying them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed

  1. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    He, Wei; Wang, Yueke; Xing, Kefei; Yang, Jianwei

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...

  2. Human error mode identification for NPP main control room operations using soft controls

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol

    2011-01-01

    The operation environment of main control rooms (MCRs) in modern nuclear power plants (NPPs) has considerably changed over the years. Advanced MCRs, which have been designed by adapting digital and computer technologies, have simpler interfaces using large display panels, computerized displays, soft controls, computerized procedure systems, and so on. The actions for NPP operations are performed using soft controls in advanced MCRs. Soft controls have different features from conventional controls. Operators need to navigate the screens to find indicators and controls and to manipulate controls using a mouse, touch screens, and so on. Due to these different interfaces, different human errors should be considered in the human reliability analysis (HRA) for advanced MCRs. In this work, human errors that could occur during operation executions using soft controls were analyzed. This work classifies the human errors in soft controls into six types, and the factors that affect the occurrence of these human errors are also analyzed. (author)

  3. Quantitative estimation of the human error probability during soft control operations

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jung, Wondea

    2013-01-01

    Highlights: ► An HRA method to evaluate execution HEP for soft control operations was proposed. ► The soft control tasks were analyzed and design-related influencing factors were identified. ► An application to evaluate the effects of soft controls was performed. - Abstract: In this work, a method was proposed for quantifying human errors that can occur during operation executions using soft controls. Soft controls of advanced main control rooms have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to identify the human error modes and quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests an evaluation framework for quantifying the execution error probability using soft controls. In the application, it was observed that the human error probabilities of soft controls compared both favorably and unfavorably with those of conventional controls, depending on the design quality of the advanced main control room

  4. Basic human error probabilities in advanced MCRs when using soft control

    International Nuclear Information System (INIS)

    Jang, In Seok; Seong, Poong Hyun; Kang, Hyun Gook; Lee, Seung Jun

    2012-01-01

    In a report on one of the renowned HRA methods, the Technique for Human Error Rate Prediction (THERP), it is pointed out that 'The paucity of actual data on human performance continues to be a major problem for estimating HEPs and performance times in nuclear power plant (NPP) task'. However, another critical difficulty is that most current HRA databases deal with operation in conventional MCRs. With the adoption of new human-system interfaces that are based on computer-based technologies, the operation environment of MCRs in NPPs has changed. The MCRs including these digital and computer technologies, such as large display panels, computerized procedures, soft controls, and so on, are called advanced MCRs. Because of the different interfaces, different Basic Human Error Probabilities (BHEPs) should be considered in human reliability analyses (HRAs) for advanced MCRs. This study carries out an empirical analysis of human error considering soft controls. The aim of this work is not only to compile a database using the simulator for advanced MCRs but also to compare BHEPs with those of a conventional MCR database.

  5. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  6. Soft error modeling and analysis of the Neutron Intercepting Silicon Chip (NISC)

    International Nuclear Information System (INIS)

    Celik, Cihangir; Unlue, Kenan; Narayanan, Vijaykrishnan; Irwin, Mary J.

    2011-01-01

    Soft errors are transient errors caused by excess charge carriers induced primarily by external radiation in semiconductor devices. Soft error phenomena could be used to detect thermal neutrons with a neutron monitoring/detection system by enhancing soft error occurrences in the memory devices. In this way, one can convert all semiconductor memory devices into neutron detection systems. Such a device is being developed at The Pennsylvania State University and is named the Neutron Intercepting Silicon Chip (NISC). The NISC envisions a miniature, power-efficient, active/passive neutron sensor/detector system. NISC aims to achieve this goal by introducing ¹⁰B-enriched Borophosphosilicate Glass (BPSG) insulation layers in the semiconductor memories. In order to model and analyze the NISC, an analysis tool using Geant4 as the transport and tracking engine was developed for the simulation of the charged particle interactions in the semiconductor memory model, named the NISC Soft Error Analysis Tool (NISCSAT). A simple model with a ¹⁰B-enriched layer on top of a lumped silicon region was developed to represent the semiconductor memory node. Soft error probability calculations were performed with NISCSAT in both single-node and array configurations to investigate device scaling by using different node dimensions in the model. Mono-energetic, mono-directional thermal and fast neutrons were used as the neutron sources. The soft error contribution due to the BPSG layer is also investigated with different ¹⁰B contents, and the results are presented in this paper.
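
    As a back-of-the-envelope illustration of why a ¹⁰B-enriched BPSG layer makes a memory sensitive to thermal neutrons, the sketch below evaluates the thin-layer capture probability P = 1 − exp(−NσT) for the ¹⁰B(n,α)⁷Li reaction. The layer thickness and ¹⁰B number density are rough assumptions, not NISC design values.

```python
# Back-of-the-envelope sketch: thermal-neutron capture probability of a thin
# 10B-enriched BPSG layer, P = 1 - exp(-N * sigma * t).  Densities and
# thickness below are rough assumptions, not NISC design values.
import math

sigma_b10 = 3840e-24       # approx. thermal (0.025 eV) 10B capture cross section, cm^2
layer_um = 1.0             # BPSG layer thickness, micrometers (assumed)
b10_density = 1.0e21       # 10B atoms per cm^3 in the glass (assumed)

thickness_cm = layer_um * 1e-4
capture_prob = 1.0 - math.exp(-b10_density * sigma_b10 * thickness_cm)
print(f"capture probability per normally incident thermal neutron: {capture_prob:.2e}")
```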

  7. A Case for Soft Error Detection and Correction in Computational Chemistry.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10¹⁸ floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.

  8. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
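
    As a hedged sketch of the general idea (tracking a drifting error rate with Gaussian process regression and extrapolating it forward), the Python snippet below uses scikit-learn's GaussianProcessRegressor on synthetic per-round error-rate estimates; it is not the paper's protocol, and all data are simulated.

```python
# Hedged sketch: smooth and extrapolate a time-varying error rate with
# Gaussian process regression (synthetic data, not the paper's protocol).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Synthetic drifting error rate and noisy estimates from error-correction data.
t = np.linspace(0.0, 10.0, 60)
true_rate = 0.01 + 0.004 * np.sin(0.6 * t)
observed = true_rate + 0.001 * rng.standard_normal(t.size)

kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t.reshape(-1, 1), observed)

# Predict (and extrapolate slightly beyond) the current error rate.
t_query = np.array([[10.5]])
mean, std = gp.predict(t_query, return_std=True)
print(f"predicted error rate at t=10.5: {mean[0]:.4f} +/- {std[0]:.4f}")
```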

  9. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    International Nuclear Information System (INIS)

    Aljneibi, Hanan Salah Ali; Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun

    2015-01-01

    To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, which is called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in ACRs. Operations using soft controls require operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These kinds of computerized displays and soft controls may make operations more convenient, but they might cause new types of human error. In this study, the human error mechanism during soft control operation is studied and modeled to be used for analysis and enhancement of human performance (or human errors) during NPP operation. The developed model would contribute to many applications for improving human performance (or reducing human errors), HMI designs, and operator training programs in ACRs. The developed model of the human error mechanism for soft control is based on the following assumptions: a human operator has a certain amount of cognitive resource capacity, and if the resources required by the operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur (especially a 'slip'); good HMI (Human-Machine Interface) design decreases the required resources; the operator's skillfulness decreases the required resources; and high vigilance increases the invested resources. In this study, the human error mechanism during soft control operation is studied and modeled to be used for analysis and enhancement of human performance (or reduction of human errors) during NPP operation

  10. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    Energy Technology Data Exchange (ETDEWEB)

    Aljneibi, Hanan Salah Ali [Khalifa Univ., Abu Dhabi (United Arab Emirates); Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, which are called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in ACRs. Operations using soft controls require operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These kinds of computerized displays and soft controls may make operations more convenient, but they might cause new types of human error. In this study the human error mechanism during soft control operation is studied and modeled to be used for the analysis and enhancement of human performance (or human errors) during NPP operation. The developed model would contribute to many applications to improve human performance (or reduce human errors), HMI designs, and operators' training programs in ACRs. The developed model of the human error mechanism for soft control is based on the assumptions that a human operator has a certain amount of capacity in cognitive resources and that if the resources required by operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur (especially as a 'slip'); good HMI (Human-Machine Interface) design decreases the required resources; an operator's skillfulness decreases the required resources; and high vigilance increases the invested resources. In this study the human error mechanism during soft control operation is studied and modeled to be used for the analysis and enhancement of human performance (or reduction of human errors) during NPP operation.

  11. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS

    International Nuclear Information System (INIS)

    Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.

    1982-06-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors

  12. Soft factors have an empirically testifiable effect on rating grade

    Directory of Open Access Journals (Sweden)

    Thomas Laufer

    2011-01-01

    Full Text Available The conclusions herein summarize the results of an empirical survey demonstrating the effects of soft factors on corporate rating grade. In the effort, three different software applications were used. By means of these applications, the soft factors in corporate ratings previously identified in a related effort were assessed for their impacts; all other applicable soft factors were treated in a neutral manner. As a result, based on the assessments supplied by the three applications, the weighted effect of the soft factors was determined, allowing priority charts to be compiled for the deployment of the factors as a targeted marketing tool. The charts also include the respective positive and negative effects of hard factors.

  13. 45 CFR 98.100 - Error Rate Report.

    Science.gov (United States)

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... the total dollar amount of payments made in the sample); the average amount of improper payment; and... not received. (e) Costs of Preparing the Error Rate Report—Provided the error rate calculations and...

  14. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  15. Technological Advancements and Error Rates in Radiation Therapy Delivery

    International Nuclear Information System (INIS)

    Margalit, Danielle N.; Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K.

    2011-01-01

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)–conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women’s Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher’s exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01–0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08–0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  16. A Survey of Soft-Error Mitigation Techniques for Non-Volatile Memories

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-02-01

    Full Text Available Non-volatile memories (NVMs offer superior density and energy characteristics compared to the conventional memories; however, NVMs suffer from severe reliability issues that can easily eclipse their energy efficiency advantages. In this paper, we survey architectural techniques for improving the soft-error reliability of NVMs, specifically PCM (phase change memory and STT-RAM (spin transfer torque RAM. We focus on soft-errors, such as resistance drift and write disturbance, in PCM and read disturbance and write failures in STT-RAM. By classifying the research works based on key parameters, we highlight their similarities and distinctions. We hope that this survey will underline the crucial importance of addressing NVM reliability for ensuring their system integration and will be useful for researchers, computer architects and processor designers.

  17. Low delay and area efficient soft error correction in arbitration logic

    Science.gov (United States)

    Sugawara, Yutaka

    2013-09-10

    There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from a plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.

  18. High strain-rate soft material characterization via inertial cavitation

    Science.gov (United States)

    Estrada, Jonathan B.; Barajas, Carlos; Henann, David L.; Johnsen, Eric; Franck, Christian

    2018-03-01

    Mechanical characterization of soft materials at high strain-rates is challenging due to their high compliance, slow wave speeds, and non-linear viscoelasticity. Yet, knowledge of their material behavior is paramount across a spectrum of biological and engineering applications from minimizing tissue damage in ultrasound and laser surgeries to diagnosing and mitigating impact injuries. To address this significant experimental hurdle and the need to accurately measure the viscoelastic properties of soft materials at high strain-rates (10^3-10^8 s^-1), we present a minimally invasive, local 3D microrheology technique based on inertial microcavitation. By combining high-speed time-lapse imaging with an appropriate theoretical cavitation framework, we demonstrate that this technique has the capability to accurately determine the general viscoelastic material properties of soft matter as compliant as a few kilopascals. Similar to commercial characterization algorithms, we provide the user with significant flexibility in evaluating several constitutive laws to determine the most appropriate physical model for the material under investigation. Given its straightforward implementation into most current microscopy setups, we anticipate that this technique can be easily adopted by anyone interested in characterizing soft material properties at high loading rates including hydrogels, tissues and various polymeric specimens.

  19. Soft black hole absorption rates as conservation laws

    International Nuclear Information System (INIS)

    Avery, Steven G.; Schwab, Burkhard W.

    2017-01-01

    The absorption rate of low-energy, or soft, electromagnetic radiation by spherically symmetric black holes in arbitrary dimensions is shown to be fixed by conservation of energy and large gauge transformations. We interpret this result as the explicit realization of the Hawking-Perry-Strominger Ward identity for large gauge transformations in the background of a non-evaporating black hole. Along the way we rederive and extend previous analytic results regarding the absorption rate for the minimal scalar and the photon.

  20. Soft black hole absorption rates as conservation laws

    Energy Technology Data Exchange (ETDEWEB)

    Avery, Steven G. [Brown University, Department of Physics, 182 Hope St, Providence, RI, 02912 (United States); Michigan State University, Department of Physics and Astronomy, East Lansing, MI, 48824 (United States); Schwab, Burkhard W. [Harvard University, Center for Mathematical Science and Applications, 1 Oxford St, Cambridge, MA, 02138 (United States)

    2017-04-10

    The absorption rate of low-energy, or soft, electromagnetic radiation by spherically symmetric black holes in arbitrary dimensions is shown to be fixed by conservation of energy and large gauge transformations. We interpret this result as the explicit realization of the Hawking-Perry-Strominger Ward identity for large gauge transformations in the background of a non-evaporating black hole. Along the way we rederive and extend previous analytic results regarding the absorption rate for the minimal scalar and the photon.

  1. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)

  2. Radiation effects and soft errors in integrated circuits and electronic devices

    CERN Document Server

    Fleetwood, D M

    2004-01-01

    This book provides a detailed treatment of radiation effects in electronic devices, including effects at the material, device, and circuit levels. The emphasis is on transient effects caused by single ionizing particles (single-event effects and soft errors) and effects produced by the cumulative energy deposited by the radiation (total ionizing dose effects). Bipolar (Si and SiGe), metal-oxide-semiconductor (MOS), and compound semiconductor technologies are discussed. In addition to considering the specific issues associated with high-performance devices and technologies, the book includes th

  3. Process error rates in general research applications to the Human ...

    African Journals Online (AJOL)

    Objective. To examine process error rates in applications for ethics clearance of health research. Methods. Minutes of 586 general research applications made to a human health research ethics committee (HREC) from April 2008 to March 2009 were examined. Rates of approval were calculated and reasons for requiring ...

  4. FPGAs and parallel architectures for aerospace applications soft errors and fault-tolerant design

    CERN Document Server

    Rech, Paolo

    2016-01-01

    This book introduces the concepts of soft errors in FPGAs, as well as the motivation for using commercial, off-the-shelf (COTS) FPGAs in mission-critical and remote applications, such as aerospace. The authors describe the effects of radiation in FPGAs, present a large set of soft-error mitigation techniques that can be applied in these circuits, as well as methods for qualifying these circuits under radiation. Coverage includes radiation effects in FPGAs, fault-tolerant techniques for FPGAs, use of COTS FPGAs in aerospace applications, experimental data of FPGAs under radiation, FPGA embedded processors under radiation, and fault injection in FPGAs. Since dedicated parallel processing architectures such as GPUs have become more desirable in aerospace applications due to high computational power, GPU analysis under radiation is also discussed. · Discusses features and drawbacks of reconfigurability methods for FPGAs, focused on aerospace applications; · Explains how radia...

  5. Individual Differences and Rating Errors in First Impressions of Psychopathy

    Directory of Open Access Journals (Sweden)

    Christopher T. A. Gillen

    2016-10-01

    Full Text Available The current study is the first to investigate whether individual differences in personality are related to improved first impression accuracy when appraising psychopathy in female offenders from thin-slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and when neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.

  6. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    Full Text Available A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) is presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 errors/(particle/cm^2), while the MTTF is approximately 110.7 h.
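
    The bookkeeping behind such a figure can be sketched as follows; the counts, fluence and in-orbit flux are invented numbers, and the bare cross-section arithmetic shown here is only a stand-in for the multi-signal flow graph model developed in the paper.

      # Illustrative accelerated-test bookkeeping (all numbers assumed).
      errors_observed = 42          # functional errors counted during the beam run
      fluence = 4.2e4               # particles/cm^2 delivered during the run
      flux_in_orbit = 1.3e-3        # particles/cm^2/h expected in the target environment

      sfer = errors_observed / fluence         # errors per (particle/cm^2), i.e. a cross-section
      error_rate = sfer * flux_in_orbit        # errors per hour in the assumed environment
      mttf_hours = 1.0 / error_rate            # mean time to failure

      print(f"SFER ~ {sfer:.1e} errors/(particle/cm^2), MTTF ~ {mttf_hours:.1f} h")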

  7. A critique of recent models for human error rate assessment

    International Nuclear Information System (INIS)

    Apostolakis, G.E.

    1988-01-01

    This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent. (author)

  8. Human error recovery failure probability when using soft controls in computerized control rooms

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Wondea

    2014-01-01

    Much of the literature categorizes the recovery process into three phases: detection of the problem situation, explanation of the problem causes or countermeasures against the problem, and end of recovery. Although the focus of recovery promotion has been on categorizing recovery phases and modeling the recovery process, research related to human recovery failure probabilities has not been performed actively. On the other hand, a few studies regarding recovery failure probabilities were conducted empirically. In summary, the research performed so far has several problems in terms of its use in human reliability analysis (HRA). By adopting new human-system interfaces that are based on computer-based technologies, the operating environment of MCRs in NPPs has changed from conventional MCRs to advanced MCRs. Because of the different interfaces between conventional and advanced MCRs, different recovery failure probabilities should be considered in the HRA for advanced MCRs. Therefore, this study carries out an empirical analysis of human error recovery probabilities under an advanced MCR mockup called the compact nuclear simulator (CNS). The aim of this work is not only to compile a recovery failure probability database using the simulator for advanced MCRs but also to collect recovery failure probabilities for defined human error modes in order to compare which human error mode has the highest recovery failure probability. The results show that the recovery failure probability for wrong screen selection was the lowest among the human error modes, which means that most human errors related to wrong screen selection can be recovered. On the other hand, the recovery failure probabilities of operation selection omission and delayed operation were 1.0. These results imply that once subjects omitted one task in the procedure, they had difficulty finding and recovering their errors without a supervisor's assistance. Also, wrong screen selection had an effect on delayed operation. That is, wrong screen

  9. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  10. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Full Text Available Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms by Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research effort was inferior to the decision tree logic developed by the medical doctor. After this experience, we discriminated many data sets and found four problems of discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (C.I.) of error rates and discriminant coefficients.
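
    A minimal sketch of the resampling idea (scikit-learn's ordinary LDA on synthetic data, not the Revised IP-OLDF of the paper): repeated k-fold splits yield a distribution of error rates from which a percentile-based 95% interval can be read off, and the same resampling can be applied to the fitted coefficients.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import RepeatedKFold, cross_val_score

      rng = np.random.default_rng(2)

      # Synthetic two-class data standing in for the ECG-style discrimination problem.
      n = 300
      X = np.vstack([rng.normal(0.0, 1.0, size=(n, 4)),
                     rng.normal(0.8, 1.0, size=(n, 4))])
      y = np.array([0] * n + [1] * n)

      # Repeated k-fold cross-validation gives a distribution of test error rates.
      cv = RepeatedKFold(n_splits=10, n_repeats=20, random_state=0)
      err = 1.0 - cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
      print(np.mean(err), np.percentile(err, [2.5, 97.5]))   # mean error and 95% interval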

  11. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  12. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  13. Soft Tissue Strain Rates in Side-Blast Incidents

    Science.gov (United States)

    2014-11-02

    An increase of strain rate is known to cause the stiffening of soft connective tissues (Haut and Haut 1997; Panjabi et al. 1998; Crisco et al.).

  14. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
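
    A greatly simplified sketch of the cell-based comparison is given below (Python/NumPy); the cell size, search range and thresholds are illustrative assumptions, and the actual CMC method defines four identification parameters and calibrated thresholds rather than the two checks shown here.

      import numpy as np

      def best_cell_match(cell, search, max_shift=8):
          """Best normalized cross-correlation of `cell` inside `search` over small shifts."""
          h, w = cell.shape
          c0 = (cell - cell.mean()) / (cell.std() + 1e-12)
          best = (-1.0, (0, 0))
          for dy in range(-max_shift, max_shift + 1):
              for dx in range(-max_shift, max_shift + 1):
                  patch = search[max_shift + dy: max_shift + dy + h,
                                 max_shift + dx: max_shift + dx + w]
                  p0 = (patch - patch.mean()) / (patch.std() + 1e-12)
                  score = float((c0 * p0).mean())
                  if score > best[0]:
                      best = (score, (dy, dx))
          return best

      def cmc_count(img_a, img_b, cell=32, max_shift=8, corr_thr=0.5, offset_tol=3):
          """Count cells with high correlation and mutually congruent registration offsets."""
          scores, offsets = [], []
          H, W = img_a.shape
          for y in range(0, H - cell + 1, cell):
              for x in range(0, W - cell + 1, cell):
                  y0, x0 = y - max_shift, x - max_shift
                  if y0 < 0 or x0 < 0 or y0 + cell + 2 * max_shift > H or x0 + cell + 2 * max_shift > W:
                      continue              # skip border cells whose search window falls outside
                  a_cell = img_a[y:y + cell, x:x + cell]
                  b_region = img_b[y0:y0 + cell + 2 * max_shift, x0:x0 + cell + 2 * max_shift]
                  s, off = best_cell_match(a_cell, b_region, max_shift)
                  scores.append(s)
                  offsets.append(off)
          scores, offsets = np.array(scores), np.array(offsets)
          good = scores > corr_thr
          ref = np.median(offsets[good], axis=0) if np.any(good) else np.zeros(2)
          congruent = np.all(np.abs(offsets - ref) <= offset_tol, axis=1)
          return int(np.sum(good & congruent))

      # Example: the same topography plus noise should yield a high CMC count.
      rng = np.random.default_rng(6)
      topo = rng.normal(size=(256, 256))
      print(cmc_count(topo, topo + 0.1 * rng.normal(size=topo.shape)))

    A declared match would then require the returned CMC count to exceed a threshold established from known matching and known non-matching populations, which is where the statistical error-rate models described in the abstract come in.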

  15. Study on a new framework of Human Reliability Analysis to evaluate soft control execution error in advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • The operation action in NPP Advanced MCRs is performed by soft control. • New HRA framework should be considered in the HRA for advanced MCRs. • HRA framework for evaluation of soft control execution human error is suggested. • Suggested method will be helpful to analyze human reliability in advance MCRs. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). However, the operating environment of MCRs in NPPs has changed with the adoption of new Human-System Interfaces (HSIs) that are based on computer-based technologies. The MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are a particularly important feature because operating actions in NPP advanced MCRs are performed by soft control. Due to the differences in interfaces between soft control and hardwired conventional type control, different Human Error Probabilities (HEPs) and a new HRA framework should be considered in the HRA for advanced MCRs. To this end, a new framework of a HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and the literature regarding widely accepted human error taxonomies is reviewed. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, empirical analysis of human error and error recovery considering soft controls under an advanced MCR mockup are carried out to collect human error data, which is

  16. Error rate performance of narrowband multilevel CPFSK signals

    Science.gov (United States)

    Ekanayake, N.; Fonseka, K. J. P.

    1987-04-01

    The paper presents a relatively simple method for analyzing the effect of IF filtering on the performance of multilevel FM signals. Using this method, the error rate performance of narrowband FM signals is analyzed for three different detection techniques, namely limiter-discriminator detection, differential detection and coherent detection followed by differential decoding. The symbol error probabilities are computed for a Gaussian IF filter and a second-order Butterworth IF filter. It is shown that coherent detection followed by differential decoding yields better performance than limiter-discriminator detection and differential detection, whereas the two noncoherent detectors yield approximately identical performance.

  17. Aniseikonia quantification: error rate of rule of thumb estimation.

    Science.gov (United States)

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia, and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation is disturbingly large. This problem is greatly magnified by the time and effort and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  18. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  19. The nearest neighbor and the bayes error rates.

    Science.gov (United States)

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E(k, l+1) ≤ E*(λ) ≤ E(k, l) ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E(k, l) and dE*(λ) are equal.
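
    The flavour of such comparisons can be reproduced numerically on a toy problem where the Bayes error is known in closed form; the sketch below uses a plain k-nearest-neighbor classifier (without the reject option of the (k, l) rule) and assumed distribution parameters.

      import numpy as np
      from scipy.stats import norm
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(3)

      # Two equal-prior 1-D Gaussians N(-1, 1) and N(+1, 1); the Bayes error is Phi(-1).
      n = 20000
      x = np.concatenate([rng.normal(-1.0, 1.0, n), rng.normal(+1.0, 1.0, n)])[:, None]
      y = np.array([0] * n + [1] * n)
      x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.25, random_state=0)

      bayes_error = norm.cdf(-1.0)
      knn_error = 1.0 - KNeighborsClassifier(n_neighbors=15).fit(x_tr, y_tr).score(x_te, y_te)
      print(f"Bayes error ~ {bayes_error:.3f}, 15-NN error ~ {knn_error:.3f}")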

  20. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and
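
    The core computation these codes automate is a fold of a measured upset cross-section curve with an LET flux spectrum; the sketch below uses a toy power-law spectrum and made-up Weibull parameters, whereas CREME96 supplies validated environment models and the full RPP/IRPP treatment.

      import numpy as np

      # LET grid (MeV*cm^2/mg) and a toy differential flux spectrum d(flux)/d(LET);
      # a real analysis would take this from an environment model such as CREME96.
      let = np.logspace(-1, 2, 400)
      dflux_dlet = 1.0e2 * let ** -3.0          # particles/cm^2/day per unit LET (placeholder)

      # Weibull fit to a measured per-bit SEU cross-section curve (made-up parameters).
      l0, w, s, sigma_sat = 1.0, 20.0, 1.5, 1.0e-8      # onset LET, width, shape, cm^2/bit
      excess = np.clip(let - l0, 0.0, None)
      sigma = sigma_sat * (1.0 - np.exp(-((excess / w) ** s)))

      # Upset rate per bit = integral over LET of cross-section times differential flux.
      integrand = sigma * dflux_dlet
      rate_per_bit_per_day = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(let)))
      print(f"{rate_per_bit_per_day:.2e} upsets/bit/day")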

  1. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the regenerating nonlinearity is investigated. It is shown that an increase in nonlinearity can compensate for an increase in noise figure or decrease in signal power. Furthermore, the influence of the improvement in signal extinction ratio along the cascade and the importance of choosing the proper threshold...
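
    A generic Monte Carlo illustration of that interplay (not the analytical model of the paper) is sketched below: each span adds Gaussian noise, each regenerator applies an assumed tanh-shaped reshaping characteristic, and a stronger nonlinearity yields a lower end-of-cascade bit error rate.

      import numpy as np

      rng = np.random.default_rng(4)

      def regenerate(signal, gain):
          """Saturating (tanh-shaped) reshaping characteristic; larger gain = stronger 2R action."""
          return 0.5 * (1.0 + np.tanh(gain * (signal - 0.5)))

      def cascade_ber(n_regens=20, noise_sigma=0.15, gain=6.0, n_bits=200_000):
          bits = rng.integers(0, 2, n_bits)
          signal = bits.astype(float)                                  # ideal marks/spaces at 1/0
          for _ in range(n_regens):
              signal = signal + rng.normal(0.0, noise_sigma, n_bits)   # amplifier noise per span
              signal = regenerate(signal, gain)                        # 2R reshaping
          return np.mean((signal > 0.5) != bits.astype(bool))          # final decision and BER

      print(cascade_ber(gain=3.0), cascade_ber(gain=8.0))              # stronger nonlinearity, lower BER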

  2. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  3. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar

    2012-12-29

    In this paper, we present an optimal resource allocation scheme (ORA) for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints on the system are considered. We consider the cases of both individual and global power constraints, individual constraints only and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with direct link only and uniform power allocation (UPA) in terms of minimizing the SER for all three cases of different constraints. Numerical results also show that the individual constraints only case provides the best performance at large signal-to-noise ratio (SNR).

  4. An empirical study on the human error recovery failure probability when using soft controls in NPP advanced MCRs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2014-01-01

    Highlights: • Many researchers have tried to understand human recovery process or step. • Modeling human recovery process is not sufficient to be applied to HRA. • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • Recovery failure probability in a soft control operation environment is investigated. • Recovery failure probability here would be important evidence for expert judgment. - Abstract: It is well known that probabilistic safety assessments (PSAs) today consider not just hardware failures and environmental events that can impact upon risk, but also human error contributions. Consequently, the focus on reliability and performance management has been on the prevention of human errors and failures rather than the recovery of human errors. However, the recovery of human errors is as important as the prevention of human errors and failures for the safe operation of nuclear power plants (NPPs). For this reason, many researchers have tried to find a human recovery process or step. However, modeling the human recovery process is not sufficient enough to be applied to human reliability analysis (HRA), which requires human error and recovery probabilities. In this study, therefore, human error recovery failure probabilities based on predefined human error modes were investigated by conducting experiments in the operation mockup of advanced/digital main control rooms (MCRs) in NPPs. To this end, 48 subjects majoring in nuclear engineering participated in the experiments. In the experiments, using the developed accident scenario based on tasks from the standard post trip action (SPTA), the steam generator tube rupture (SGTR), and predominant soft control tasks, which are derived from the loss of coolant accident (LOCA) and the excess steam demand event (ESDE), all error detection and recovery data based on human error modes were checked with the performance sheet and the statistical analysis of error recovery/detection was then

  5. Error Recovery Properties and Soft Decoding of Quasi-Arithmetic Codes

    Directory of Open Access Journals (Sweden)

    Christine Guillemot

    2007-08-01

    Full Text Available This paper first introduces a new set of aggregated state models for soft-input decoding of quasi arithmetic (QA codes with a termination constraint. The decoding complexity with these models is linear with the sequence length. The aggregation parameter controls the tradeoff between decoding performance and complexity. It is shown that close-to-optimal decoding performance can be obtained with low values of the aggregation parameter, that is, with a complexity which is significantly reduced with respect to optimal QA bit/symbol models. The choice of the aggregation parameter depends on the synchronization recovery properties of the QA codes. This paper thus describes a method to estimate the probability mass function (PMF of the gain/loss of symbols following a single bit error (i.e., of the difference between the number of encoded and decoded symbols. The entropy of the gain/loss turns out to be the average amount of information conveyed by a length constraint on both the optimal and aggregated state models. This quantity allows us to choose the value of the aggregation parameter that will lead to close-to-optimal decoding performance. It is shown that the optimum position for the length constraint is not the last time instant of the decoding process. This observation leads to the introduction of a new technique for robust decoding of QA codes with redundancy which turns out to outperform techniques based on the concept of forbidden symbol.

  6. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients are manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise are correlated with δ using Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error with R = −0.58 and −0.19 for both studies, respectively. Conclusions: Cine EPID

  7. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    International Nuclear Information System (INIS)

    Yip, Stephen; Rottmann, Joerg; Berbeco, Ross

    2014-01-01

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients are manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise are correlated with δ using Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error with R = −0.58 and −0.19 for both studies, respectively. Conclusions: Cine EPID

  8. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and which takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and considering Rayleigh fading channels. © 2011 IEEE.

  9. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  10. A statistical approach to estimating effects of performance shaping factors on human error probabilities of soft controls

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Hyun Seong, Poong

    2015-01-01

    Despite recent efforts toward data collection for supporting human reliability analysis, there remains a lack of empirical basis in determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance the empirical basis regarding the effects of the PSFs, a statistical methodology using a logistic regression and stepwise variable selection was proposed, and the effects of the PSF on HEPs related with the soft controls were estimated through the methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, the procedure quality, practice level, and the operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in terms of a multiplicative form. The usefulness and limitation of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods will provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using a logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated
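
    The statistical machinery can be sketched on synthetic data (the PSF surrogates, coefficients and sample size below are invented): a logistic regression of error/success outcomes on PSF indicators yields exponentiated coefficients that act as multiplicative factors on the odds of error, which is the sense in which PSF contributions can be reported in a multiplicative form; stepwise selection can then be emulated by comparing such fits with and without each candidate PSF.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)

      # Synthetic human-error opportunities: two PSF surrogates (poor procedure quality,
      # low practice level) raise the log-odds of an execution error.  All values assumed.
      n = 600
      poor_procedure = rng.integers(0, 2, n)
      low_practice = rng.integers(0, 2, n)
      logit = -3.0 + 1.2 * poor_procedure + 0.8 * low_practice
      error = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      X = sm.add_constant(np.column_stack([poor_procedure, low_practice]))
      fit = sm.Logit(error, X).fit(disp=0)

      # exp(coefficient) = multiplicative change in the odds of error for each PSF.
      print(fit.params, np.exp(fit.params[1:]))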

  11. A Feasibility Study for Measuring Accurate Chest Compression Depth and Rate on Soft Surfaces Using Two Accelerometers and Spectral Analysis

    Directory of Open Access Journals (Sweden)

    Sofía Ruiz de Gauna

    2016-01-01

    Full Text Available Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin’s back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm on the foam and 1.7 mm on the sprung mattress (p<0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without backboard for foam and sprung, respectively (p<0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated for mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces.
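
    A minimal sketch of the spectral idea described above, under simplifying assumptions (ideal sinusoidal compressions, a made-up 250 Hz sampling rate, and a compression rate that falls exactly on an FFT bin): the displacement amplitude is recovered from the dominant acceleration component via division by (2*pi*f)^2, and the back (mattress) displacement is subtracted from the chest displacement.

```python
import numpy as np

FS = 250.0   # accelerometer sampling rate in Hz (assumed)

def depth_and_rate(acc, fs=FS):
    """Peak-to-peak displacement (m) and rate (cpm) of a near-sinusoidal motion,
    from the dominant component of its acceleration spectrum:
    for x(t) = X*sin(2*pi*f*t), the acceleration amplitude is (2*pi*f)**2 * X."""
    win = np.hanning(len(acc))
    spec = np.fft.rfft((acc - acc.mean()) * win)
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
    band = (freqs > 0.5) & (freqs < 5.0)          # plausible CPR rates: 30-300 cpm
    k = np.argmax(np.abs(spec) * band)
    acc_amp = 2.0 * np.abs(spec[k]) / win.sum()   # sinusoid amplitude at the peak bin
    disp_amp = acc_amp / (2.0 * np.pi * freqs[k]) ** 2
    return 2.0 * disp_amp, freqs[k] * 60.0

# Synthetic 2-second window: chest accelerometer sees sternum motion (70 mm p-p),
# back accelerometer sees mattress compression (20 mm p-p), both at 120 cpm.
t = np.arange(0, 2.0, 1.0 / FS)
f_cc = 120.0 / 60.0
chest_acc = (2 * np.pi * f_cc) ** 2 * 0.035 * np.sin(2 * np.pi * f_cc * t)
back_acc = (2 * np.pi * f_cc) ** 2 * 0.010 * np.sin(2 * np.pi * f_cc * t)

chest_pp, rate = depth_and_rate(chest_acc)
mattress_pp, _ = depth_and_rate(back_acc)
# Subtracting the mattress displacement avoids overestimating compression depth.
print(f"compression depth ~ {1000 * (chest_pp - mattress_pp):.1f} mm at {rate:.0f} cpm")
```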

  12. Low dose rate gamma ray induced loss and data error rate of multimode silica fibre links

    International Nuclear Information System (INIS)

    Breuze, G.; Fanet, H.; Serre, J.

    1993-01-01

    Fiber optics data transmission from numerous multiplexed sensors is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady-state gamma ray exposure is studied as part of a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber.

  13. Human error and the associated recovery probabilities for soft control being used in the advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operating environment of MCRs in NPPs has changed with the adoption of digital HSIs. • Most current HRA databases are not explicitly designed to deal with digital HSIs. • An empirical analysis for a new HRA DB using an advanced MCR mockup is carried out. • It is expected that the results can be used for advanced MCR HRA. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these studies focused on the conventional Main Control Room (MCR) environment. However, the operating environment of MCRs in NPPs has changed with the adoption of new human-system interfaces (HSIs) largely based on up-to-date digital technologies. The MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in advanced MCRs are performed by soft control. Due to the difference in interfaces between soft controls and hardwired conventional controls, different HEPs should be used in the HRA for advanced MCRs. Unfortunately, most current HRA databases deal with operations in conventional MCRs and are not explicitly designed to deal with digital Human System Interfaces (HSIs). For this reason, empirical human error and the associated error recovery probabilities were collected from the mockup of an advanced MCR equipped with soft controls. To this end, small-scale experiments were conducted in which 48 graduate students from the department of nuclear engineering at the Korea Advanced Institute of Science and Technology (KAIST) participated, and accident scenarios were designed with respect to the typical Design Basis Accidents (DBAs) in NPPs, such as Steam Generator Tube Rupture

  14. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
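
    For context, the key-rate model being criticized is commonly written in the following generic form (notation here is illustrative and assumed, not taken from the paper): the fraction of frames surviving error correction multiplies the usual reconciliation-efficiency expression.

```latex
% Hedged, generic CV-QKD key-rate model with a fixed-rate code of efficiency \beta
% and word (frame) error rate p_e; symbols are assumptions, not the paper's notation.
K = (1 - p_e)\left(\beta\, I_{AB} - \chi_{BE}\right)
% The paper's argument: if codes "as currently defined" can have \beta > 1 when
% p_e > 0, this expression can stay positive even when \chi_{BE} \ge I_{AB},
% i.e. over an entanglement-breaking channel, so the model needs revisiting.
```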

  15. Analysis of gross error rates in operation of commercial nuclear power stations

    International Nuclear Information System (INIS)

    Joos, D.W.; Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    Experience in operation of US commercial nuclear power plants is reviewed over a 25-month period. The reports accumulated in that period on events of human error and component failure are examined to evaluate gross operator error rates. The impact of such errors on plant operation and safety is examined through the use of proper taxonomies of error, tasks and failures. Four categories of human errors are considered, namely operator, maintenance, installation and administrative. The computed error rates are used to examine appropriate operator models for evaluation of operator reliability. Human error rates are found to be significant to a varying degree in both BWRs and PWRs. This emphasizes the importance of considering human factors in safety and reliability analysis of nuclear systems. The results also indicate that human errors, and especially operator errors, do indeed follow the exponential reliability model. (Auth.)
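
    As a small illustration of the exponential reliability model mentioned above (generic form with made-up numbers, not the study's actual rates): the error rate λ is estimated as errors per unit of task exposure, and the probability of error-free operation over an exposure t is R(t) = exp(-λt).

```python
import math

# Hypothetical counts: operator errors observed over a total task exposure.
errors_observed = 42
exposure_hours = 15000.0        # cumulative operating-crew task hours (assumed)

lam = errors_observed / exposure_hours   # errors per task-hour (Poisson-process MLE)

def reliability(t_hours, rate=lam):
    """Probability of completing t_hours of the task without an operator error."""
    return math.exp(-rate * t_hours)

for t in (1, 8, 24, 168):
    print(f"R({t:>3} h) = {reliability(t):.4f}")
```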

  16. Analyzing Reliability and Performance Trade-Offs of HLS-Based Designs in SRAM-Based FPGAs Under Soft Errors

    Science.gov (United States)

    Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.

    2017-02-01

    The increasing system complexity of FPGA-based hardware designs and the shortening of time-to-market have motivated the adoption of new design methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs, which can serve the needs of safety-critical applications that require both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture, with a small impact on the cross section (an increase of up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.

  17. Comparing sports vision among three groups of soft tennis adolescent athletes: Normal vision, refractive errors with and without correction

    Directory of Open Access Journals (Sweden)

    Shih-Tsun Chang

    2015-01-01

    Full Text Available Background: The effect of correcting static vision on sports vision is still not clear. Aim: To examine whether sports vision (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differed among adolescent soft tennis athletes with normal vision (Group A) and with refractive error corrected with (Group B) or without eyeglasses (Group C). Setting and Design: A cross-sectional study was conducted. Soft tennis athletes aged 10–13 who had played soft tennis for 2–5 years, and who were without any ocular diseases and without visual training for the past 3 months, were recruited. Materials and Methods: DP was measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. Statistical Analysis: Chi-square and Kruskal–Wallis tests were used to compare the data among the three study groups. Results: A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, and MV among the three study groups. Conclusions: Significantly better DP and PV were seen among adolescent soft tennis athletes with normal vision than among those with refractive error, regardless of whether eyeglass correction was worn. On the other hand, DVA, EM, and MV were similar among the three study groups.

  18. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets zero error (3.4 errors per million events). The five main principles of Six Sigma are defining, measuring, analyzing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytic, analytic and post-analytical phases was analysed. Improvement strategies were proposed at the monthly intradepartmental meetings, and control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory enabled a reduction of the error rates mainly in the pre-analytic and analytic phases.
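
    For reference, the conventional mapping from an observed defect rate to a Six Sigma level, including the customary 1.5-sigma shift, can be sketched as below; under this convention the 3.4-errors-per-million target quoted above corresponds to 6 sigma. The second example uses a hypothetical opportunity count, not the laboratory's actual denominator.

```python
from scipy.stats import norm

def sigma_level(defects, opportunities):
    """Short-term sigma level from a long-term defect rate (1.5-sigma shift convention)."""
    dpmo = 1e6 * defects / opportunities
    return norm.ppf(1.0 - dpmo / 1e6) + 1.5, dpmo

print(sigma_level(3.4, 1e6))      # 3.4 DPMO  -> roughly 6.0 sigma
print(sigma_level(107, 1.5e6))    # hypothetical denominator of 1.5 million opportunities
```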

  19. A Smart Soft Sensor Predicting Feedwater Flow Rate

    International Nuclear Information System (INIS)

    Yang, Heon Young; Na, Man Gyun

    2009-01-01

    Since we evaluate thermal nuclear reactor power with secondary system calorimetric calculations based on feedwater flow rate measurements, we need to measure the feedwater flow rate accurately. The Venturi flow meters that are being used to measure the feedwater flow rate in most pressurized water reactors (PWRs) measure the flow rate by developing a differential pressure across a physical flow restriction. The differential pressure is then multiplied by a calibration factor that depends on various flow conditions in order to calculate the feedwater flow rate. The calibration factor is determined by the feedwater temperature and pressure. However, Venturi meters cause a buildup of corrosion products near the orifice of the meter. This fouling increases the measured pressure drop across the meter, thereby causing an overestimation of the feedwater flow rate
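
    A minimal sketch of the dependence described above, in generic textbook form with made-up numbers rather than plant data: the inferred mass flow scales with the square root of the measured differential pressure times a calibration factor K(T, P), so fouling that inflates the pressure drop inflates the inferred feedwater flow and hence the calculated thermal power.

```python
import math

def feedwater_flow(dp_kpa, calib_factor):
    """Venturi mass flow: m_dot ~ K(T, P) * sqrt(differential pressure)."""
    return calib_factor * math.sqrt(dp_kpa)

K = 25.0                        # kg/s per sqrt(kPa), hypothetical calibration factor
dp_true = 400.0                 # kPa, differential pressure for a clean meter
dp_fouled = dp_true * 1.02      # fouling inflates the measured pressure drop by 2%

overestimate = feedwater_flow(dp_fouled, K) / feedwater_flow(dp_true, K) - 1.0
print(f"indicated flow is {100 * overestimate:.2f}% high")   # ~1.0%, and so is thermal power
```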

  20. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help to eliminate the risk of making a bad decision in the decision-making process.
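
    A compact sketch of the hybrid idea on synthetic data (not the paper's USD/CAD series): an RBF network with fixed Gaussian centres and a least-squares readout produces one-step forecasts, and a moving average of recent forecast errors is added as a correction. The genetic-algorithm parameter search used by the authors is omitted here; all sizes and widths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily exchange-rate series standing in for the real data.
t = np.arange(600)
rate = 1.30 + 0.02 * np.sin(2 * np.pi * t / 90) + 0.003 * rng.standard_normal(600)

def make_xy(series, lags=5):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

X, y = make_xy(rate)
split = 500
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

# RBF network: fixed Gaussian centres (random training rows) + linear readout.
centres = Xtr[rng.choice(len(Xtr), 20, replace=False)]
width = np.median(np.linalg.norm(Xtr[:, None] - centres[None], axis=2))

def hidden(X):
    d = np.linalg.norm(X[:, None] - centres[None], axis=2)
    return np.exp(-(d / width) ** 2)

H = np.column_stack([hidden(Xtr), np.ones(len(Xtr))])
w, *_ = np.linalg.lstsq(H, ytr, rcond=None)

def predict(X):
    return np.column_stack([hidden(X), np.ones(len(X))]) @ w

# Moving-average correction: add the recent mean of one-step forecast errors.
window = 10
errors = ytr - predict(Xtr)
preds, hist = [], list(errors[-window:])
for x, target in zip(Xte, yte):
    base = predict(x[None])[0]
    preds.append(base + np.mean(hist[-window:]))
    hist.append(target - base)

rmse_plain = np.sqrt(np.mean((yte - predict(Xte)) ** 2))
rmse_hybrid = np.sqrt(np.mean((yte - np.array(preds)) ** 2))
print(f"RMSE plain RBF {rmse_plain:.5f}  vs  RBF+MA {rmse_hybrid:.5f}")
```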

  1. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    Full Text Available This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help to eliminate the risk of making a bad decision in the decision-making process.

  2. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Science.gov (United States)

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help to eliminate the risk of making a bad decision in the decision-making process. PMID:26977450

  3. Efficient Error Detection in Soft Data Fusion for Cooperative Spectrum Sensing

    KAUST Repository

    Saqib Bhatti, Dost Muhammad

    2018-03-18

    The primary objective of cooperative spectrum sensing (CSS) is to determine whether a particular spectrum is occupied by a licensed user or not, so that unlicensed users, called secondary users (SUs), can utilize that spectrum if it is not occupied. For CSS, all SUs report their sensing information through a reporting channel to the central base station, called the fusion center (FC). During transmission, some of the SUs are subjected to fading and shadowing, which degrades the overall performance of CSS. We propose an algorithm which applies an error detection technique to the sensing measurements of all SUs. Each SU is required to re-transmit its sensing data to the FC if an error is detected in it. The proposed algorithm combines the sensing measurements of a limited number of SUs. Using the proposed algorithm, we achieve improved probability of detection (PD) and throughput. The simulation results compare the proposed algorithm with the conventional scheme.
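
    The toy simulation below illustrates the flavour of the scheme under assumed parameters (SU count, SNR range and corruption probability are all made up): each SU reports an energy statistic, reports flagged as erroneous are re-transmitted, and only a limited number of the most reliable SUs are fused before thresholding. A real system would detect reporting errors with a CRC rather than an explicit flag.

```python
import numpy as np

rng = np.random.default_rng(2)
N_SU, SAMPLES, PU_PRESENT = 12, 500, True

def sense(pu_present):
    """Each SU computes an energy statistic over its sensing window (unit noise power)."""
    snr = rng.uniform(0.05, 0.5)
    signal = np.sqrt(snr) * rng.standard_normal(SAMPLES) if pu_present else 0.0
    samples = signal + rng.standard_normal(SAMPLES)
    return np.mean(samples ** 2), snr

def report(value, corrupt_prob=0.2):
    """Reporting channel: with some probability the report arrives corrupted.
    A real system would flag this with a CRC; here the flag is returned explicitly."""
    corrupted = rng.random() < corrupt_prob
    return (value + rng.normal(0.0, 5.0)) if corrupted else value, corrupted

reports = []
for _ in range(N_SU):
    energy, snr = sense(PU_PRESENT)
    rx, bad = report(energy)
    while bad:                          # FC detected an error -> SU re-transmits
        rx, bad = report(energy)
    reports.append((snr, rx))

# Fuse only a limited number of the most reliable (highest-SNR) SUs.
reports.sort(reverse=True)
fused = np.mean([rx for _, rx in reports[:6]])

threshold = 1.1                         # energy threshold above the unit noise floor (assumed)
print("decision:", "PU present" if fused > threshold else "spectrum free", f"({fused:.3f})")
```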

  4. Computing in the presence of soft bit errors. [caused by single event upset on spacecraft

    Science.gov (United States)

    Rasmussen, R. D.

    1984-01-01

    It is shown that single-event-upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.

  5. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over a general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  6. Soft errors in 10-nm-scale magnetic tunnel junctions exposed to high-energy heavy-ion radiation

    Science.gov (United States)

    Kobayashi, Daisuke; Hirose, Kazuyuki; Makino, Takahiro; Onoda, Shinobu; Ohshima, Takeshi; Ikeda, Shoji; Sato, Hideo; Inocencio Enobio, Eli Christopher; Endoh, Tetsuo; Ohno, Hideo

    2017-08-01

    The influences of various types of high-energy heavy-ion radiation on 10-nm-scale CoFeB-MgO magnetic tunnel junctions with a perpendicular easy axis have been investigated. In addition to possible latent damage, which has already been pointed out in previous studies, high-energy heavy-ion bombardments demonstrated that the magnetic tunnel junctions may exhibit clear flips between their high- and low-resistance states designed for a digital bit 1 or 0. It was also demonstrated that flipped magnetic tunnel junctions still may provide proper memory functions such as read, write, and hold capabilities. These two findings proved that high-energy heavy ions can produce recoverable bit flips in magnetic tunnel junctions, i.e., soft errors. Data analyses suggested that the resistance flips stem from magnetization reversals of the ferromagnetic layers and that each of them is caused by a single strike of heavy ions. It was concurrently found that an ion strike does not always result in a flip, suggesting a stochastic process behind the flip. Experimental data also showed that the flip phenomenon is dependent on the device and heavy-ion characteristics. Among them, the diameter of the device and the linear energy transfer of the heavy ions were revealed as the key parameters. From their dependences, the physical mechanism behind the flip was discussed. It is likely that a 10-nm-scale ferromagnetic disk loses its magnetization due to a local temperature increase induced by a single strike of heavy ions; this demagnetization is followed by a cooling period associated with a possible stochastic recovery process. On the basis of this hypothesis, a simple analytical model was developed, and it was found that the model accounts for the results reasonably well. This model also predicted that magnetic tunnel junctions provide sufficiently high soft-error reliability for use in space, highlighting their advantage over their counterpart conventional semiconductor memories.

  7. Soft X ray spectrometry at high count rates

    International Nuclear Information System (INIS)

    Blanc, P.; Brouquet, P.; Uhre, N.

    1978-06-01

    Two modifications of the classical method of X-ray spectrometry by a semi-conductor diode permit a count rate of 10^5 c/s with an energy resolution of 350 eV. With a specially constructed pulse height analyzer, this detector can measure four spectra of 5 ms each, in the range of 1-30 keV, during a plasma shot.

  8. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    Science.gov (United States)

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  9. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    Science.gov (United States)

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  10. Bit Error Rate Minimizing Channel Shortening Equalizers for Single Carrier Cyclic Prefixed Systems

    National Research Council Canada - National Science Library

    Martin, Richard K; Vanbleu, Koen; Ysebaert, Geert

    2007-01-01

    .... Previous work on channel shortening has largely been in the context of digital subscriber lines, a wireline system that allows bit allocation, thus it has focused on maximizing the bit rate for a given bit error rate (BER...

  11. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    rate. As the need for 100 Gigabit Ethernet equipment rises, so does the need for equipment, which can properly test these systems during development, deployment and use. This paper presents early results from a work-in-progress academia-industry collaboration project and elaborates on the challenges...

  12. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency.

    Science.gov (United States)

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed.

  13. Dispensing error rate after implementation of an automated pharmacy carousel system.

    Science.gov (United States)

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  14. Double symbol error rates for differential detection of narrow-band FM

    Science.gov (United States)

    Simon, M. K.

    1985-01-01

    This paper evaluates the double symbol error rate (average probability of two consecutive symbol errors) in differentially detected narrow-band FM. Numerical results are presented for the special case of MSK with a Gaussian IF receive filter. It is shown that, not unlike similar results previously obtained for the single error probability of such systems, large inaccuracies in predicted performance can occur when intersymbol interference is ignored.

  15. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  16. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  17. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  18. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  19. A rate-jump method for characterization of soft tissues using nanoindentation techniques

    KAUST Repository

    Tang, Bin

    2012-01-01

    The biomechanical properties of soft tissues play an important role in their normal physiological and physical function, and may possibly relate to certain diseases. The advent of nanomechanical testing techniques, such as atomic force microscopy (AFM), nano-indentation and optical tweezers, enables the nano/micro-mechanical properties of soft tissues to be investigated, but in spite of the fact that biological tissues are highly viscoelastic, traditional elastic contact theory has been routinely used to analyze experimental data. In this article, a novel rate-jump protocol for treating viscoelasticity in nanomechanical data analysis is described. © 2012 The Royal Society of Chemistry.

  20. Modelling hard and soft states of Cygnus X-1 with propagating mass accretion rate fluctuations

    Science.gov (United States)

    Rapisarda, S.; Ingram, A.; van der Klis, M.

    2017-12-01

    We present a timing analysis of three Rossi X-ray Timing Explorer observations of the black hole binary Cygnus X-1 with the propagating mass accretion rate fluctuations model PROPFLUC. The model simultaneously predicts power spectra, time lags and coherence of the variability as a function of energy. The observations cover the soft and hard states of the source, and the transition between the two. We find good agreement between model predictions and data in the hard and soft states. Our analysis suggests that in the soft state the fluctuations propagate in an optically thin hot flow extending up to large radii above and below a stable optically thick disc. In the hard state, our results are consistent with a truncated disc geometry, where the hot flow extends radially inside the inner radius of the disc. In the transition from soft to hard state, the characteristics of the rapid variability are too complex to be successfully described with PROPFLUC. The surface density profile of the hot flow predicted by our model and the lack of quasi-periodic oscillations in the soft and hard states suggest that the spin of the black hole is aligned with the inner accretion disc and therefore probably with the rotational axis of the binary system.

  1. Classification based upon gene expression data: bias and precision of error rates.

    Science.gov (United States)

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
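
    The recommendation of two-level external cross-validation, plus the permutation check for bias, can be sketched as follows on synthetic data standing in for an expression matrix (classifier and parameter grid are arbitrary choices, not the paper's).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Stand-in for an expression matrix: many predictors, few samples.
X, y = make_classification(n_samples=60, n_features=500, n_informative=10, random_state=0)

inner = StratifiedKFold(5, shuffle=True, random_state=0)   # tunes hyper-parameters
outer = StratifiedKFold(5, shuffle=True, random_state=1)   # estimates the error rate

model = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]}, cv=inner)

# Two-level (nested) external cross-validation: tuning never sees the outer test folds,
# so the reported error is not optimistically biased by model selection.
acc = cross_val_score(model, X, y, cv=outer)
print(f"error estimate: {1 - acc.mean():.3f}")

# Permutation check: with labels shuffled the estimated error should sit near the
# baseline rate of a trivial classifier; an estimate clearly below that signals bias.
perm_err = [1 - cross_val_score(model, X, rng.permutation(y), cv=outer).mean()
            for _ in range(5)]
print(f"permutation-mean error (should be ~0.5 here): {np.mean(perm_err):.3f}")
```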

  2. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

    2012-01-01

    In this paper, we derived an efficient simulation method to evaluate the error rate of wireless communication system. Coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

  3. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)

  4. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  5. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yip, S; Rottmann, J; Berbeco, R [Brigham and Women's Hospital, Boston, MA (United States)]

    2014-06-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.39Hz can substantially reduce the

  6. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    International Nuclear Information System (INIS)

    Yip, S; Rottmann, J; Berbeco, R

    2014-01-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.39Hz can substantially reduce the

  7. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on these effects. We produced a point source with 99mTcO4- of 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images of 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame with 10-second intervals in the first experiment (p>0.05). However, the average count rates were significantly lower for the sources over 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression analysis of the count rates of the 5 gamma cameras acquired during 90 minutes in the second experiment, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. No abnormal fluctuation was found in the χ2 test of the count rates (p>0.02), and homogeneity of variance among the gamma cameras was found in Levene's F-test (p>0.05). In the correlation analysis, the only significant correlation was a negative correlation between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, according to the calculation of the T1/2 error from a change of gradient from -0.25% to +0.25%, the error increases as T1/2 becomes longer or as the gradient becomes higher. When estimating the value for the fourth camera, which had the highest gradient, no T1/2 error was seen within 60 minutes at that value. In conclusion, it is necessary to manage the quality of radiation measurements of scintillation gamma cameras in the medical field rigorously. Especially, we found a

  8. A navigation system for percutaneous needle interventions based on PET/CT images: design, workflow and error analysis of soft tissue and bone punctures.

    Science.gov (United States)

    Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan

    2011-01-01

    Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well

  9. Safe and effective error rate monitors for SS7 signaling links

    Science.gov (United States)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. A SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIM's are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIM's can be differentiated from SUERM by the fact that EIM's monitor errors over an interval while SUERM's count errored messages. EIM's offer several advantages over SUERM's, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
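
    A toy version of the interval-based idea (the filter structure and all coefficients here are illustrative, not the engineered SS7 values): a first-order recursive filter accumulates per-interval error counts into an estimated changeover transient, and changeover is initiated when the estimate crosses a threshold T.

```python
def error_interval_monitor(errors_per_interval, decay=0.9, gain=1.0, threshold=8.0):
    """First-order recursive estimate of the changeover transient.

    estimate[k] = decay * estimate[k-1] + gain * errors[k]
    The link is removed from service when the estimate exceeds `threshold`.
    All coefficients are illustrative, not engineered SS7 values.
    """
    estimate = 0.0
    for k, e in enumerate(errors_per_interval):
        estimate = decay * estimate + gain * e
        if estimate > threshold:
            return k              # interval index at which changeover is initiated
    return None

# A short error burst trips the monitor; isolated errors decay away and do not.
burst = [0, 0, 1, 0, 0, 4, 5, 3, 0, 0]
sparse = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]
print("burst trips at interval:", error_interval_monitor(burst))    # 6
print("sparse errors trip at:", error_interval_monitor(sparse))     # None
```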

  10. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  11. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  12. He flow rate measurements on the engineering model for the Astro-H Soft X-ray Spectrometer dewar

    Science.gov (United States)

    Mitsuishi, I.; Ezoe, Y.; Ishikawa, K.; Ohashi, T.; Fujimoto, R.; Mitsuda, K.; Tsunematsu, S.; Yoshida, S.; Kanao, K.; Murakami, M.; DiPirro, M.; Shirron, P.

    2014-11-01

    The sixth Japanese X-ray astronomy satellite, Astro-H, will be launched in 2015. The Soft X-ray Spectrometer onboard Astro-H is a 6 × 6 X-ray microcalorimeter array that provides a high energy resolution; its liquid helium coolant must last for more than 3 years, which consequently requires that the vapor flow rate out of the helium tank be very small. Porous plug and knife edge devices retain the liquid helium under zero gravity and safely vent the small amount of helium vapor. We measured helium mass flow rates from the helium tank equipped in the engineering model dewar. We tilted the dewar at an angle of 75° so that one side of the porous plug located at the top of the helium tank is in contact with the liquid helium, and the porous plug separates the liquid and vapor helium by the thermomechanical effect. Helium mass flow rates were measured at helium tank temperatures of 1.3, 1.5 and 1.9 K. We confirmed that the resultant mass flow rates agree with, or are lower than, the component test results within the systematic error, and that they achieve all the requirements. The film flow suppression also worked normally. Therefore, we concluded that the SXS helium vent system performs satisfactorily when integrated into the dewar.

  13. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.
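
    A heavily simplified toy of the error-injection idea, not the authors' regression model: extra errors are injected at known rates, the resulting precision against a trusted comparator is regressed on the injected rate, and the fitted intercept recovers the hidden baseline error rate.

```python
import numpy as np

rng = np.random.default_rng(4)

TRUE_BASELINE = 0.25        # hidden per-annotation error rate we try to recover
N = 20000                   # annotation pairs (annotation vs trusted comparator)

def measured_precision(injected_rate):
    """Fraction of annotations still agreeing with their comparator after
    flipping an extra `injected_rate` of them to a wrong term."""
    correct = rng.random(N) > TRUE_BASELINE            # baseline errors (unknown to us)
    corrupted = rng.random(N) < injected_rate          # errors we add on purpose
    return np.mean(correct & ~corrupted)

injected = np.array([0.0, 0.05, 0.10, 0.15, 0.20, 0.25])
precision = np.array([measured_precision(r) for r in injected])

# Under this toy model, precision ~ (1 - baseline) * (1 - injected_rate), which is
# linear in the injected rate; the fitted value at injected_rate = 0 recovers
# (1 - baseline), hence the baseline annotation error rate.
slope, intercept = np.polyfit(injected, precision, 1)
print(f"estimated baseline error rate: {1 - intercept:.3f} (true {TRUE_BASELINE})")
```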

  14. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    Science.gov (United States)

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  15. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
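
    As a rough intuition for the r-power quantity (the probability of rejecting at least r of the false null hypotheses), the Monte Carlo sketch below estimates it for a single-step Bonferroni procedure with equicorrelated one-sided z-tests. The effect sizes, correlation and procedure are illustrative assumptions; this is not a substitute for the closed-form formulas or the rPowerSampleSize package.

        import numpy as np
        from scipy.stats import norm

        def r_power_bonferroni(effects, n_per_group, r, alpha=0.05, rho=0.3,
                               n_sim=20000, seed=1):
            """Monte Carlo estimate of the r-power, i.e. P(reject at least r false
            null hypotheses), for a single-step Bonferroni procedure with one-sided
            z-tests and equicorrelated endpoints (illustrative setting only)."""
            rng = np.random.default_rng(seed)
            effects = np.asarray(effects, dtype=float)
            m = len(effects)
            # One-sided two-sample z statistics: mean shift = effect * sqrt(n/2).
            ncp = effects * np.sqrt(n_per_group / 2.0)
            cov = rho * np.ones((m, m)) + (1.0 - rho) * np.eye(m)
            z = rng.multivariate_normal(ncp, cov, size=n_sim)
            z_crit = norm.ppf(1.0 - alpha / m)          # Bonferroni critical value
            false_nulls = effects > 0                   # endpoints with a real effect
            n_rejected = (z[:, false_nulls] > z_crit).sum(axis=1)
            return (n_rejected >= r).mean()

        if __name__ == "__main__":
            # Example: m = 4 endpoints, three with a true effect, require r = 2.
            print(r_power_bonferroni(effects=[0.3, 0.3, 0.4, 0.0],
                                     n_per_group=150, r=2))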

  16. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as ''simulated measurements'' (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called ''false negatives.'' The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa

  17. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    International Nuclear Information System (INIS)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-01-01

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as ''simulated measurements'' (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called ''false negatives.'' The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between

  18. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need of phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  19. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting.

    Science.gov (United States)

    Strahan, Rodney H; Schneider-Kolsky, Michal E

    2010-10-01

    Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Fifty MRI reports generated by VR and 50 finalized MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Forty-two % and 30% of the finalized VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR. © 2010 The Authors. Journal of Medical Imaging and Radiation Oncology © 2010 The Royal Australian and New Zealand College of Radiologists.

  20. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting

    International Nuclear Information System (INIS)

    Strahan, Rodney H.; Schneider-Kolsky, Michal E.

    2010-01-01

    Full text: Purpose: Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Methods: Fifty MRI reports generated by VR and 50 finalised MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Results: Forty-two % and 30% of the finalised VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Conclusion: Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR.

  1. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    International Nuclear Information System (INIS)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-01-01

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is used in preparing the next ancillary state and in tuning the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.

  2. Analytical expression for the bit error rate of cascaded all-optical regenerators

    DEFF Research Database (Denmark)

    Mørk, Jesper; Öhman, Filip; Bischoff, S.

    2003-01-01

    We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.

  3. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed

  4. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference

    KAUST Repository

    Soury, Hamza

    2017-07-31

    This paper develops a mathematical paradigm to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). Particularly, we study the dominant intra-cell interferer problem that appears between HD users scheduled on the same FD-channel. The distribution of the dominant interference is first characterized via its distribution function, which is derived in closed-form. Assuming Nakagami-m fading, the probability of error for different modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function of the signal-to-interference-ratio when compared to an idealized HD interference and noise free BS operation.

  5. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    Science.gov (United States)

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of
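
    The error-injection idea can be sketched in a few lines: simulate a surveillance-like dataset, randomly corrupt a fraction of one covariate, and compare the crude mortality rate ratio before and after corruption. All quantities below are invented for illustration; the study itself used the Butajira dataset and Poisson regression models.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 50_000                                    # individuals under surveillance
        person_years = rng.uniform(0.5, 10.0, n)      # follow-up time
        literate = rng.random(n) < 0.4                # illustrative covariate
        base_rate = 0.012                             # deaths per person-year
        rate = np.where(literate, 0.6 * base_rate, base_rate)   # literacy lowers risk
        deaths = rng.poisson(rate * person_years)

        def rate_ratio(flag):
            """Crude mortality rate ratio, non-literate vs literate."""
            r1 = deaths[~flag].sum() / person_years[~flag].sum()
            r0 = deaths[flag].sum() / person_years[flag].sum()
            return r1 / r0

        print("rate ratio, clean data:", round(rate_ratio(literate), 3))

        # Randomly flip 5% of the literacy records to mimic data-capture errors.
        corrupt = literate.copy()
        flip = rng.random(n) < 0.05
        corrupt[flip] = ~corrupt[flip]
        print("rate ratio, 5% errors :", round(rate_ratio(corrupt), 3))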

  6. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    Full Text Available Abstract Background As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion The low sensitivity of parameter

  7. High dose rate brachytherapy for the treatment of soft tissue sarcoma of the extremity

    International Nuclear Information System (INIS)

    Speight, J.L.; Streeter, O.E.; Chawla, S.; Menendez, L.E.

    1996-01-01

    Purpose: We examined the role of preoperative neoadjuvant chemoradiation and adjuvant high-dose rate brachytherapy on the management of prognostically unfavorable soft tissue sarcomas of the extremities. Our goal was to examine the effect of high dose rate interstitial brachytherapy (HDR IBT) on reducing the risk of local recurrence following limb-sparing resection, as well as shortening treatment duration. Materials and methods: Eleven patients, ranging in age from 31 to 73 years old, with soft tissue sarcoma of the extremity were treated at USC/Norris Comprehensive Cancer Center during 1994 and 1995. All patients had biopsy proven soft tissue sarcoma, and all were suitable candidates for limb-sparing surgery. All lesions were greater than 5 cm in size and were primarily high grade. Tumor histologies included malignant fibrous histiocytoma (45%), liposarcoma (18%) and leiomyosarcoma, synovial cell sarcoma and spindle cell sarcoma (36%). Sites of tumor origin were the lower extremity (55%), upper extremity (18%) and buttock (9%); 1 patient (9%) had lesions in both the upper and lower extremity. Patients received HDR IBT following combined chemotherapy and external beam irradiation (EBRT) and en bloc resection of the sarcoma. Neoadjuvant chemotherapy consisted of three to four cycles of either Ifosfamide/Mesna with or without Adriamycin, or Mesna, Adriamycin, Ifosfamide and Dacarbazine. One patient received Cis-platin in addition to Ifos/Adr. A minimum of two cycles of chemotherapy were administered prior to EBRT. Additional cycles of chemotherapy were completed concurrently with EBRT but prior to HDR IBT. Preoperative EBRT doses ranging from 40 to 59.4 Gy were given in daily fractions of 180 to 200 cGy. Following en bloc resection, HDR IBT was administered using the Omnitron 2000 remote afterloading system. Doses ranging from 13 to 30 Gy were delivered to the surgical tumor bed at depths of 0.5 mm to 0.75 mm from the radioactive source. Results: median follow-up was

  8. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    Science.gov (United States)

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  9. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai

    2014-01-01

    be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long

  10. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection

  11. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function

  12. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
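
    The mechanism behind the inflation can be illustrated with a deliberately naive two-arm, two-stage simulation in which the second-stage sample size is chosen from the interim z value but the final test pools all data as if the design had been fixed. The resizing rule, sample sizes and one-sided level below are illustrative assumptions, not the worst-case search of the paper.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(7)
        n_sim, n1 = 200_000, 50          # stage-1 sample size per arm
        alpha, z_crit = 0.025, norm.ppf(0.975)

        # Under H0 the stage-wise z statistics are independent standard normals.
        z1 = rng.standard_normal(n_sim)

        # A naive, data-driven resizing rule (illustrative only): stop adding data
        # when the interim result already looks significant, expand when the trend
        # is promising, keep the planned size otherwise.
        n2 = np.select([z1 > z_crit, z1 > 0], [2, 400], default=50)

        z2 = rng.standard_normal(n_sim)
        # Final z computed as if the trial had used a fixed design of size n1 + n2.
        z_final = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)

        print(f"nominal one-sided level : {alpha:.3f}")
        print(f"actual rejection rate   : {(z_final > z_crit).mean():.3f}")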

  13. Calculating Error Percentage in Using Water Phantom Instead of Soft Tissue Concerning 103Pd Brachytherapy Source Distribution via Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    OL Ahmadi

    2015-12-01

    Full Text Available Introduction: 103Pd is a low energy source used in brachytherapy. According to the standards of the American Association of Physicists in Medicine, determining the dosimetric parameters of brachytherapy sources before clinical application is considered significantly important. Therefore, the present study aimed to compare the dosimetric parameters of the source obtained with a water phantom and with soft tissue. Methods: Following the TG-43U1 protocol, the dosimetric parameters around the 103Pd source were compared for a water phantom with a density of 0.998 g/cm3 and for soft tissue with a density of 1.04 g/cm3, on the longitudinal and transverse axes, using the MCNP4C code, and the relative differences between the two conditions were calculated. Results: The simulation results indicated that the dosimetric parameters that depend on the radial dose function and the anisotropy function show good consistency between the water phantom and soft tissue up to a distance of 1.5 cm from the source. With increasing distance the difference grows, reaching 4% at 6 cm from the source. Conclusions: The results for the soft tissue phantom compared with those for the water phantom showed a 4% relative difference at a distance of 6 cm from the source. Therefore, the water phantom results can be used in practical applications instead of soft tissue with a maximum error of 4%. Moreover, the differences obtained at each distance when using the soft tissue phantom could be corrected for.

  14. Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.

    Science.gov (United States)

    Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E

    2011-01-01

    Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. Increasing the error rate during

  15. Performance Analysis for Bit Error Rate of DS-CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available Minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) as the number of users (receivers) varies. Minimum energy coding exploits redundant bits to save power, using an RF link and on-off keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR for a given number of users (receivers).

  16. FPGA-based Bit-Error-Rate Tester for SEU-hardened Optical Links

    CERN Document Server

    Detraz, S; Moreira, P; Papadopoulos, S; Papakonstantinou, I; Seif El Nasr, S; Sigaud, C; Soos, C; Stejskal, P; Troska, J; Versmissen, H

    2009-01-01

    The next generation of optical links for future High-Energy Physics experiments will require components qualified for use in radiation-hard environments. To cope with radiation induced single-event upsets, the physical layer protocol will include Forward Error Correction (FEC). Bit-Error-Rate (BER) testing is a widely used method to characterize digital transmission systems. In order to measure the BER with and without the proposed FEC, simultaneously on several devices, a multi-channel BER tester has been developed. This paper describes the architecture of the tester, its implementation in a Xilinx Virtex-5 FPGA device and discusses the experimental results.

  17. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan; Alouini, Mohamed-Slim; Aissa, Sonia

    2012-01-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  18. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2012-05-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  19. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2014-04-01

    The ever growing demand of higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principles of minimum bit error rate (BER), and is referred as joint bit error rate (JBER) detector. The BER performance of JBER detector is superior to all the proposed linear detectors such as channel inversion, the maximal ratio combining, the biased maximum likelihood detectors, and the minimum mean square error. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.

  20. The assessment of cognitive errors using an observer-rated method.

    Science.gov (United States)

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  1. State sales tax rates for soft drinks and snacks sold through grocery stores and vending machines, 2007.

    Science.gov (United States)

    Chriqui, Jamie F; Eidson, Shelby S; Bates, Hannalori; Kowalczyk, Shelly; Chaloupka, Frank J

    2008-07-01

    Junk food consumption is associated with rising obesity rates in the United States. While a "junk food" specific tax is a potential public health intervention, a majority of states already impose sales taxes on certain junk food and soft drinks. This study reviews the state sales tax variance for soft drinks and selected snack products sold through grocery stores and vending machines as of January 2007. Sales taxes vary by state, intended retail location (grocery store vs. vending machine), and product. Vended snacks and soft drinks are taxed at a higher rate than grocery items and other food products, generally, indicative of a "disfavored" tax status attributed to vended items. Soft drinks, candy, and gum are taxed at higher rates than are other items examined. Similar tax schemes in other countries and the potential implications of these findings relative to the relationship between price and consumption are discussed.

  2. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    Science.gov (United States)

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  3. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
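
    The singleton-exclusion idea amounts to a one-line change to Watterson's estimator: because non-singleton segregating sites have expectation θ(a_n - 1), their count is divided by a_n - 1 instead of dividing all segregating sites by a_n. A minimal sketch on a toy alignment (no missing data, no back-mutation) follows.

        from collections import Counter

        def watterson_theta(seqs, exclude_singletons=False):
            """Watterson-type estimate of theta from aligned sequences.

            With exclude_singletons=True, sites where the rarer base occurs exactly
            once are ignored and the divisor becomes a_n - 1, following the idea of
            discarding sites most likely to carry random sequencing errors.
            """
            n = len(seqs)
            a_n = sum(1.0 / i for i in range(1, n))          # harmonic number a_n
            s = 0
            for column in zip(*seqs):                        # iterate over sites
                counts = Counter(column)
                if len(counts) < 2:
                    continue                                 # monomorphic site
                if exclude_singletons and min(counts.values()) == 1:
                    continue                                 # skip singleton sites
                s += 1
            divisor = a_n - 1.0 if exclude_singletons else a_n
            return s / divisor

        if __name__ == "__main__":
            sample = ["ACGTACGTAC",
                      "ACGTACGTAT",   # singleton at the last site
                      "ACGAACGTAC",
                      "ACGAACGTAC"]
            print(watterson_theta(sample))                    # standard Watterson
            print(watterson_theta(sample, exclude_singletons=True))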

  4. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4-5) of all compared methods.

  5. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  6. Symbol error rate performance evaluation of the LM37 multimegabit telemetry modulator-demodulator unit

    Science.gov (United States)

    Malek, H.

    1981-01-01

    The LM37 multimegabit telemetry modulator-demodulator unit was tested for evaluation of its symbol error rate (SER) performance. Using an automated test setup, the SER tests were carried out at various symbol rates and signal-to-noise ratios (SNR), ranging from +10 to -10 dB. With the aid of a specially designed error detector and a stabilized signal and noise summation unit, measurement of the SER at low SNR was possible. The results of the tests show that at symbol rates below 20 megasymbols per second (MS/s) and input SNR above -6 dB, the SER performance of the modem is within the specified 0.65 to 1.5 dB of the theoretical error curve. At symbol rates above 20 MS/s, the specification is met at SNRs down to -2 dB. The results of the SER tests are presented with the description of the test setup and the measurement procedure.

  7. Effect of antiseptic irrigation on infection rates of traumatic soft tissue wounds: a longitudinal cohort study.

    Science.gov (United States)

    Roth, B; Neuenschwander, R; Brill, F; Wurmitzer, F; Wegner, C; Assadian, O; Kramer, A

    2017-03-02

    Acute traumatic wounds are contaminated with bacteria and therefore an infection risk. Antiseptic wound irrigation before surgical intervention is routinely performed for contaminated wounds. However, a broad variety of different irrigation solutions are in use. The aim of this retrospective, non-randomised, controlled longitudinal cohort study was to assess the preventive effect of four different irrigation solutions before surgical treatment, on wound infection in traumatic soft tissue wounds. Over a period of three decades, the prophylactic application of wound irrigation was studied in patients with contaminated traumatic wounds requiring surgical treatment, with or without primary wound closure. The main outcome measure was development of wound infection. From 1974-1983, either 0.04 % polihexanide (PHMB), 1 % povidone-iodine (PVP-I), 4 % hydrogen peroxide, or undiluted Ringer's solution were concurrently in use. From 1984-1996, only 0.04 % PHMB or 1 % PVP-I were applied. From 1997, 0.04 % PHMB was used until the end of the study period in 2005. The combined rate for superficial and deep wound infection was 1.7 % in the 0.04 % PHMB group (n=3264), 4.8 % in the 1 % PVP-I group (n=2552), 5.9 % in the Ringer's group (n=645), and 11.7 % in the 4 % hydrogen peroxide group (n=643). Compared with all other treatment arms, PHMB showed the highest efficacy in preventing infection in traumatic soft tissue wounds (p<0.001). However, compared with PVP-I, the difference was only significant for superficial infections. The large patient numbers in this study demonstrated a robust superiority of 0.04 % PHMB to prevent infection in traumatic soft tissue wounds. These retrospective results may further provide important information as the basis for power calculations for the urgently needed prospective clinical trials in the evolving field of wound antisepsis.

  8. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system with imperfect channel phase recovery is considered. The results presented demonstrate the system performance under a realistic Nakagami-m fading and additive white Gaussian noise channel. The accuracy of the obtained results is verified by running the simulation with a confidence interval reliability of 95%. We see that as the number of simulation runs N increases, the simulated error rate becomes closer to the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
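
    To see how the interval tightens with the number of simulated bits, the sketch below estimates the BER of coherent BPSK over a plain AWGN channel (a simpler setting than the Nakagami-m fading with imperfect phase recovery treated in the paper) and attaches a 95% normal-approximation confidence interval to the estimate.

        import numpy as np
        from scipy.special import erfc

        def simulate_ber(ebn0_db: float, n_bits: int, seed: int = 0):
            """Monte Carlo BER of coherent BPSK over AWGN with a 95% CI."""
            rng = np.random.default_rng(seed)
            ebn0 = 10.0 ** (ebn0_db / 10.0)
            bits = rng.integers(0, 2, n_bits)
            symbols = 2.0 * bits - 1.0                      # BPSK mapping {0,1} -> {-1,+1}
            noise = rng.standard_normal(n_bits) / np.sqrt(2.0 * ebn0)
            decisions = (symbols + noise) > 0.0
            ber = np.mean(decisions != bits.astype(bool))
            half_width = 1.96 * np.sqrt(ber * (1.0 - ber) / n_bits)
            return ber, (ber - half_width, ber + half_width)

        if __name__ == "__main__":
            theory = 0.5 * erfc(np.sqrt(10.0 ** (6.0 / 10.0)))   # exact BPSK BER at 6 dB
            for n in (10_000, 1_000_000):
                ber, ci = simulate_ber(6.0, n)
                print(f"N={n:>9}: BER={ber:.2e}, 95% CI=({ci[0]:.2e}, {ci[1]:.2e}), "
                      f"theory={theory:.2e}")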

  9. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent scientific research and studies have revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  10. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to compute the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow fading multipath channel is considered. A simple RAKE receiver structure is considered. Based on the bit energy distribution, this approach, compared to other computation methods existing in the literature, gives accurate results with a low computational load. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations which point out the accuracy of our approach.

  11. Soft Decision Analyzer

    Science.gov (United States)

    Lansdowne, Chatwin; Steele, Glen; Zucha, Joan; Schlesinger, Adam

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.
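
    The contrast between hard-decision and soft-decision views can be illustrated in a few lines: from the same block of noisy BPSK soft samples one can extract, besides the raw error count, an estimate of the operating Eb/N0 and the BER it implies, which stays informative even when almost no hard errors are observed. This is only a toy illustration of the idea, not the SDA implementation, and the SNR estimator is a crude amplitude/spread heuristic.

        import numpy as np
        from scipy.special import erfc

        rng = np.random.default_rng(3)
        n, ebn0_db = 100_000, 9.0
        ebn0 = 10 ** (ebn0_db / 10)

        bits = rng.integers(0, 2, n)
        # Soft samples: BPSK symbols plus Gaussian noise at the chosen Eb/N0.
        soft = (2.0 * bits - 1.0) + rng.standard_normal(n) / np.sqrt(2 * ebn0)

        # Hard-decision view: just count bit errors (few or none at this SNR).
        hard_ber = np.mean((soft > 0).astype(int) != bits)

        # Soft-decision view: estimate the signal amplitude and noise spread from
        # the samples themselves and infer the operating point and implied BER.
        amplitude = np.mean(np.abs(soft))
        noise_std = np.std(np.abs(soft) - amplitude)
        margin = amplitude / noise_std                 # ~ sqrt(2 Eb/N0) for BPSK
        implied_ber = 0.5 * erfc(margin / np.sqrt(2))
        est_ebn0_db = 10 * np.log10(margin ** 2 / 2.0)

        print(f"hard-decision BER      : {hard_ber:.2e}")
        print(f"estimated Eb/N0 (dB)   : {est_ebn0_db:.1f}")
        print(f"BER implied by margin  : {implied_ber:.2e}")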

  12. The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded

    DEFF Research Database (Denmark)

    Hansen, Merete Kjær; Kulahci, Murat

    The hierarchical structure of in vivo Comet assay data is sometimes disregarded in the statistical analysis, and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51, and for all combinations the type I error rate is greater than the nominal α of 0.05. Closed-form expressions based on scaled F-distributions using the Welch-Satterthwaite approximation are provided to show how the type I error rate is affected. With this study we hope to motivate researchers to be more precise regarding
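
    The size of the problem is easy to reproduce in a small simulation: generate data with a genuine animal-level random effect but no treatment effect, then compare a t-test that wrongly treats every cell as an independent observation with one computed on animal means. The variance components and group sizes below are illustrative, not those of the literature study.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(11)
        n_sim, animals_per_group, cells_per_animal = 2000, 5, 50
        sd_animal, sd_cell = 1.0, 1.0          # between-animal and within-animal SD

        naive_rejections = correct_rejections = 0
        for _ in range(n_sim):
            groups = []
            for _g in range(2):                 # two treatment groups, no true effect
                animal_effects = rng.normal(0, sd_animal, animals_per_group)
                cells = animal_effects[:, None] + rng.normal(
                    0, sd_cell, (animals_per_group, cells_per_animal))
                groups.append(cells)
            # Naive analysis: all cells treated as independent observations.
            naive_p = ttest_ind(groups[0].ravel(), groups[1].ravel()).pvalue
            # Hierarchy-respecting analysis: one summary value (mean) per animal.
            correct_p = ttest_ind(groups[0].mean(axis=1), groups[1].mean(axis=1)).pvalue
            naive_rejections += naive_p < 0.05
            correct_rejections += correct_p < 0.05

        print("type I error, cells as units  :", naive_rejections / n_sim)
        print("type I error, animals as units:", correct_rejections / n_sim)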

  13. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  14. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determination of an upper bound for the error rate of a single classifier have been reported but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, there is no information at all provided by the uniform prior density distribution employed which reflects complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to study a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME based priors improve the CIs when applied to four quite different simulated and two real world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
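
    The effect of moving away from the uniform prior is easy to see with a conjugate Beta model: after observing k errors on n test examples the posterior is Beta(a + k, b + n - k), and the credibility interval narrows as the prior carries more information. The prior parameters below are placeholders, not the ME-derived priors of the paper.

        from scipy.stats import beta

        def error_rate_ci(k_errors, n_test, a=1.0, b=1.0, level=0.95):
            """Bayesian credibility interval for a classifier's error rate.

            Conjugate Beta(a, b) prior; a = b = 1 is the conventional uniform prior,
            other values stand in for an empirically informed (e.g. ME-based) prior.
            """
            post = beta(a + k_errors, b + n_test - k_errors)
            tail = (1.0 - level) / 2.0
            return post.ppf(tail), post.ppf(1.0 - tail)

        if __name__ == "__main__":
            k, n = 12, 80                                  # 12 errors on 80 test cases
            print("uniform prior    :", error_rate_ci(k, n))
            print("informative prior:", error_rate_ci(k, n, a=3.0, b=17.0))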

  15. Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm

    Directory of Open Access Journals (Sweden)

    F Hermens

    2014-08-01

    Full Text Available In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly examined performance by measuring offset discrimination thresholds as a measure of performance, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task, similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance, but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.

  16. Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)

    Science.gov (United States)

    Maulia, Eva; Miftahuddin; Sofyan, Hizir

    2018-05-01

    A country has some important parameters for achieving economic welfare, such as tax revenue and inflation. One of the largest revenues in the Indonesian state budget comes from the tax sector. In addition, the inflation rate in a country can be used as one measure of the economic problems the country is facing. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the relationship between, and to forecast, tax revenue and the inflation rate. VECM (Vector Error Correction Model) was chosen as the method used in this research because the data take the form of multivariate time series. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate with the VECM model. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is the VECM with optimal lag 3, or VECM(3). Of the seven models formed, one model is significant, namely the income tax revenue model. The predictions of tax revenue and the inflation rate in Banda Aceh City for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have a minimum error value compared to other models.
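    A minimal sketch of fitting a VECM with a selected lag and producing multi-step forecasts is given below, assuming the statsmodels package; the two series are synthetic stand-ins, not the Banda Aceh tax revenue and inflation data, and the lag/rank selection shown is only the generic Johansen-based workflow.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.vector_ar.vecm import VECM, select_order, select_coint_rank

      rng = np.random.default_rng(0)
      n = 120                                   # synthetic monthly data, illustration only
      trend = np.cumsum(rng.normal(0.5, 1.0, n))
      df = pd.DataFrame({
          "tax_revenue": trend + rng.normal(0, 1, n),
          "inflation":   0.3 * trend + rng.normal(0, 1, n),
      })

      lag = select_order(df, maxlags=6, deterministic="ci").aic        # lag chosen by AIC
      rank = select_coint_rank(df, det_order=0, k_ar_diff=lag).rank    # Johansen trace test
      model = VECM(df, k_ar_diff=lag, coint_rank=max(rank, 1), deterministic="ci")
      res = model.fit()
      print(res.predict(steps=6))               # 6-period-ahead forecast of both series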

  17. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    Science.gov (United States)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  18. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients...... cabinet, automated dispensing and barcode medication administration; (2) non-patient specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3 week periods. The error rates were calculated by dividing the number of doses with one...

  19. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  20. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its validity has not been fully established. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of various sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wound diameters below 3 cm, the REs of the averaged values of the four photographs were below 5%. In addition, there was no difference in the average wound area values obtained with the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation method can be applied to multiple photographs, and their average values are a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
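    The relative error metric used above is simple arithmetic; the following sketch shows how per-photograph REs and the RE of the averaged area could be computed, with all area values invented for illustration.

      import numpy as np

      def relative_error(measured, reference):
          """Relative error (%) of a calculated wound area against a reference area."""
          return np.abs(measured - reference) / reference * 100.0

      # Four photographs of the same wound (illustrative values, cm^2)
      photos = np.array([3.02, 2.95, 3.10, 2.88])
      true_area = 3.00
      print("per-photo RE:", relative_error(photos, true_area))
      print("RE of averaged area:", relative_error(photos.mean(), true_area))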

  1. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  2. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
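    A rough Monte Carlo counterpart to this kind of analysis can be sketched as follows: BPSK over Rayleigh-faded links, where the relay forwards its (possibly erroneous) decision only when the direct-link SNR falls below a threshold. The combining rule, threshold and all parameters are simplifying assumptions for illustration; this is not the paper's closed-form derivation.

      import numpy as np

      rng = np.random.default_rng(1)

      def ber_incremental_df(snr_db, threshold_db, n=200_000):
          """Monte Carlo BER of BPSK with incremental decode-and-forward relaying:
          the relay forwards only when the instantaneous source-destination SNR
          is below a threshold; the relay may itself decode in error."""
          snr = 10 ** (snr_db / 10)
          thr = 10 ** (threshold_db / 10)
          bits = rng.integers(0, 2, n)
          s = 2 * bits - 1
          # Rayleigh-faded power gains: source-dest, source-relay, relay-dest
          g_sd, g_sr, g_rd = (rng.exponential(1.0, n) for _ in range(3))
          noise = lambda: rng.normal(0, np.sqrt(0.5 / snr), n)
          y_sd = np.sqrt(g_sd) * s + noise()
          direct = (y_sd > 0).astype(int)
          # Relay decodes the source-relay signal, then retransmits its decision
          relay_dec = (np.sqrt(g_sr) * s + noise() > 0).astype(int)
          y_rd = np.sqrt(g_rd) * (2 * relay_dec - 1) + noise()
          relayed = (y_sd + y_rd > 0).astype(int)   # simple combining at destination
          est = np.where(g_sd * snr < thr, relayed, direct)
          return np.mean(est != bits)

      for snr in (0, 5, 10, 15, 20):
          print(snr, "dB:", ber_incremental_df(snr, threshold_db=5))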

  3. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

    The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single input single output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for a generalized fading distribution, such as the extended generalized-k distribution. Later, simpler expressions for these error rates are deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
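    For a quick numerical check of error rates under α-stable perturbations, symmetric α-stable noise can be drawn with scipy.stats.levy_stable (β = 0 gives the symmetric case). The sketch below uses BPSK without fading and an illustrative noise scale; it does not reproduce the Fox H-function expressions of the paper.

      import numpy as np
      from scipy.stats import levy_stable

      rng = np.random.default_rng(2)

      def ser_bpsk_alpha_stable(alpha, scale, n=50_000):
          """Monte Carlo symbol error rate of BPSK corrupted by symmetric
          alpha-stable noise (beta = 0 selects the symmetric distribution)."""
          bits = rng.integers(0, 2, n)
          s = 2 * bits - 1
          noise = levy_stable.rvs(alpha, 0.0, scale=scale, size=n, random_state=rng)
          decided = (s + noise > 0).astype(int)
          return np.mean(decided != bits)

      for alpha in (2.0, 1.5, 1.0):   # alpha = 2 recovers the Gaussian case
          print(alpha, ser_bpsk_alpha_stable(alpha, scale=0.3))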

  4. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2011-06-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.

  5. High strain rate characterization of soft materials: past, present and possible futures

    Science.gov (United States)

    Siviour, Clive

    2015-06-01

    The high strain rate properties of low impedance materials have long been of interest to the community: the very first paper by Kolsky on his eponymous bars included data from man-made polymers and natural rubber. However, it has also long been recognized that characterizing soft or low impedance specimens under dynamic loading presents a number of challenges, mainly owing to the low sound speed in, and low stresses supported by, these materials. Over the past 20 years, significant progress has been made in high rate testing techniques, including better experimental design, more sensitive data acquisition and better understanding of specimen behavior. Further, a new generation of techniques, in which materials are characterized using travelling waves, rather than in a state of static equilibrium, promise to turn those properties that were previously a drawback into an advantage. This paper will give an overview of the history of high rate characterization, the current state of the art after an exciting couple of decades and some of the techniques currently being developed that have the potential to offer increased quality data in the future.

  6. Influence of Cooling on the Glycolysis Rate and Development of PSE (Pale, Soft, Exudative) Meat

    Directory of Open Access Journals (Sweden)

    Mayka Reghiany Pedrão

    2015-04-01

    The aim of this work was to evaluate the rate of pH fall in chicken breast meat under commercial refrigeration processing conditions and the development of PSE (pale, soft, exudative) meat. Broiler breast samples from the Cobb breed, both genders, at 47 days of age (n = 100) were taken from refrigerated carcasses (RS) immersed in water and ice in a tank chilled at 0°C (±2). pH and temperature (T) values were recorded at several periods throughout refrigeration, in comparison to samples left at room T as control (CS). The ultimate pH (pHu) value of 5.86 for RS carcasses was only reached at 11°C after 8.35 h post mortem (PM), while for CS samples the pHu value was 5.94 at 22°C after 4.08 h PM. Thus, under commercial refrigeration conditions, the glycolysis rate was retarded by over 4.0 h PM and the breast meat color was affected. At 24.02 h PM, PSE meat incidence was 30%, while for CS the meat remained dark and PSE meat was not detected. The results show that retardation of the glycolysis rate and PSE meat development were promoted by the refrigeration treatment when compared with samples stored at processing room temperature.

  7. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    Science.gov (United States)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression of the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be much more easily evaluated than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  8. Perioperative fractionated high-dose rate brachytherapy for malignant bone and soft tissue tumors

    International Nuclear Information System (INIS)

    Koizumi, Masahiko; Inoue, Takehiro; Yamazaki, Hideya; Teshima, Teruki; Tanaka, Eiichi; Yoshida, Ken; Imai, Atsushi; Shiomi, Hiroya; Kagawa, Kazufumi; Araki, Nobuto; Kuratsu, Shigeyuki; Uchida, Atsumasa; Inoue, Toshihiko

    1999-01-01

    Purpose: To investigate the viability of perioperative fractionated HDR brachytherapy for malignant bone and soft tissue tumors, analyzing the influence of surgical margin. Methods and Materials: From July 1992 through May 1996, 16 lesions of 14 patients with malignant bone and soft tissue tumors (3 liposarcomas, 3 MFHs, 2 malignant schwannomas, 2 chordomas, 1 osteosarcoma, 1 leiomyosarcoma, 1 epithelioid sarcoma, and 1 synovial sarcoma) were treated at the Osaka University Hospital. The patients' ages ranged from 14 to 72 years (median: 39 years). Treatment sites were the pelvis in 6 lesions, the upper limbs in 5, the neck in 4, and a lower limb in 1. The resection margins were classified as intracapsular in 5 lesions, marginal in 5, and wide in 6. Postoperative fractionated HDR brachytherapy was started on the 4th-13th day after surgery (median: 6th day). The total dose was 40-50 Gy/7-10 fr/ 4-7 day (bid) at 5 or 10 mm from the source. Follow-up periods were between 19 and 46 months (median: 30 months). Results: Local control rates were 75% at 1 year and 48% in 2 years, and ultimate local control was achieved in 8 (50%) of 16 lesions. Of the 8 uncontrolled lesions, 5 (63%) had intracapsular (macroscopically positive) resection margins, and all the 8 controlled lesions (100%) had marginal (microscopically positive) or wide (negative) margins. Of the total, 3 patients died of both tumor and metastasis, 3 of metastasis alone, 1 of tumor alone, and 7 showed no evidence of disease. Peripheral nerve palsy was seen in one case after this procedure, but no infection or delayed wound healing caused by tubing or irradiation has occurred. Conclusion: Perioperative fractionated HDR brachytherapy is safe, well tolerated, and applicable to marginal or wide surgical margin cases

  9. Errors of car wheels rotation rate measurement using roller follower on test benches

    Science.gov (United States)

    Potapov, A. S.; Svirbutovich, O. A.; Krivtsov, S. N.

    2018-03-01

    The article deals with wheel rotation rate measurement errors on roller test benches, which depend on the speed of the motor vehicle. Monitoring of vehicle performance under operating conditions is performed on roller test benches. Roller test benches are not flawless; they have some drawbacks affecting the accuracy of vehicle performance monitoring. An increase in the base velocity of the vehicle requires an increase in the accuracy of wheel rotation rate monitoring, which determines the degree of accuracy of mode identification for a wheel of the tested vehicle. Ensuring measurement accuracy for the rotation velocity of the rollers is not an issue; the problem arises when measuring the rotation velocity of a car wheel. The higher the rotation velocity of the wheel, the lower the accuracy of measurement. At present, wheel rotation frequency monitoring on roller test benches is carried out by follow-up systems whose sensors are rollers following the wheel rotation. The rollers of the system are not kinematically linked to the supporting rollers of the test bench; the roller follower is forced against the wheels of the tested vehicle by means of a spring-lever mechanism. Experience with the test bench equipment has shown that measurement accuracy is satisfactory at the low speeds at which vehicles are diagnosed on roller test benches. With a rising diagnostic speed, rotation velocity measurement errors occur in both braking and pulling modes because the follower roller slips on the tire tread. The paper shows oscillograms of changes in wheel rotation velocity and of the rotation velocity measurement system's signals when testing a vehicle on roller test benches at specified speeds.

  10. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    The Laplacian noise has received much attention during the recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed form expressions of the conditional and the average probability of error are obtained in terms of the Fox’s H function. Simplifications for some special cases of fading are presented and the resulting formulas end up being often expressed in terms of well known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.
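    A minimal Monte Carlo sketch of the M-PSK symbol error rate under additive Laplacian noise is given below, using the minimum-distance (phase) detector mentioned in the abstract; fading and the EGK averaging are omitted, and the noise construction (independent Laplacian real and imaginary parts) is an assumption made for illustration only.

      import numpy as np

      rng = np.random.default_rng(3)

      def ser_mpsk_laplacian(M, snr_db, n=200_000):
          """Monte Carlo symbol error rate of M-PSK with additive Laplacian noise
          and a minimum (Euclidean) distance detector; no fading is modelled."""
          snr = 10 ** (snr_db / 10)
          sym = rng.integers(0, M, n)
          tx = np.exp(1j * 2 * np.pi * sym / M)
          # Complex Laplacian noise: independent Laplacian real/imag parts,
          # each with variance 1/(2*snr) so that the total noise power is 1/snr.
          b = np.sqrt(1.0 / (4 * snr))            # Laplace scale: var = 2*b^2
          noise = rng.laplace(0, b, n) + 1j * rng.laplace(0, b, n)
          rx = tx + noise
          det = np.round(np.angle(rx) / (2 * np.pi / M)).astype(int) % M
          return np.mean(det != sym)

      for snr in (5, 10, 15):
          print("8-PSK @", snr, "dB:", ser_mpsk_laplacian(8, snr))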

  11. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza

    2015-06-01

    The Laplacian noise has received much attention during the recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed form expressions of the conditional and the average probability of error are obtained in terms of the Fox’s H function. Simplifications for some special cases of fading are presented and the resulting formulas end up being often expressed in terms of well known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.

  12. Error rates and resource overheads of encoded three-qubit gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  13. Comparison of Bit Error Rate of Line Codes in NG-PON2

    Directory of Open Access Journals (Sweden)

    Tomas Horvath

    2016-05-01

    This article focuses on the simulation and comparison of the line codes NRZ (Non Return to Zero), RZ (Return to Zero) and Miller's code for use in NG-PON2 (Next-Generation Passive Optical Network Stage 2). Our article provides solutions with a Q-factor, BER (Bit Error Rate), and bandwidth comparison. Line codes are the most important part of communication over optical fibre; their main role is digital signal representation. NG-PON2 networks use optical fibres for communication, which is the reason why OptSim v5.2 is used for the simulation.

  14. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

    Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis contains crosstalk calculation of the mutual field variance for different number of users. It is shown that the crosstalk noise depends deeply on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results of the power fluctuation at the received channel due to the data modulation at the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  15. Exploring Predictability of Instructor Ratings Using a Quantitative Tool for Evaluating Soft Skills among MBA Students

    Science.gov (United States)

    Brill, Robert T.; Gilfoil, David M.; Doll, Kristen

    2014-01-01

    Academic researchers have often touted the growing importance of "soft skills" for modern day business leaders, especially leadership and communication skills. Despite this growing interest and attention, relatively little work has been done to develop and validate tools to assess soft skills. Forty graduate students from nine MBA…

  16. Symbol and Bit Error Rates Analysis of Hybrid PIM-CDMA

    Directory of Open Access Journals (Sweden)

    Ghassemlooy Z

    2005-01-01

    A hybrid pulse interval modulation code-division multiple-access (hPIM-CDMA) scheme employing the strict optical orthogonal code (SOCC) with unity auto- and cross-correlation constraints for indoor optical wireless communications is proposed. In this paper, we analyse the symbol error rate (SER) and bit error rate (BER) of hPIM-CDMA. In the analysis, we consider multiple access interference (MAI), self-interference, and the hybrid nature of the hPIM-CDMA signal detection, which is based on the matched filter (MF). It is shown that the BER/SER performance can only be evaluated if the bit resolution conforms to the condition set by the number of consecutive false alarm pulses that might occur and be detected, so that one symbol being divided into two is unlikely to occur. Otherwise, the probability of SER and BER becomes extremely high and indeterminable. We show that for a large number of users, the BER improves when increasing the code weight. The results presented are compared with other modulation schemes.

  17. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    Science.gov (United States)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used in the operation of the interconnects. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of a Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on these interconnects were also taken on a standard BER tester for verification. We found that the results of the two methods were of the same order and within 50% accuracy. The integrated interconnects were investigated in an optoelectronic processing architecture of a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high quality digital halftoned images.
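    The Gaussian eye-diagram model referred to above leads to the familiar relation BER = 0.5·erfc(Q/√2) with Q = (μ1 − μ0)/(σ1 + σ0). A sketch of that calculation on synthetic rail samples follows; the sample statistics are invented and this is not the authors' exact measurement procedure.

      import numpy as np
      from scipy.special import erfc

      def ber_from_eye(ones, zeros):
          """Estimate BER from eye-diagram samples of the two rails, assuming
          Gaussian noise on each level (BER = 0.5*erfc(Q/sqrt(2)))."""
          mu1, mu0 = np.mean(ones), np.mean(zeros)
          s1, s0 = np.std(ones), np.std(zeros)
          q = (mu1 - mu0) / (s1 + s0)
          return q, 0.5 * erfc(q / np.sqrt(2))

      rng = np.random.default_rng(4)
      ones = rng.normal(1.0, 0.08, 5000)    # illustrative sampled "1" level
      zeros = rng.normal(0.0, 0.06, 5000)   # illustrative sampled "0" level
      print(ber_from_eye(ones, zeros))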

  18. Postoperative radiation boost does not improve local recurrence rates in extremity soft tissue sarcomas

    International Nuclear Information System (INIS)

    Alamanda, Vignesh K.; Schwartz, Herbert S.; Holt, Ginger E.; Song, Yanna; Shinohara, Eric

    2014-01-01

    The standard of care for extremity soft tissue sarcomas continues to be negative-margin limb salvage surgery. Radiotherapy is frequently used as an adjunct to decrease local recurrence. No differences in survival have been found between preoperative and postoperative radiotherapy regimens. However, it is uncertain if the use of a postoperative boost in addition to preoperative radiotherapy reduces local recurrence rates. This retrospective review evaluated patients who received preoperative radiotherapy (n = 49) and patients who received preoperative radiotherapy with a postoperative boost (n=45). The primary endpoint analysed was local recurrence, with distant metastasis and death due to sarcoma analysed as secondary endpoints. Wilcoxon rank-sum test and either χ² or Fisher's exact test were used to compare variables. Multivariable regression analyses were used to take into account potential confounders and identify variables that affected outcomes. No differences in the proportion or rate of local recurrence, distant metastasis or death due to sarcoma were observed between the two groups (P>0.05). The two groups were similarly matched with respect to demographics such as age, race and sex and tumour characteristics including excision status, tumour site, size, depth, grade, American Joint Committee on Cancer stage, chemotherapy receipt and histological subtype (P>0.05). The postoperative boost group had a larger proportion of patients with positive microscopic margins (62% vs 10%; P<0.001). No differences in rates of local recurrence, distant metastasis or death due to sarcoma were found in patients who received both pre- and postoperative radiotherapy when compared with those who received only preoperative radiotherapy.

  19. A HIGH REPETITION RATE VUV-SOFT X-RAY FEL CONCEPT

    International Nuclear Information System (INIS)

    Corlett, J.; Byrd, J.; Fawley, W.M.; Gullans, M.; Li, D.; Lidia, S.M.; Padmore, H.; Penn, G.; Pogorelov, I.; Qiang, J.; Robin, D.; Sannibale, F.; Staples, J.W.; Steier, C.; Venturini, M.; Virostek, S.; Wan, W.; Wells, R.; Wilcox, R.; Wurtele, J.; Zholents, A.

    2007-01-01

    We report on design studies for a seeded FEL light source that is responsive to the scientific needs of the future. The FEL process increases radiation flux by several orders of magnitude above existing incoherent sources, and offers the additional enhancements attainable by optical manipulations of the electron beam: control of the temporal duration and bandwidth of the coherent output, reduced gain length in the FEL, utilization of harmonics to attain shorter wavelengths, and precise synchronization of the x-ray pulse with seed laser systems. We describe an FEL facility concept based on a high repetition rate RF photocathode gun that would allow simultaneous operation of multiple independent FELs, each producing high average brightness, tunable over the VUV-soft x-ray range, and each with individual performance characteristics determined by the configuration of the FEL. SASE, enhanced-SASE (ESASE), seeded, harmonic generation, and other configurations making use of optical manipulations of the electron beam may be employed, providing a wide range of photon beam properties to meet varied user demands.

  20. A soft X-ray source based on a low divergence, high repetition rate ultraviolet laser

    Science.gov (United States)

    Crawford, E. A.; Hoffman, A. L.; Milroy, R. D.; Quimby, D. C.; Albrecht, G. F.

    The CORK code is utilized to evaluate the applicability of low divergence ultraviolet lasers for efficient production of soft X-rays. The use of the axial hydrodynamic code with one-zone radial expansion to estimate radial motion and laser energy is examined. The calculation of ionization levels of the plasma and radiation rates by employing the atomic physics and radiation model included in the CORK code is described. Computations using the hydrodynamic code to determine the effect of laser intensity, spot size, and wavelength on plasma electron temperature are provided. The X-ray conversion efficiencies of the lasers are analyzed. It is observed that for a 1 GW laser power the X-ray conversion efficiency is a function of spot size, is only weakly dependent on pulse length for time scales exceeding 100 psec, and better conversion efficiencies are obtained at shorter wavelengths. It is concluded that these small lasers focused to 30 micron spot sizes and 10 to the 14th W/sq cm intensities are useful sources of 1-2 keV radiation.

  1. Minimizing the symbol-error-rate for amplify-and-forward relaying systems using evolutionary algorithms

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-02-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum, and therefore evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is less than 2 dB away from that of the ML detector. Significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much less than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with respect to the number of relays.
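    To illustrate the optimization machinery, a bare-bones particle swarm optimizer is sketched below on a stand-in multi-modal objective (Rastrigin); in the application described above, the objective evaluated at each candidate would instead be the detector's SER. All hyperparameters are illustrative assumptions, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(5)

      def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
              w=0.7, c1=1.5, c2=1.5):
          """Minimal particle swarm optimizer (global-best topology)."""
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.apply_along_axis(objective, 1, x)
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      # Stand-in multi-modal surface; the real application would evaluate the
      # detector's SER at each candidate weight vector instead.
      rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
      print(pso(rastrigin, dim=4))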

  2. Correct mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme on ping-pong protocol

    OpenAIRE

    Zhang, Zhanjun

    2004-01-01

    Comment: The wrong mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme [PRL90(03)157901] on the ping-pong protocol have been pointed out and corrected.

  3. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  4. Personnel selection and emotional stability certification: establishing a false negative error rate when clinical interviews

    International Nuclear Information System (INIS)

    Berghausen, P.E. Jr.

    1987-01-01

    The security plans of nuclear plants generally require that all personnel who are to have unescorted access to protected areas or vital islands be screened for emotional instability. Screening typically consists of first administering the MMPI and then conducting a clinical interview. Interviews-by-exception protocols provide for only those employees to be interviewed who have some indications of psychopathology in their MMPI results. A problem arises when the indications are not readily apparent: false negatives are likely to occur, resulting in employees being erroneously granted unescorted access. The present paper describes the development of a predictive equation which permits accurate identification, via analysis of MMPI results, of those employees who are most in need of being interviewed. The predictive equation also permits knowing the probable maximum false negative error rate when a given percentage of employees is interviewed.

  5. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-10-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  6. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob; Uysal, Murat; Tsiftsis, Theodoros A.

    2014-01-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order of up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.

  7. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    Directory of Open Access Journals (Sweden)

    Li Zexian

    2004-01-01

    Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Gaussian Q-function, the characteristic function and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral and an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
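    As a numerical cross-check of this kind of result, the BER of BPSK with MRC over independent, non-identical Nakagami-m branches can be simulated directly: Nakagami-m fading power is gamma-distributed with shape m and mean Ω. The branch parameters below are illustrative, and the sketch does not implement the characteristic-function method of the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      def ber_bpsk_mrc_nakagami(snr_db, m_branches=(1.0, 2.0), omega=(1.0, 1.0), n=200_000):
          """Monte Carlo BER of BPSK with maximal ratio combining over independent
          (not necessarily identical) Nakagami-m fading branches."""
          snr = 10 ** (snr_db / 10)
          bits = rng.integers(0, 2, n)
          s = 2 * bits - 1
          combined = np.zeros(n)
          for m, om in zip(m_branches, omega):
              power = rng.gamma(shape=m, scale=om / m, size=n)  # Nakagami-m fading power
              h = np.sqrt(power)
              y = h * s + rng.normal(0, np.sqrt(0.5 / snr), n)
              combined += h * y                                  # MRC weighting
          return np.mean((combined > 0).astype(int) != bits)

      for snr in (0, 5, 10):
          print(snr, "dB:", ber_bpsk_mrc_nakagami(snr))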

  8. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    International Nuclear Information System (INIS)

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition.

  9. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping-using random field theory-reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions-and random field theory-in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  10. Performance analysis for the bit-error rate of SAC-OCDMA systems

    Science.gov (United States)

    Feng, Gang; Cheng, Wenqing; Chen, Fujun

    2015-09-01

    Under the low-power assumption, Gaussian statistics invoked via the central limit theorem are feasible for predicting the upper bound in the spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) system. However, this approach severely underestimates the bit-error rate (BER) of the system under the high-power assumption. Fortunately, the exact negative binomial (NB) model is a perfect replacement for the Gaussian model in the prediction and evaluation. Based on NB statistics, a more accurate closed-form expression is analyzed and derived for the SAC-OCDMA system. The experiment shows that the obtained expression provides a more precise prediction of the BER performance under both the low- and high-power assumptions.

  11. System care improves trauma outcome: patient care errors dominate reduced preventable death rate.

    Science.gov (United States)

    Thoburn, E; Norris, P; Flores, R; Goode, S; Rodriguez, E; Adams, V; Campbell, S; Albrink, M; Rosemurgy, A

    1993-01-01

    A review of 452 trauma deaths in Hillsborough County, Florida, in 1984 documented that 23% of non-CNS trauma deaths were preventable and occurred because of inadequate resuscitation or delay in proper surgical care. In late 1988 Hillsborough County organized a County Trauma Agency (HCTA) to coordinate trauma care among prehospital providers and state-designated trauma centers. The purpose of this study was to review county trauma deaths after the inception of the HCTA to determine the frequency of preventable deaths. 504 trauma deaths occurring between October 1989 and April 1991 were reviewed. Through committee review, 10 deaths were deemed preventable; 2 occurred outside the trauma system. Of the 10 deaths, 5 preventable deaths occurred late in severely injured patients. The preventable death rate has decreased to 7.0% with system care. The causes of preventable deaths have changed from delayed or inadequate intervention to postoperative care errors.

  12. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-06-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order of up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.

  13. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
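    For reference, the 3%/3mm criterion used above is the gamma index; a brute-force one-dimensional version of the global gamma pass rate can be sketched as follows. The dose profiles are synthetic and the implementation ignores resolution limits, interpolation and dose thresholds handled by clinical software.

      import numpy as np

      def gamma_pass_rate(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
          """Global 1-D gamma analysis (dose difference dd as a fraction of the
          reference maximum, distance-to-agreement dta in mm); returns the
          fraction of reference points with gamma <= 1."""
          d_norm = dd * d_ref.max()
          gammas = np.empty_like(d_ref)
          for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
              g2 = ((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / d_norm) ** 2
              gammas[i] = np.sqrt(g2.min())
          return np.mean(gammas <= 1.0)

      # Illustrative profiles: evaluated dose shifted by 2 mm and scaled by 2%
      x = np.arange(0.0, 100.0, 1.0)                   # positions in mm
      ref = np.exp(-((x - 50) / 20) ** 2)              # reference profile
      ev = 1.02 * np.exp(-((x - 52) / 20) ** 2)        # "measured" profile
      print("pass rate:", gamma_pass_rate(x, ref, x, ev))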

  14. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    International Nuclear Information System (INIS)

    Sterling, D; Ehler, E

    2015-01-01

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing

  15. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ

  16. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to support its reduction in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and the coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.

  17. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    Jiayu Zhang

    2018-05-01

    The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In practice, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance by the MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.

  18. Quantitative comparison of errors in 15N transverse relaxation rates measured using various CPMG phasing schemes

    International Nuclear Information System (INIS)

    Myint Wazo; Cai Yufeng; Schiffer, Celia A.; Ishima, Rieko

    2012-01-01

    Nitrogen-15 Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation experiments are widely used to characterize protein backbone dynamics and chemical exchange parameters. Although an accurate value of the transverse relaxation rate, R2, is needed for accurate characterization of dynamics, the uncertainty in the R2 value depends on the experimental settings and the details of the data analysis itself. Here, we present an analysis of the impact of CPMG pulse phase alternation on the accuracy of the 15N CPMG R2. Our simulations show that R2 can be obtained accurately for a relatively wide spectral width, either using the conventional phase cycle or using phase alternation, when the r.f. pulse power is accurately calibrated. However, when the r.f. pulse is miscalibrated, the conventional CPMG experiment exhibits more significant uncertainties in R2, caused by the off-resonance effect, than does the phase alternation experiment. Our experiments show that this effect becomes manifest when the systematic error exceeds that arising from experimental noise. Furthermore, our results provide the means to estimate practical parameter settings that yield accurate values of 15N transverse relaxation rates in both CPMG experiments.
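    In practice R2 is commonly extracted by fitting a mono-exponential decay to peak intensities measured at several relaxation delays. The sketch below assumes SciPy's curve_fit and uses synthetic intensities; it does not model the off-resonance or phase-cycling effects analyzed in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      decay = lambda t, i0, r2: i0 * np.exp(-r2 * t)

      # Synthetic CPMG relaxation delays (s) and peak intensities with noise
      rng = np.random.default_rng(7)
      t = np.array([0.01, 0.03, 0.05, 0.07, 0.09, 0.12, 0.15, 0.20])
      true_r2 = 12.0                                      # s^-1, illustrative
      intens = decay(t, 1.0, true_r2) * (1 + rng.normal(0, 0.02, t.size))

      popt, pcov = curve_fit(decay, t, intens, p0=(1.0, 10.0))
      i0_fit, r2_fit = popt
      r2_err = np.sqrt(np.diag(pcov))[1]                  # 1-sigma uncertainty on R2
      print(f"R2 = {r2_fit:.2f} +/- {r2_err:.2f} s^-1")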

  19. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    Science.gov (United States)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10⁻¹²) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10⁻³ BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10⁻³ BER.
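    The conventional quantities referred to above are easy to reproduce: the complexity penalty 10·log10(L-1) and the Q value required at a target BER under the Gaussian model for gray-coded PAM-L. The sketch below computes both; the BER-dependent corrections derived in the paper are not reproduced here, and the BER-to-SER mapping assumed is the standard gray-coding approximation.

      import numpy as np
      from scipy.special import erfcinv

      def pam_complexity_penalty_db(L):
          """Conventional optical power penalty of PAM-L relative to PAM-2."""
          return 10 * np.log10(L - 1)

      def q_required(ber, L):
          """Q (eye amplitude over noise sigma) needed to reach a target BER for
          gray-coded PAM-L, using BER ~= 2*(1 - 1/L)/log2(L) * Q(q)."""
          p = ber * np.log2(L) / (1 - 1 / L)   # equals erfc(q / sqrt(2))
          return np.sqrt(2) * erfcinv(p)

      for L in (2, 4, 8):
          print(L, pam_complexity_penalty_db(L),
                q_required(1e-12, L), q_required(1e-3, L))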

  20. Comparison of the Bit Error Rate of Reed-Solomon Codes and Bose-Chaudhuri-Hocquenghem Codes Using 32-FSK Modulation

    Directory of Open Access Journals (Sweden)

    Eva Yovita Dwi Utami

    2016-11-01

    Reed-Solomon (RS) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes are error-correcting codes belonging to the class of cyclic block codes. Error-correcting codes are required in communication systems to reduce errors in the transmitted information. This paper presents the results of a study of the BER performance of communication systems using the RS code, the BCH code, and a system without RS or BCH coding, with 32-FSK modulation over Additive White Gaussian Noise (AWGN), Rayleigh and Rician channels. The error-reduction capability is measured by the resulting Bit Error Rate (BER). The results show that, as the SNR increases, the RS code lowers the BER more steeply than the system with the BCH code, whereas the BCH code has the advantage at low SNR values, giving a better BER than the system with the RS code.
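    Without implementing full RS or BCH decoders, the standard bounded-distance decoding bound already captures how a t-error-correcting block code trades raw channel error probability for post-decoding error probability. The code parameters below (an RS(255, 223) correcting t = 16 symbol errors and a binary BCH(255, 231) correcting t = 3 bit errors) are illustrative choices, not necessarily those used in the paper.

      from math import comb

      def block_error_prob(n, t, p):
          """Probability that a bounded-distance decoder of a t-error-correcting
          length-n block code fails, given independent symbol error probability p."""
          return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(t + 1, n + 1))

      def post_decoding_symbol_error(n, t, p):
          """Common approximation of the post-decoding symbol error probability."""
          return sum(i * comb(n, i) * p**i * (1 - p) ** (n - i)
                     for i in range(t + 1, n + 1)) / n

      for p in (1e-2, 1e-3):
          print(p, block_error_prob(255, 16, p), post_decoding_symbol_error(255, 3, p))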

  1. Assessment of the rate and etiology of pharmacological errors by nurses of two major teaching hospitals in Shiraz

    Directory of Open Access Journals (Sweden)

    Fatemeh Vizeshfar

    2015-06-01

    Full Text Available Medication errors have serious consequences for patients, their families and caregivers. Reducing these errors by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and medical wards. This cross-sectional analytic study was conducted on 101 registered nurses responsible for drug administration in medical pediatric and adult wards. Data were collected by a questionnaire covering demographic information, self-reported errors, etiology of medication errors and researcher observations. The results showed that the nurses' error rate was 51.6% in pediatric wards and 47.4% in adult wards. The most common error in adult wards was administering drugs later or sooner than scheduled (48.6%), while administering drugs without a prescription and administering the wrong drug were the most common medication errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain to patients the reason for and type of drug they were going to administer. An independent t-test showed a significant difference in error observations in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have reported medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.

  2. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    Science.gov (United States)

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  3. Data-driven soft sensor design with multiple-rate sampled data: a comparative study

    DEFF Research Database (Denmark)

    Lin, Bao; Recke, Bodil; Schmidt, Torben M.

    2009-01-01

    to design quality soft sensors for cement kiln processes using data collected from a simulator and a plant log system. Preliminary results reveal that the WPLS approach is able to provide accurate one-step-ahead prediction. The regularized data lifting technique predicts the product quality of cement kiln...

  4. Impact of catheter reconstruction error on dose distribution in high dose rate intracavitary brachytherapy and evaluation of OAR doses

    International Nuclear Information System (INIS)

    Thaper, Deepak; Shukla, Arvind; Rathore, Narendra; Oinam, Arun S.

    2016-01-01

    In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this study is to evaluate the impact of catheter reconstruction error on dose distribution in CT-based intracavitary brachytherapy planning and its effect on organs at risk (OARs) such as the bladder, rectum and sigmoid, and on the target volume, the high-risk clinical target volume (HR-CTV).

  5. Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems

    Directory of Open Access Journals (Sweden)

    Syed Imtiaz Husain

    2009-01-01

    Full Text Available Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes the UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented through a Rake. The aim of capturing enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening or a time domain equalizer (TEQ) can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser and multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.
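
    For readers unfamiliar with channel shortening, the sketch below shows a classic maximum shortening-SNR (MSSNR) TEQ obtained from a generalized eigenvalue problem. This only illustrates the channel-shortening idea; it is not the BER-minimizing design derived in the paper, and the toy channel, TEQ length, window and delay are arbitrary choices.

```python
# Sketch: channel shortening with a maximum shortening-SNR (MSSNR) time-domain equalizer.
# This is the classic MSSNR criterion, NOT the BER-minimizing design of the paper above.
import numpy as np
from scipy.linalg import eigh, toeplitz

def convolution_matrix(h, n_taps):
    """(len(h)+n_taps-1) x n_taps matrix H such that H @ w == np.convolve(h, w)."""
    col = np.concatenate([h, np.zeros(n_taps - 1)])
    row = np.zeros(n_taps)
    row[0] = h[0]
    return toeplitz(col, row)

rng = np.random.default_rng(0)
h = rng.standard_normal(32) * np.exp(-0.15 * np.arange(32))   # toy multipath channel
n_taps, window, delay = 16, 4, 8                              # TEQ length, target window, sync delay

H = convolution_matrix(h, n_taps)
idx = np.arange(delay, delay + window)
H_win, H_wall = H[idx], np.delete(H, idx, axis=0)
A = H_wall.T @ H_wall                         # effective-channel energy outside the window
B = H_win.T @ H_win + 1e-9 * np.eye(n_taps)   # energy inside (small ridge keeps B positive definite)

vals, vecs = eigh(A, B)                       # generalized eigenproblem, eigenvalues ascending
w = vecs[:, 0]                                # TEQ minimizing the wall-to-window energy ratio
h_eff = np.convolve(h, w)                     # shortened effective channel
print("shortening SNR: %.1f dB" % (10 * np.log10((w @ B @ w) / (w @ A @ w))))
```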

  6. Student laboratory experiments exploring optical fibre communication systems, eye diagrams, and bit error rates

    Science.gov (United States)

    Walsh, Douglas; Moodie, David; Mauchline, Iain; Conner, Steve; Johnstone, Walter; Culshaw, Brian

    2005-06-01

    Optical fibre communications has proved to be one of the key application areas that created, and ultimately propelled, the global growth of the photonics industry over the last twenty years. Consequently, the teaching of the principles of optical fibre communications has become integral to many university courses covering photonics technology. However, to reinforce the fundamental principles and key technical issues students examine in their lecture courses and to develop their experimental skills, it is critical that the students also obtain hands-on practical experience of photonics components, instruments and systems in an associated teaching laboratory. In recognition of this need OptoSci, in collaboration with university academics, commercially developed a fibre optic communications based educational package (ED-COM). This educator kit enables students to: investigate the characteristics of the individual communications system components (sources, transmitters, fibre, receiver); examine and interpret the overall system performance limitations imposed by attenuation and dispersion; and conduct system design and performance analysis. To further enhance the experimental programme examined in the fibre optic communications kit, an extension module to ED-COM has recently been introduced examining one of the most significant performance parameters of digital communications systems, the bit error rate (BER). This add-on module, BER(COM), enables students to generate, evaluate and investigate signal quality trends by examining eye patterns, and explore the bit-rate limitations imposed on communication systems by noise, attenuation and dispersion. This paper will examine the educational objectives, background theory, and typical results for these educator kits, with particular emphasis on BER(COM).

  7. Non preemptive soft real time scheduler: High deadline meeting rate on overload

    Science.gov (United States)

    Khalib, Zahereel Ishwar Abdul; Ahmad, R. Badlishah; El-Shaikh, Mohamed

    2015-05-01

    While preemptive scheduling has gained more attention among researchers, recent work on non-preemptive scheduling has shown promising results for soft real-time job scheduling. In this paper we present a non-preemptive scheduling algorithm meant for soft real-time applications, which is capable of producing better performance during overload while maintaining excellent performance during normal load. The approach taken by this algorithm has shown more promising results compared to other algorithms, including its immediate predecessor. We present the analysis made prior to the inception of the algorithm, as well as simulation results comparing our algorithm, named gutEDF, with EDF and gEDF. We are convinced that grouping jobs using purely dynamic parameters produces better performance.
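
    The paper's gutEDF algorithm is not reproduced here, but the sketch below illustrates the baseline it is compared against: a plain non-preemptive EDF scheduler on a single processor, evaluated by its deadline-meeting rate under a roughly overloaded synthetic job stream (all workload parameters are illustrative).

```python
# Sketch: plain non-preemptive EDF on one processor and its deadline-meeting rate.
# The paper's gutEDF grouping heuristic is not reproduced; workload parameters are illustrative.
import heapq
import random

def simulate_np_edf(jobs):
    """jobs: list of (release, exec_time, abs_deadline). Returns fraction of deadlines met."""
    jobs = sorted(jobs)                        # by release time
    ready, t, met, i = [], 0.0, 0, 0
    while i < len(jobs) or ready:
        if not ready and t < jobs[i][0]:
            t = jobs[i][0]                     # processor idles until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            _, c, d = jobs[i]
            heapq.heappush(ready, (d, c))      # earliest absolute deadline first
            i += 1
        d, c = heapq.heappop(ready)
        t += c                                 # run the job to completion (non-preemptive)
        met += (t <= d)
    return met / len(jobs)

random.seed(1)
jobs, t = [], 0.0
for _ in range(2000):                          # soft real-time stream at roughly 100% utilization
    t += random.expovariate(1.0)
    jobs.append((t, random.uniform(0.5, 1.5), t + random.uniform(2.0, 6.0)))
print("deadline meeting rate:", round(simulate_np_edf(jobs), 3))
```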

  8. Spectrometry with high count rate for the study of the soft X-rays. Application for the plasma of WEGA

    International Nuclear Information System (INIS)

    Brouquet, P.

    1979-04-01

    The plasma of the WEGA torus, whose electron temperature varies between 0.5 and 1 keV, emits electromagnetic radiation extending to wavelengths of the order of 1 Å. Several improvements made to a semiconductor spectrometer have permitted the study of this emission in the soft X-ray region (1 keV - 30 keV) at a count rate of 3×10^5 counts/s with an energy resolution of 350 eV. For each plasma shot, this diagnostic gives 4 measurements of the plasma electron temperature and of the effective charge, Zeff, with a time resolution of 5 ms. The values of the electron temperature and of the effective charge derived from the study of soft X-rays are in agreement with those given by other diagnostic methods.

  9. [The effectiveness of error reporting promoting strategy on nurse's attitude, patient safety culture, intention to report and reporting rate].

    Science.gov (United States)

    Kim, Myoungsoo

    2010-04-01

    The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001). No significant differences were found for organizational culture and intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.

  10. Residents' Ratings of Their Clinical Supervision and Their Self-Reported Medical Errors: Analysis of Data From 2009.

    Science.gov (United States)

    Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid

    2018-04-01

    Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than data from a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.

  11. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    Science.gov (United States)

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors increase patient mortality and create challenges such as prolonged hospital stays and increased costs. Controlling medical errors is very important because, besides being costly, these errors threaten patient safety. To evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data were collected using the revised Goldstone (2001) questionnaire. SPSS 11.5 software was used for data analysis. Descriptive statistics (mean, standard deviation and relative frequency distribution) were calculated and the results were presented as tables and charts; the chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was related to employees with three to four years of work experience, while the lowest was related to those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were considered: illegible physician prescription orders, similarity of names of different drugs and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives are illegible physician orders, drug name similarity with other drugs, nurse fatigue and damaged labels or packaging of the drug, respectively. Head nurse feedback, peer

  12. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
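
    A minimal sketch of the kind of statistical argument described above: if the effective set-up/hold boundary of each gate fluctuates with a Gaussian spread, the error rate of an N-stage shift register can be estimated from the per-gate violation probability. The sigma value and margins below are illustrative only, not the measured parameters of the paper.

```python
# Sketch: shift-register error rate vs timing margin, assuming a Gaussian fluctuation of the
# effective set-up/hold boundary of each gate (sigma and margins are illustrative values only).
import numpy as np
from scipy.stats import norm

def shift_register_error_rate(margin_ps, sigma_ps, n_stages=1_000_000):
    p_gate = norm.sf(margin_ps / sigma_ps)            # per-gate probability of a timing violation
    # probability that at least one of n_stages gates fails (log1p/expm1 keep tiny numbers exact)
    return -np.expm1(n_stages * np.log1p(-p_gate))

for margin in (4.0, 5.0, 6.0, 7.0):                   # timing margin in ps
    err = shift_register_error_rate(margin, sigma_ps=1.0)
    print(f"margin {margin:.0f} ps -> error rate {err:.3e}")
```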

  13. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  14. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    International Nuclear Information System (INIS)

    Chau, H.F.

    2002-01-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≅ 27.6%, thereby making it the most error-resistant scheme known to date.

  15. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms are modified in an interim analysis. It is thereby assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only an increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  16. Considering the role of time budgets on copy-error rates in material culture traditions: an experimental assessment.

    Science.gov (United States)

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.

  17. Finding the right coverage : The impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates

    NARCIS (Netherlands)

    Fountain, Emily D.; Pauli, Jonathan N.; Reid, Brendan N.; Palsboll, Per J.; Peery, M. Zachariah

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.

  18. Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl

    2007-01-01

    The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback, using a cross-layer approach, over the satellite link is also simulated. The ne...

  19. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms when discovering SNPs. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms were filtered to minimize 'bycatch'-polymorphisms caused by sequencing or assembly error. An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand-bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.

  20. Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models

    DEFF Research Database (Denmark)

    Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl

    focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...

  1. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza

    2015-01-07

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].
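
    A Monte-Carlo sanity check of this kind of result can be sketched as follows. The assumptions here are mine, not the paper's: the generalized-K-type fading power is generated as the product of two unit-mean gamma variates, the Laplacian noise has i.i.d. Laplacian real and imaginary parts, and all parameter values are illustrative.

```python
# Monte-Carlo sketch: SER of M-PSK with a minimum-distance detector in Laplacian noise over a
# generalized-K-type fading channel. Assumptions (mine): fading power = product of two unit-mean
# gamma variates, noise has i.i.d. Laplacian real/imaginary parts, parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def mpsk_ser(M, snr_db, m_small=2.0, m_large=4.0, n_sym=200_000):
    snr = 10 ** (snr_db / 10)
    sym_idx = rng.integers(0, M, n_sym)
    s = np.exp(1j * 2 * np.pi * sym_idx / M)                  # unit-energy M-PSK symbols
    g = rng.gamma(m_small, 1 / m_small, n_sym) * rng.gamma(m_large, 1 / m_large, n_sym)
    h = np.sqrt(g)                                            # real fading amplitude, unit mean power
    b = np.sqrt(1.0 / snr) / 2.0                              # Laplace scale: variance per component = N0/2
    noise = rng.laplace(0, b, n_sym) + 1j * rng.laplace(0, b, n_sym)
    r = h * s + noise
    det = np.round(np.angle(r / h) * M / (2 * np.pi)).astype(int) % M   # nearest phase = min distance
    return np.mean(det != sym_idx)

for snr_db in (0, 5, 10, 15, 20):
    print(f"8-PSK, SNR {snr_db:2d} dB: SER ~ {mpsk_ser(8, snr_db):.4f}")
```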

  2. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza

    2014-06-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.

  3. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2014-01-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.

  4. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].

  5. Determination of corrosion rate of reinforcement with a modulated guard ring electrode; analysis of errors due to lateral current distribution

    International Nuclear Information System (INIS)

    Wojtas, H.

    2004-01-01

    The main source of errors in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of the corrosion rate may lead to serious errors when test conditions change. When high rebar corrosion activity and/or local corrosion occur, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.

  6. Accelerated tests for the soft error rate determination of single radiation particles in components of terrestrial and avionic electronic systems

    International Nuclear Information System (INIS)

    Flament, O.; Baggio, J.

    2010-01-01

    This paper describes the main features of the accelerated test procedures used to determine reliability data of microelectronic devices used in the terrestrial environment. This paper focuses on the high-energy particle tests that can be performed with a spallation neutron source or quasi-mono-energetic neutrons or protons. Improvements of standards are illustrated with respect to the state of the art of knowledge in radiation effects and the scaling down of microelectronics technologies. (authors)

  7. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  8. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  9. Controlling type I error rate for fast track drug development programmes.

    Science.gov (United States)

    Shih, Weichung J; Ouyang, Peter; Quan, Hui; Lin, Yong; Michiels, Bart; Bijnens, Luc

    2003-03-15

    The U.S. Food and Drug Administration (FDA) Modernization Act of 1997 has a Section (No. 112) entitled 'Expediting Study and Approval of Fast Track Drugs' (the Act). In 1998, the FDA issued a 'Guidance for Industry: the Fast Track Drug Development Programs' (the FTDD programmes) to meet the requirement of the Act. The purpose of FTDD programmes is to 'facilitate the development and expedite the review of new drugs that are intended to treat serious or life-threatening conditions and that demonstrate the potential to address unmet medical needs'. Since then many health products have reached patients who suffered from AIDS, cancer, osteoporosis, and many other diseases, sooner by utilizing the Fast Track Act and the FTDD programmes. In the meantime several scientific issues have also surfaced when following the FTDD programmes. In this paper we will discuss the concept of two kinds of type I errors, namely, the 'conditional approval' and the 'final approval' type I errors, and propose statistical methods for controlling them in a new drug submission process. Copyright 2003 John Wiley & Sons, Ltd.

  10. Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite

    Directory of Open Access Journals (Sweden)

    Wahyu Pamungkas

    2010-04-01

    Full Text Available One problem causing a reduction of received energy in a satellite communication system is the misalignment of the earth station antenna pointing towards the satellite. Pointing error affects the quality of the information signal and the bit energy received at the earth station. In this research, the pointing error occurred only at the receiver (Rx) antenna, while the transmitter (Tx) antenna was pointed precisely at the satellite. The research was conducted towards two satellites, namely TELKOM-1 and TELKOM-2. First, measurements were made by pointing the Tx antenna precisely at the satellite, producing an antenna pattern displayed on a spectrum analyzer. The spectrum analyzer output was plotted to the correct scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Drifting away from precise pointing affected the received link budget, as indicated by the antenna pattern. The antenna pattern shows the reduction in received power level caused by pointing misalignment. In conclusion, increasing pointing misalignment towards the satellite reduces the received-signal link budget parameters of the downlink traffic.
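
    To make the link between pointing offset and BER concrete, the sketch below uses two commonly quoted approximations, a pointing loss of about 12·(θ/θ3dB)² dB and a half-power beamwidth of about 70·λ/D degrees, together with BPSK over AWGN as a stand-in modulation; the dish size, frequency and nominal Eb/N0 are illustrative, not those of the TELKOM links studied.

```python
# Sketch: pointing-offset loss and its effect on BER, using the common approximations
# L_point ~ 12*(theta/theta_3dB)^2 dB and theta_3dB ~ 70*lambda/D degrees, with BPSK over AWGN
# as a stand-in modulation. Dish size, frequency and nominal Eb/N0 are illustrative values.
import numpy as np
from scipy.stats import norm

c = 3e8
freq_hz, dish_diameter_m = 4e9, 3.0                   # C-band downlink, 3 m dish (assumed)
theta_3db_deg = 70 * (c / freq_hz) / dish_diameter_m  # half-power beamwidth approximation

def bpsk_ber(ebn0_db):
    return norm.sf(np.sqrt(2 * 10 ** (ebn0_db / 10)))

ebn0_nominal_db = 9.0                                 # link budget with perfect pointing (assumed)
for offset_deg in (0.0, 0.25, 0.5, 0.75, 1.0):
    loss_db = 12.0 * (offset_deg / theta_3db_deg) ** 2
    print(f"offset {offset_deg:.2f} deg: pointing loss {loss_db:.2f} dB, "
          f"BER ~ {bpsk_ber(ebn0_nominal_db - loss_db):.2e}")
```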

  11. Dose optimization of intra-operative high dose rate interstitial brachytherapy implants for soft tissue sarcoma

    Directory of Open Access Journals (Sweden)

    Jamema Swamidas

    2009-01-01

    Full Text Available Objective: A three-dimensional (3D) image-based dosimetric study to quantitatively compare geometric vs. dose-point optimization in combination with graphical optimization for interstitial brachytherapy of soft tissue sarcoma (STS). Materials and Methods: Fifteen consecutive STS patients, treated with intra-operative, interstitial brachytherapy, were enrolled in this dosimetric study. Treatment plans were generated using dose points situated at the "central plane between the catheters", "between the catheters throughout the implanted volume", at "distances perpendicular to the implant axis" and "on the surface of the target volume". Geometrically optimized plans had dose points defined between the catheters, while dose-point optimized plans had dose points defined at a plane perpendicular to the implant axis and on the target surface. Each plan was graphically optimized and compared using dose volume indices. Results: Target coverage was suboptimal, with a coverage index (CI) of 0.67, when dose points were defined at the central plane, while it was superior when the dose points were defined at the target surface (CI = 0.93). The coverage of graphically optimized plans (GrO) was similar to non-GrO plans with dose points defined on the surface or perpendicular to the implant axis. A similar pattern was noticed with the conformity index (0.61 vs. 0.82). GrO were more conformal and less homogeneous compared to non-GrO. The sum index was superior for dose points defined on the surface of the target and relatively inferior for plans with dose points at other locations (1.35 vs. 1.27). Conclusions: Optimization with dose points defined away from the implant plane and on the target results in superior target coverage with optimal values of other indices. GrO offer better target coverage for implants with non-uniform geometry and target volume.

  12. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  13. A rate-jump method for characterization of soft tissues using nanoindentation techniques

    KAUST Repository

    Tang, Bin; Ngan, Alfonso H W

    2012-01-01

    routinely used to analyze experimental data. In this article, a novel rate-jump protocol for treating viscoelasticity in nanomechanical data analysis is described. © 2012 The Royal Society of Chemistry.

  14. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Berioli Matteo

    2007-01-01

    Full Text Available The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins, in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.

  15. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Matteo Berioli

    2007-05-01

    Full Text Available The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins, in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
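
    The Markov-chain ModCod model mentioned above can be illustrated with a few lines of code: given a transition matrix between ModCods, the stationary distribution yields the long-run ModCod occupancy and hence an average spectral efficiency. The 3-state matrix and efficiency values below are invented for illustration and are not taken from the paper.

```python
# Sketch: a Markov-chain model of ModCod transitions. The 3-state transition matrix and the
# spectral efficiencies are invented for illustration; they are not values from the paper.
import numpy as np

P = np.array([[0.90, 0.10, 0.00],     # e.g. QPSK 1/2  -> stay / step up / step up twice
              [0.05, 0.90, 0.05],     # e.g. QPSK 3/4
              [0.00, 0.10, 0.90]])    # e.g. 8PSK 2/3
eff = np.array([1.0, 1.5, 2.0])       # illustrative spectral efficiencies (bit/s/Hz)

# stationary distribution: left eigenvector of P associated with eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

print("stationary ModCod occupancy :", np.round(pi, 3))
print("mean spectral efficiency    :", round(float(pi @ eff), 3), "bit/s/Hz")
```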

  16. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

    Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.

  17. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    Science.gov (United States)

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency (for example, in car seat clinics or during prototype user testing) to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation

  18. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan; Park, Kihong; Alouini, Mohamed-Slim; Aï ssa, Sonia

    2014-01-01

    The ever growing demand of higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network

  19. Relationship of Employee Attitudes and Supervisor-Controller Ratio to En Route Operational Error Rates

    National Research Council Canada - National Science Library

    Broach, Dana

    2002-01-01

    ...; Rodgers, Mogford, Mogford, 1998). In this study, the relationship of organizational factors to en route OE rates was investigated, based on an adaptation of the Human Factors Analysis and Classification System (HFACS; Shappell & Wiegmann 2000...

  20. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  1. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons: dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
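
    The flavour of the problem can be seen in a small simulation: regressing an outcome on a binomially observed proportion attenuates the slope, and a simple method-of-moments (reliability-ratio) correction recovers it. This is an illustrative correction of my own, not the SIMEX or beta-binomial regression calibration methods developed in the paper.

```python
# Sketch: slope attenuation from binomial measurement error in a proportion predictor, and a
# method-of-moments (reliability-ratio) correction. Illustrative only; not the paper's SIMEX or
# beta-binomial regression calibration methods.
import numpy as np

rng = np.random.default_rng(42)
n_obs = 2000
reads = rng.integers(5, 40, n_obs)                  # per-observation sequencing depth (varies)
p_true = rng.beta(2, 5, n_obs)                      # true methylation proportions
w = rng.binomial(reads, p_true) / reads             # observed proportions (binomial error)
age = 20 + 40 * p_true + rng.normal(0, 3, n_obs)    # outcome depends on the TRUE proportion

slope_naive = np.cov(age, w)[0, 1] / np.var(w, ddof=1)
err_var = np.mean(w * (1 - w) / (reads - 1))        # unbiased estimate of E[p(1-p)/n]
reliability = (np.var(w, ddof=1) - err_var) / np.var(w, ddof=1)
print(f"naive slope    : {slope_naive:.1f}")
print(f"corrected slope: {slope_naive / reliability:.1f}   (true value 40)")
```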

  2. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

    In this letter, a new detector is proposed for amplify-and-forward (AF) relaying systems when communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
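
    A sketch of density estimation with a generalized Gaussian kernel, K(u) ∝ exp(-|u/α|^β), which reduces to a Gaussian kernel for β = 2 and approaches a uniform kernel as β grows. The bandwidth and shape values below are illustrative choices, not the optimal window width derived in the letter.

```python
# Sketch: kernel density estimation with a generalized Gaussian kernel K(u) ~ exp(-|u/alpha|^beta).
# Bandwidth h and shape beta are illustrative, not the optimal values derived in the letter.
import numpy as np
from scipy.special import gamma as gamma_fn

def gg_kernel(u, beta=2.0, alpha=1.0):
    norm_const = beta / (2 * alpha * gamma_fn(1.0 / beta))   # makes the kernel integrate to 1
    return norm_const * np.exp(-np.abs(u / alpha) ** beta)

def kde(x_eval, samples, h, beta=2.0):
    """Estimate a 1-D pdf at the points x_eval from samples, with bandwidth h."""
    u = (x_eval[:, None] - samples[None, :]) / h
    return gg_kernel(u, beta).mean(axis=1) / h

rng = np.random.default_rng(3)
samples = rng.normal(0, 1, 500)                  # stand-in for soft statistics at the detector input
grid = np.linspace(-4, 4, 9)
for beta in (1.0, 2.0, 8.0):                     # Laplacian-like, Gaussian, near-uniform kernels
    est = kde(grid, samples, h=0.3, beta=beta)
    print(f"beta={beta}: estimated pdf(0) ~ {est[len(grid) // 2]:.3f}  (true N(0,1) pdf(0) = 0.399)")
```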

  3. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    Science.gov (United States)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    The iris recognition system is one of the most secure and fastest means of identification and authentication. However, iris recognition suffers from blurring, low contrast and poor illumination due to low-quality images, which compromises the accuracy of the system. The acceptance or rejection rate of a verified user depends solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore, this paper adopts the Histogram Equalization Technique to address the problem of the False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. The histogram equalization technique enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the Histogram Equalization Technique reduces the FRR and FAR compared to existing techniques.
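
    Histogram equalization itself is straightforward to implement; the sketch below applies it to a synthetic low-contrast "iris strip" using plain numpy (an OpenCV call such as cv2.equalizeHist would give the same result for 8-bit images). The image dimensions and grey-level statistics are stand-ins, not real iris data.

```python
# Sketch: histogram equalization of an 8-bit image with plain numpy (cv2.equalizeHist would give
# the same result). The synthetic low-contrast "iris strip" is a stand-in for real iris data.
import numpy as np

def histogram_equalize(img_u8):
    """Map 8-bit grey levels through the normalized cumulative histogram."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255).astype(np.uint8)
    return lut[img_u8]

rng = np.random.default_rng(0)
iris_strip = rng.normal(90, 12, (64, 512)).clip(0, 255).astype(np.uint8)   # low-contrast stand-in
enhanced = histogram_equalize(iris_strip)
print("before: min/max =", iris_strip.min(), iris_strip.max())
print("after : min/max =", enhanced.min(), enhanced.max())
```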

  4. Data-driven soft sensor design with multiple-rate sampled data

    DEFF Research Database (Denmark)

    Lin, Bao; Recke, Bodil; Knudsen, Jørgen K.H.

    2007-01-01

    Multi-rate systems are common in industrial processes where quality measurements have a slower sampling rate than other process variables. Since inter-sample information is desirable for effective quality control, different approaches have been reported to estimate the quality between samples, including numerical interpolation, polynomial transformation, data lifting and weighted partial least squares (WPLS). Two modifications to the original data lifting approach are proposed in this paper: reformulating the extraction of a fast model as an optimization problem and ensuring the desired model properties through Tikhonov Regularization. A comparative investigation of the four approaches is performed in this paper. Their applicability, accuracy and robustness to process noise are evaluated on a single-input single-output (SISO) system. The regularized data lifting and WPLS approaches ...
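
    As a minimal illustration of the regularization idea, the sketch below fits a Tikhonov-regularized (ridge) soft sensor on the slow-rate quality samples and evaluates it at the fast rate. The synthetic data and the regularization weight are illustrative; the paper's lifted and WPLS formulations are not reproduced here.

```python
# Sketch: a Tikhonov-regularized (ridge) soft sensor fitted on slow-rate quality samples and
# evaluated at the fast rate. Synthetic data and the regularization weight are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_fast, ratio = 600, 10                        # process variables every step, quality every 10th step
X = rng.normal(size=(n_fast, 4))               # fast-rate process measurements
true_w = np.array([1.5, -0.8, 0.3, 0.0])
y = X @ true_w + rng.normal(0, 0.2, n_fast)    # quality variable, observed only at the slow rate

slow_idx = np.arange(0, n_fast, ratio)
Xs, ys = X[slow_idx], y[slow_idx]

lam = 1.0                                      # Tikhonov weight (would be tuned in practice)
w_hat = np.linalg.solve(Xs.T @ Xs + lam * np.eye(X.shape[1]), Xs.T @ ys)

rmse = np.sqrt(np.mean((X @ w_hat - y) ** 2))  # inter-sample (fast-rate) prediction error
print("estimated coefficients:", np.round(w_hat, 2))
print("fast-rate RMSE        :", round(float(rmse), 3))
```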

  5. Error associated with model predictions of wildland fire rate of spread

    Science.gov (United States)

    Miguel G. Cruz; Martin E. Alexander

    2015-01-01

    How well can we expect to predict the spread rate of wildfires and prescribed fires? The degree of accuracy in model predictions of wildland fire behaviour characteristics is dependent on the model's applicability to a given situation, the validity of the model's relationships, and the reliability of the model input data (Alexander and Cruz 2013b). We...

  6. Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo

    2009-01-01

    We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER...

  7. Discrete polyphase matched filtering-based soft timing estimation for mobile wireless systems

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    Full Text Available of the communication system is conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests on bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft...

  8. Reproducibility of the pink esthetic score--rating soft tissue esthetics around single-implant restorations with regard to dental observer specialization.

    Science.gov (United States)

    Gehrke, Peter; Lobert, Markus; Dhom, Günter

    2008-01-01

    The pink esthetic score (PES) evaluates the esthetic outcome of soft tissue around implant-supported single crowns in the anterior zone by scoring seven variables: the mesial and distal papillae, soft-tissue level, soft-tissue contour, soft-tissue color, soft-tissue texture, and alveolar process deficiency. The aim of this study was to measure the reproducibility of the PES and assess the influence exerted by the examiner's degree of dental specialization. Fifteen examiners (three general dentists, three oral maxillofacial surgeons, three orthodontists, three postgraduate students in implant dentistry, and three lay people) applied the PES to 30 implant-supported single restorations twice at an interval of 4 weeks. Using a 0-1-2 scoring system, 0 being the lowest and 2 being the highest value, the maximum achievable PES was 14. At the second assessment, the photographs were scored in reverse order. Differences between the two assessments were evaluated with Spearman's rank correlation coefficient (R). The Wilcoxon signed-rank test was used for comparisons of differences between the ratings. A significance level of p < .05 was applied; esthetic restorations showed the smallest deviations. Orthodontists were found to have assigned significantly poorer ratings than any other group. The assessments of postgraduate students and laypersons were the most favorable. The PES allows for a more objective appraisal of the esthetic short- and long-term results of various surgical and prosthetic implant procedures. It reproducibly evaluates the peri-implant soft tissue around single-implant restorations and results in good intra-examiner agreement. However, an effect of observer specialization on rating soft-tissue esthetics can be shown.

  9. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS to wireless clients is demonstrated.

  10. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    Science.gov (United States)

    1987-09-30

    … of the AC current, including the time dependence at a growing DME, at a given fixed potential, either in the presence or the absence of an … The relative error in k_ob(app) is relatively small for k_ob(true) ≤ 0.5 cm s⁻¹, and increases rapidly for larger rate constants as k_ob reaches the …

  11. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle.

    Science.gov (United States)

    Starmer, Amy J; Sectish, Theodore C; Simon, Dennis W; Keohane, Carol; McSweeney, Maireade E; Chung, Erica Y; Yoon, Catherine S; Lipsitz, Stuart R; Wassner, Ari J; Harper, Marvin B; Landrigan, Christopher P

    2013-12-04

    Handoff miscommunications are a leading cause of medical errors. Studies comprehensively assessing handoff improvement programs are lacking. To determine whether introduction of a multifaceted handoff program was associated with reduced rates of medical errors and preventable adverse events, fewer omissions of key data in written handoffs, improved verbal handoffs, and changes in resident-physician workflow. Prospective intervention study of 1255 patient admissions (642 before and 613 after the intervention) involving 84 resident physicians (42 before and 42 after the intervention) from July-September 2009 and November 2009-January 2010 on 2 inpatient units at Boston Children's Hospital. Resident handoff bundle, consisting of standardized communication and handoff training, a verbal mnemonic, and a new team handoff structure. On one unit, a computerized handoff tool linked to the electronic medical record was introduced. The primary outcomes were the rates of medical errors and preventable adverse events measured by daily systematic surveillance. The secondary outcomes were omissions in the printed handoff document and resident time-motion activity. Medical errors decreased from 33.8 per 100 admissions (95% CI, 27.3-40.3) to 18.3 per 100 admissions (95% CI, 14.7-21.9; P < .001), and preventable adverse events decreased from 3.3 per 100 admissions (95% CI, 1.7-4.8) to 1.5 (95% CI, 0.51-2.4) per 100 admissions (P = .04) following the intervention. There were fewer omissions of key handoff elements on printed handoff documents, especially on the unit that received the computerized handoff tool (significant reductions of omissions in 11 of 14 categories with computerized tool; significant reductions in 2 of 14 categories without computerized tool). Physicians spent a greater percentage of time in a 24-hour period at the patient bedside after the intervention (10.6%; 95% CI, 9.2%-12.2%) than before (8.3%; 95% CI, 7.1%-9.8%; P = .03). The average duration of verbal

  12. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.
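
    The internal-consistency figures quoted above (coefficient alpha for blocks of Likert-type items) follow directly from the standard item-variance formula. Below is a minimal sketch of that computation on simulated data; the item count and respondent count mirror the record, but the scores themselves are made up for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]                           # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the respondents' sum scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated 7-item Likert block for 78 respondents (illustrative only, not the study data).
rng = np.random.default_rng(0)
latent = rng.normal(size=(78, 1))                # shared "communication skill" factor
scores = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(78, 7))), 1, 5)
print(f"coefficient alpha = {cronbach_alpha(scores):.2f}")
```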

  13. Soft Skills in Higher Education: Importance and Improvement Ratings as a Function of Individual Differences and Academic Performance

    Science.gov (United States)

    Chamorro-Premuzic, Tomas; Arteche, Adriane; Bremner, Andrew J.; Greven, Corina; Furnham, Adrian

    2010-01-01

    Three UK studies on the relationship between a purpose-built instrument to assess the importance and development of 15 "soft skills" are reported. "Study 1" (N = 444) identified strong latent components underlying these soft skills, such that differences "between-skills" were over-shadowed by differences…

  14. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    Science.gov (United States)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

    A country has several important parameters to achieve economic prosperity, such as tax revenue and inflation rate. One of the largest revenues of the State Budget in Indonesia comes from the tax sector. Meanwhile, the rate of inflation occurring in a country can be used as an indicator to measure the good and bad economic problems faced by the country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship of tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the inflation rate models for Banda Aceh overall, health, and education. The VECM models with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two structural IRF analyses were formed to look at the relationship of tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
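
    The workflow sketched in this abstract (choose a lag order, fit a VECM at a given significance level, then inspect impulse-response functions) can be illustrated with statsmodels. The two synthetic series below merely stand in for the Banda Aceh tax-revenue and inflation data, which are not reproduced here, and the lag/rank selection settings are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank, select_order

# Synthetic stand-in for monthly tax revenue and inflation sharing one stochastic trend.
rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(size=120))
data = pd.DataFrame({
    "tax_revenue": trend + rng.normal(scale=0.5, size=120),
    "inflation": 0.8 * trend + rng.normal(scale=0.5, size=120),
})

lag = select_order(data, maxlags=6, deterministic="ci").aic               # lag order by AIC
rank = select_coint_rank(data, det_order=0, k_ar_diff=lag, signif=0.05).rank
rank = max(rank, 1)                                                       # guard for this small sketch
vecm_res = VECM(data, k_ar_diff=lag, coint_rank=rank, deterministic="ci").fit()
print(vecm_res.summary())
vecm_res.irf(12).plot()    # impulse-response functions over a 12-month horizon
```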

  15. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
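
    The volatility proxy described here, the conditional variance of a GARCH(1,1) model, follows the recursion sigma^2_t = omega + alpha * eps^2_{t-1} + beta * sigma^2_{t-1}. Below is a minimal sketch of that recursion on simulated returns; the parameter values are illustrative, not estimates from the Turkish exchange-rate series.

```python
import numpy as np

def garch11_variance(returns: np.ndarray, omega: float, alpha: float, beta: float) -> np.ndarray:
    """Conditional variance path of a GARCH(1,1) model for a demeaned return series."""
    eps = returns - returns.mean()
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()                                   # start at the unconditional variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Simulated monthly exchange-rate returns for Feb 2001 - Jan 2010 (108 months); parameters are illustrative.
rng = np.random.default_rng(2)
returns = rng.normal(scale=0.02, size=108)
volatility_proxy = np.sqrt(garch11_variance(returns, omega=1e-5, alpha=0.10, beta=0.85))
print(volatility_proxy[:5])
```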

  16. Error rate on the director's task is influenced by the need to take another's perspective but not the type of perspective.

    Science.gov (United States)

    Legg, Edward W; Olivier, Laure; Samuel, Steven; Lurz, Robert; Clayton, Nicola S

    2017-08-01

    Adults are prone to responding erroneously to another's instructions based on what they themselves see and not what the other person sees. Previous studies have indicated that in instruction-following tasks participants make more errors when required to infer another's perspective than when following a rule. These inference-induced errors may occur because the inference process itself is error-prone or because they are a side effect of the inference process. Crucially, if the inference process is error-prone, then higher error rates should be found when the perspective to be inferred is more complex. Here, we found that participants were no more error-prone when they had to judge how an item appeared (Level 2 perspective-taking) than when they had to judge whether an item could or could not be seen (Level 1 perspective-taking). However, participants were more error-prone in the perspective-taking variants of the task than in a version that only required them to follow a rule. These results suggest that having to represent another's perspective induces errors when following their instructions but that error rates are not directly linked to errors in inferring another's perspective.

  17. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    Science.gov (United States)

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  18. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
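
    The core of this experiment, regressing a skewed trait on a rare variant under the null and counting how often p falls below the nominal level, is easy to reproduce in miniature. The sketch below uses simulated genotypes and a gamma-distributed trait with illustrative sample sizes, not the GAW 19 data.

```python
import numpy as np
from scipy import stats

def empirical_type1(maf: float, n: int = 2000, n_sim: int = 2000, alpha: float = 0.05) -> float:
    """Empirical type I error of simple linear regression of a gamma trait on an SNV with the given MAF."""
    rng = np.random.default_rng(3)
    pvals = []
    for _ in range(n_sim):
        genotype = rng.binomial(2, maf, size=n)              # additive 0/1/2 coding
        trait = rng.gamma(shape=1.0, scale=1.0, size=n)      # skewed trait, independent of genotype (true null)
        if genotype.std() == 0:                              # skip monomorphic draws
            continue
        pvals.append(stats.linregress(genotype, trait).pvalue)
    return float(np.mean(np.array(pvals) < alpha))

for maf in (0.20, 0.05, 0.01, 0.005):
    print(f"MAF = {maf:>5}: empirical type I error ≈ {empirical_type1(maf):.3f}")
```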

  19. Reply: Birnbaum's (2012) statistical tests of independence have unknown Type-I error rates and do not replicate within participant

    Directory of Open Access Journals (Sweden)

    Yun-shil Cha

    2013-01-01

    Full Text Available Birnbaum (2011, 2012) questioned the iid (independent and identically distributed) sampling assumptions used by state-of-the-art statistical tests in Regenwetter, Dana and Davis-Stober's (2010, 2011) analysis of the ``linear order model''. Birnbaum (2012) cited, but did not use, a test of iid by Smith and Batchelder (2008) with analytically known properties. Instead, he created two new test statistics with unknown sampling distributions. Our rebuttal has five components: (1) We demonstrate that the Regenwetter et al. data pass Smith and Batchelder's test of iid with flying colors. (2) We provide evidence from Monte Carlo simulations that Birnbaum's (2012) proposed tests have unknown Type-I error rates, which depend on the actual choice probabilities and on how data are coded as well as on the null hypothesis of iid sampling. (3) Birnbaum analyzed only a third of Regenwetter et al.'s data. We show that his two new tests fail to replicate on the other two-thirds of the data, within participants. (4) Birnbaum selectively picked data of one respondent to suggest that choice probabilities may have changed partway into the experiment. Such nonstationarity could potentially cause a seemingly good fit to be a Type-II error. We show that the linear order model fits equally well if we allow for warm-up effects. (5) Using hypothetical data, Birnbaum (2012) claimed to show that ``true-and-error'' models for binary pattern probabilities overcome the alleged shortcomings of Regenwetter et al.'s approach. We disprove this claim on the same data.

  20. Sporadic error probability due to alpha particles in dynamic memories of various technologies

    International Nuclear Information System (INIS)

    Edwards, D.G.

    1980-01-01

    The sensitivity of MOS memory components to errors induced by alpha particles is expected to increase with integration level. The soft error rate of a 65-kbit VMOS memory has been compared experimentally with that of three field-proven 16-kbit designs. The technological and design advantages of the VMOS RAM ensure an error rate which is lower than those of the 16-kbit memories. Calculation of the error probability for the 65-kbit RAM and comparison with the measurements show that for large duty cycles single particle hits lead to sensing errors and for small duty cycles cell errors caused by multiple hits predominate. (Auth.)

  1. Comparison of the effect of paper and computerized procedures on operator error rate and speed of performance

    International Nuclear Information System (INIS)

    Converse, S.A.; Perez, P.B.; Meyer, S.; Crabtree, W.

    1994-01-01

    The Computerized Procedures Manual (COPMA-II) is an advanced procedure manual that can be used to select and execute procedures, to monitor the state of plant parameters, and to help operators track their progress through plant procedures. COPMA-II was evaluated in a study that compared the speed and accuracy of operators' performance when they performed with COPMA-II and traditional paper procedures. Sixteen licensed reactor operators worked in teams of two to operate the Scales Pressurized Water Reactor Facility at North Carolina State University. Each team performed one change of power with each type of procedure to simulate performance under normal operating conditions. Teams then performed one accident scenario with COPMA-II and one with paper procedures. Error rates, performance times, and subjective estimates of workload were collected, and were evaluated for each combination of procedure type and scenario type. For the change of power task, accuracy and response time were not different for COPMA-II and paper procedures. Operators did initiate responses to both accident scenarios fastest with paper procedures. However, procedure type did not moderate response completion time for either accident scenario. For accuracy, performance with paper procedures resulted in twice as many errors as did performance with COPMA-II. Subjective measures of mental workload for the accident scenarios were not affected by procedure type

  2. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
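
    The inflation described above is straightforward to reproduce: draw two equal-sized samples from the same skewed family but with unequal spreads (and equal means, so the location null is true), then tally rejections of Student's t and the Wilcoxon-Mann-Whitney test. The distributions and sample sizes below are illustrative choices, not the article's exact simulation design.

```python
import numpy as np
from scipy import stats

def rejection_rates(n: int = 30, scale_ratio: float = 4.0, n_sim: int = 5000, alpha: float = 0.05):
    """Empirical type I error of Student's t and Wilcoxon-Mann-Whitney under skewness plus heteroscedasticity."""
    rng = np.random.default_rng(4)
    t_rej = w_rej = 0
    for _ in range(n_sim):
        # Exponential (skewed) samples shifted to a common mean of zero, with unequal spreads.
        a = rng.exponential(scale=1.0, size=n) - 1.0
        b = rng.exponential(scale=scale_ratio, size=n) - scale_ratio
        t_rej += stats.ttest_ind(a, b, equal_var=True).pvalue < alpha
        w_rej += stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
    return t_rej / n_sim, w_rej / n_sim

t_rate, w_rate = rejection_rates()
print(f"Student t: {t_rate:.3f}   Wilcoxon-Mann-Whitney: {w_rate:.3f}   (nominal level 0.05)")
```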

  3. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities.

    Science.gov (United States)

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-03-01

    Sound is among the significant environmental factors affecting people's health; it plays an important role in both physical and psychological injury, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on performance and the rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females), in which each person served as his or her own control, to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors (physical features and different sound-source conditions) and the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated-measures analyses were used to compare the duration of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between sound level and the duration of performance. Moreover, the participants' performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on the individuals' performance, i.e., performance decreased.

  4. Effect of soft drinks on proximal plaque pH at normal and low salivary secretion rates.

    Science.gov (United States)

    Johansson, Ann-Katrin; Lingström, Peter; Birkhed, Dowen

    2007-11-01

    The aim of this study was to investigate the effect of different types of drinks on plaque pH during normal and drug-induced low salivary secretion rates. Three drinks were tested in 10 healthy adult subjects: 1) Coca-Cola regular, 2) Coca-Cola light, and 3) fresh orange juice. pH was measured in the maxillary incisor and premolar region with the microtouch method. The area under the pH curve (AUC) was calculated. During normal salivary condition, mouth-rinsing with Coca-Cola regular resulted in a slightly more pronounced drop in pH during the first few minutes than it did with orange juice. After this initial phase, both products showed similar and relatively slow pH recovery. Coca-Cola light also resulted in low pH values during the very first minutes, but thereafter in a rapid recovery back to baseline. During dry mouth conditions, the regular Cola drink showed a large initial drop in pH, and slightly more pronounced than for orange juice. After the initial phase, both products had a similar and slow recovery back to baseline. At most time-points, AUC was significantly greater in dry conditions compared to normal conditions for Coca-Cola regular and orange juice, but not for Coca-Cola light. Coca-Cola light generally showed a significantly smaller AUC than Coca-Cola regular and orange juice. The main conclusion from this study is that a low salivary secretion rate may accentuate the fall in pH in dental plaque after gentle mouth-rinsing with soft drinks.
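
    The AUC summary mentioned above is normally obtained by integrating the dip of the pH curve below a chosen reference level over time (trapezoidal rule). The readings and the pH 5.7 reference in the sketch below are invented for illustration; they are not the study's measurements.

```python
import numpy as np

def auc_below(times_min: np.ndarray, ph: np.ndarray, reference: float) -> float:
    """Area (pH x min) between a reference pH and the plaque-pH curve, counting only dips below it."""
    dip = np.clip(reference - ph, 0.0, None)     # only the portion below the reference contributes
    return float(np.trapz(dip, times_min))

# Invented readings after a rinse (minutes vs. interproximal plaque pH), for illustration only.
t = np.array([0, 2, 5, 10, 15, 20, 30], dtype=float)
ph = np.array([7.0, 5.2, 5.5, 5.9, 6.3, 6.7, 7.0])
print(f"AUC below pH 5.7: {auc_below(t, ph, reference=5.7):.2f} pH*min")
```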

  5. Low-dose-rate intraoperative brachytherapy combined with external beam irradiation in the conservative treatment of soft tissue sarcoma

    International Nuclear Information System (INIS)

    Delannes, M.; Thomas, L.; Martel, P.; Bonnevialle, P.; Stoeckle, E.; Chevreau, Ch.; Bui, B.N.; Daly-Schveitzer, N.; Pigneux, J.; Kantor, G.

    2000-01-01

    Purpose: Conservative treatment of soft tissue sarcomas most often implies a combination of surgical resection and irradiation. The aim of this study was to evaluate low-dose-rate intraoperative brachytherapy, delivered as a boost, in the local control of primary tumors, with special concern about treatment complications. Methods and Materials: Between 1986 and 1995, 112 patients underwent an intraoperative implant. This report focuses on the group of 58 patients with primary sarcomas treated by combination of conservative surgery, intraoperative brachytherapy, and external irradiation. Most of the tumors were located in the lower limbs (46/58--79%). Median size of the tumor was 10 cm, most of the lesions being T2-T3 (51/58--88%), Grade 2 or 3 (48/58--83%). The mean brachytherapy dose was 20 Gy and external beam irradiation dose 45 Gy. In 36/58 cases, iridium wires had to be placed in contact with neurovascular structures. Results: With a median follow-up of 54 months, the 5-year actuarial survival was 64.9%, with a 5-year actuarial local control of 89%. Of the 6 patients with local relapse, 3 were salvaged. Acute side effects, essentially wound healing problems, occurred in 20/58 patients, late side effects in 16/58 patients (7 neuropathies G2 to G4). No amputation was required. The only significant factor correlated with early side effects was the location of the tumor in the lower limb (p = 0.003), and with late side effects the vicinity of the tumor with neurovascular structures (p = 0.009). Conclusion: Brachytherapy allows early delivery of a boost dose in a reduced volume of tissue, precisely mapped by the intraoperative procedure. Combined with external beam irradiation, it is a safe and efficient treatment technique leading to high local control rates and limited functional impairment

  6. Bit Error Rate Performance Analysis of a Threshold-Based Generalized Selection Combining Scheme in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    Kousa Maan

    2005-01-01

    Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines the best subset of branches out of the available diversity resources. In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performances of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
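
    The scheme described, retaining only those branches whose instantaneous SNR exceeds a threshold and combining them, can be checked quickly by Monte Carlo for coherent BPSK. The sketch below assumes i.i.d. Nakagami-m branches (per-branch SNR is then Gamma distributed), maximal-ratio combining of the retained branches, and a fallback to the best single branch when none exceeds the threshold; those conventions, and all the numbers, are illustrative rather than the paper's exact model.

```python
import numpy as np
from scipy.stats import norm

def ber_threshold_gsc(avg_snr_db: float, m: float, n_branches: int,
                      thresh_db: float, n_sim: int = 200_000) -> float:
    """Monte Carlo BER of BPSK with threshold-based combining over i.i.d. Nakagami-m branches."""
    rng = np.random.default_rng(5)
    avg_snr = 10 ** (avg_snr_db / 10)
    thresh = 10 ** (thresh_db / 10)
    snr = rng.gamma(shape=m, scale=avg_snr / m, size=(n_sim, n_branches))  # per-branch instantaneous SNR
    combined = np.where(snr >= thresh, snr, 0.0).sum(axis=1)               # MRC over branches above threshold
    combined = np.where(combined > 0, combined, snr.max(axis=1))           # fallback: best branch if none qualify
    return float(norm.sf(np.sqrt(2.0 * combined)).mean())                  # BPSK: BER = Q(sqrt(2*SNR))

print(f"BER ≈ {ber_threshold_gsc(avg_snr_db=10, m=2, n_branches=4, thresh_db=5):.2e}")
```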

  7. Limb sparing surgery and boost with high dose rate interstitial brachytherapy in treatment of soft tissue sarcoma

    International Nuclear Information System (INIS)

    Koike, P.; Miziara, M.; Soares, C.; Fogaroli, R.; Baraldi, H.; Pellizoni, A.; Borba, G.

    2003-01-01

    Soft tissue sarcoma, a rare and highly aggressive neoplasia, accounts for approximately 0.7% of malignant tumors and occurs more often in younger patients. Because of the potential risk of local recurrence, the classical approach is surgical resection encompassing the macroscopic tumor with a margin of macroscopically noninvolved tissue, that is, a wide 'en bloc' resection or amputation, with poor functional results. The aim of conservative treatment is combined-modality therapy, surgical resection plus irradiation, to obtain a local control rate as high as possible while preserving functional results. A retrospective review was performed of 31 patients treated with high-dose-rate (HDR) brachytherapy in the Radiotherapy Department of the Arnaldo Vieira de Carvalho Cancer Institute. Methods: Between April 1995 and August 1999, 31 patients underwent combined therapy; the results of conservative surgery and brachytherapy, followed or not by external beam radiation therapy (EBRT), were examined on multivariate analysis. The 31 patients treated, 17 (54.8%) females and 14 (45.2%) males, had a median age of 48 years (range, 19 to 77 years). Most of the tumors were located in the lower limb (17/31, 54.8%). The other sites were the upper limb (10/31, 32.3%) and the thoracic wall and abdomen (3/31, 9.7%). According to the International Union Against Cancer (UICC) staging classification, 5 patients (16%) had T1 tumors and 24 (77%) had T2 tumors. Median size of the tumors was 9.2 cm (range, 2.5 to 24 cm). Most of the tumors were malignant fibrous histiocytomas (9/31, 29%) and histological grade II (14/31, 45%). Twenty-two (71%) patients had intraoperative implants, and the insertion of the radioactive source was delayed 24 to 120 hours. Eight patients (25.8%) had postoperative implants and received HDR brachytherapy 45 to 60 days after surgery. Guide needles were placed in the tumor bed, perpendicular to the scar, systematically in a single plane, the implant volume being defined by the radiotherapist. A minimum safety margin of 2 cm

  8. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  9. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Science.gov (United States)

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  10. The Differences in Error Rate and Type between IELTS Writing Bands and Their Impact on Academic Workload

    Science.gov (United States)

    Müller, Amanda

    2015-01-01

    This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided from the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206 errors, 96 errors and 35 errors per 1000 words. The following section…

  11. Preoperative Radiotherapy and Wide Resection for Soft Tissue Sarcomas: Achieving a Low Rate of Major Wound Complications with the Use of Flaps. Results of a Single Surgical Team.

    Science.gov (United States)

    Chan, Lester Wai Mon; Imanishi, Jungo; Grinsell, Damien Glen; Choong, Peter

    2017-01-01

    Surgery in combination with radiotherapy (RT) has become the standard of care for most soft tissue sarcomas. The choice between pre- and postoperative RT is controversial. Preoperative RT is associated with a 32-35% rate of major wound complications (MWC) and 16-25% rate of reoperation. The role of vascularized soft tissue "flaps" in reducing complications is unclear. We report the outcomes of patients treated with preoperative RT, resection, and flap reconstruction. 122 treatment episodes involving 117 patients were retrospectively reviewed. All patients were treated with 50.4 Gy of external beam radiation. Surgery was performed at 4-8 weeks after completion of RT by the same combination of orthopedic oncology and plastic reconstructive surgeon. Defects were reconstructed with 64 free and 59 pedicled/local flaps. 30 (25%) patients experienced a MWC and 17 (14%) required further surgery. 20% of complications were exclusively related to the donor site. There was complete or partial loss of three flaps. There was no difference in the rate of MWC or reoperation for complications with respect to age, sex, tumor site, previous unplanned excision, tumor grade, depth, and type of flap. Tumor size ≥8 cm was associated with a higher rate of reoperation (11/44 vs 6/78; P  = 0.008) but the rate of MWC was not significant (16/44 vs 14/78; P  = 0.066). The use of soft tissue flaps is associated with a low rate of MWC and reoperation. Our results suggest that a high rate of flap usage may be required to observe a reduction in complication rates.

  12. Preoperative Radiotherapy and Wide Resection for Soft Tissue Sarcomas: Achieving a Low Rate of Major Wound Complications with the Use of Flaps. Results of a Single Surgical Team

    Directory of Open Access Journals (Sweden)

    Lester Wai Mon Chan

    2018-01-01

    Full Text Available Background: Surgery in combination with radiotherapy (RT) has become the standard of care for most soft tissue sarcomas. The choice between pre- and postoperative RT is controversial. Preoperative RT is associated with a 32–35% rate of major wound complications (MWC) and a 16–25% rate of reoperation. The role of vascularized soft tissue “flaps” in reducing complications is unclear. We report the outcomes of patients treated with preoperative RT, resection, and flap reconstruction. Patients and methods: 122 treatment episodes involving 117 patients were retrospectively reviewed. All patients were treated with 50.4 Gy of external beam radiation. Surgery was performed at 4–8 weeks after completion of RT by the same combination of orthopedic oncology and plastic reconstructive surgeon. Defects were reconstructed with 64 free and 59 pedicled/local flaps. Results: 30 (25%) patients experienced a MWC and 17 (14%) required further surgery. 20% of complications were exclusively related to the donor site. There was complete or partial loss of three flaps. There was no difference in the rate of MWC or reoperation for complications with respect to age, sex, tumor site, previous unplanned excision, tumor grade, depth, and type of flap. Tumor size ≥8 cm was associated with a higher rate of reoperation (11/44 vs 6/78; P = 0.008) but the rate of MWC was not significant (16/44 vs 14/78; P = 0.066). Conclusion: The use of soft tissue flaps is associated with a low rate of MWC and reoperation. Our results suggest that a high rate of flap usage may be required to observe a reduction in complication rates.

  13. Glass interface effect on high-strain-rate tensile response of a soft polyurethane elastomeric polymer material

    NARCIS (Netherlands)

    Fan, J.T.; Weerheijm, J.; Sluys, L.J.

    2015-01-01

    The glass interface effect on the dynamic tensile response of a soft polyurethane elastomeric polymer material has been investigated by subjecting a glass-polymer system, consisting of the polymer matrix with a single embedded 3 mm diameter glass particle, to impact loading in a split Hopkinson tension bar

  14. Soft X-ray sources and their optical counterparts in the error box of the COS-B source 2CG 135+01

    Energy Technology Data Exchange (ETDEWEB)

    Caraveo, P A; Bignami, G F [Consiglio Nazionale delle Ricerche, Milan (Italy). Lab. di Fisica Cosmica e Tecnologie Relative]; Paul, J A [CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France). Section d'Astrophysique]; Marano, B [Bologna Univ. (Italy). Ist. di Astronomia]; Vettolani, G P [Consiglio Nazionale delle Ricerche, Bologna (Italy). Lab. di Radioastronomia]

    1981-01-01

    We shall present here the Einstein observations for the 2CG 135+01 region, where the results are complete in the sense that we have a satisfactory coverage of the COS-B error box and, more important, that all the IPC sources found have been identified, through both HRI and optical observations. In particular, the new spectral classifications of the present work were obtained at the Loiano Observatory (Bologna, Italy) with the Boller and Chivens spectrograph at the Cassegrain focus of the 1.52 m telescope. The spectral dispersion is 80 Å/mm.

  15. Comparison of responses of thermoluminescent dosemeters irradiated by soft x-rays at very low and very high dose rate levels

    International Nuclear Information System (INIS)

    Pietrikova-Farnikova, M.; Krasa, J.; Juha, L.

    1994-01-01

    Recent great progress in the construction and application of bright sources of soft X-rays has given a strong impetus to the development of methods for their dosimetric diagnostics. The soft X-ray sources are primarily represented by synchrotron radiation sources and by sources based on laser-produced plasma, including X-ray lasers. Their characteristics spread over a very wide region of photon energies, peak and average powers, and densities. From our preliminary experiments it follows that thermoluminescent dosemeters can serve as a suitable tool for the determination of these characteristics. The problem lies in the fact that routine use of thermoluminescent dosemeters for the dosimetry of soft X-rays requires their spectral calibration, which can be carried out with low peak power sources (synchrotron radiation and radionuclide sources). On the contrary, many important sources, especially those based on laser-produced plasmas, exhibit a very high peak power, i.e. dosemeters are irradiated at an extremely high dose rate. In comparative experiments carried out with laser-produced plasmas and radionuclides using TLD 200 (CaF2:Dy) and GR 200A (LiF:Mg,Cu,P), it was satisfactorily proven that the total thermoluminescent signals are independent of the dose rate. The dependence of glow-curve shapes on dose, dose rate and photon energy was also determined.

  16. Pyrosequencing as a tool for the detection of Phytophthora species: error rate and risk of false Molecular Operational Taxonomic Units.

    Science.gov (United States)

    Vettraino, A M; Bonants, P; Tomassini, A; Bruni, N; Vannini, A

    2012-11-01

    To evaluate the accuracy of pyrosequencing for the description of Phytophthora communities in terms of taxa identification and risk of assignment of false Molecular Operational Taxonomic Units (MOTUs). Pyrosequencing of Internal Transcribed Spacer 1 (ITS1) amplicons was used to describe the structure of a DNA mixture comprising eight Phytophthora spp. and Pythium vexans. Pyrosequencing resulted in 16 965 reads, detecting all species in the template DNA mixture. Reducing the ITS1 sequence identity threshold resulted in a decrease in the number of unmatched reads but a concomitant increase in the number of false MOTUs. The total error rate was 0.63% and comprised mainly mismatches (0.25%). Pyrosequencing of the ITS1 region is an efficient and accurate technique for the detection and identification of Phytophthora spp. in environmental samples. However, the risk of allocating false MOTUs, even when demonstrated to be low, may require additional validation with alternative detection methods. Phytophthora spp. are considered among the most destructive groups of invasive plant pathogens, affecting thousands of cultivated and wild plants worldwide. Simultaneous early detection of Phytophthora complexes in environmental samples offers a unique opportunity for the interception of known and unknown species along pathways of introduction, along with the identification of these organisms in invaded environments. © 2012 The Authors Letters in Applied Microbiology © 2012 The Society for Applied Microbiology.

  17. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.

  18. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
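
    The inflation reported above can be reproduced with a short simulation: draw two groups from the same skewed, sum-score-like distribution (so the null is true), drop observations with |Z| > 2 within each group, and then run the independent-samples t test. The score distribution and sample size below are illustrative stand-ins for the article's "short and difficult test" scenario.

```python
import numpy as np
from scipy import stats

def type1_after_outlier_removal(n: int = 40, z_cut: float = 2.0,
                                n_sim: int = 5000, alpha: float = 0.05) -> float:
    """Empirical type I error of an independent-samples t test after per-group |Z| > z_cut removal."""
    rng = np.random.default_rng(6)
    rejections = 0
    for _ in range(n_sim):
        # Skewed "sum scores" on a short, difficult test: both groups from the same distribution (true null).
        a = rng.binomial(n=8, p=0.15, size=n).astype(float)
        b = rng.binomial(n=8, p=0.15, size=n).astype(float)
        a = a[np.abs(stats.zscore(a)) <= z_cut]              # common (problematic) outlier-removal practice
        b = b[np.abs(stats.zscore(b)) <= z_cut]
        rejections += stats.ttest_ind(a, b).pvalue < alpha
    return rejections / n_sim

print(f"Type I error after |Z| > 2 removal: {type1_after_outlier_removal():.3f} (nominal 0.05)")
```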

  19. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    Science.gov (United States)

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions

  20. The benefits of soft sensor and multi-rate control for the implementation of Wireless Networked Control Systems.

    Science.gov (United States)

    Mansano, Raul K; Godoy, Eduardo P; Porto, Arthur J V

    2014-12-18

    Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are the reconfigurability, easy commissioning and the possibility of installation in places where cabling is impossible. Despite these advantages, there are two main problems which must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of the wireless transmission, since the power supply is limited, which precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCS is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is generating virtual data allowing an actuation on the process faster than the maximum sampling period available for the wireless sensor. Experimental results have shown the soft sensor is a solution to the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor potential for implementing energy efficient WNCS through the battery saving of industrial wireless sensors.
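
    The central idea, a process model that produces virtual measurements between the slow wireless samples so the controller can act at a faster rate, can be sketched with a simple first-order model. Everything below (the model, the 10:1 rate ratio, the proportional gain) is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch of a model-based soft sensor for multi-rate control: a first-order
# model runs at the fast control rate and is re-synchronised whenever a (slow)
# wireless measurement arrives.
class SoftSensor:
    def __init__(self, a: float, b: float, x0: float = 0.0):
        self.a, self.b = a, b          # discrete model: x[k+1] = a*x[k] + b*u[k]
        self.x_hat = x0                # current estimate (the "virtual" measurement)

    def predict(self, u: float) -> float:
        """Advance the model one fast-rate step and return the virtual measurement."""
        self.x_hat = self.a * self.x_hat + self.b * u
        return self.x_hat

    def correct(self, y_meas: float, gain: float = 1.0) -> None:
        """Re-synchronise with a real wireless sample when it arrives (gain=1 resets to it)."""
        self.x_hat += gain * (y_meas - self.x_hat)

# True plant (with deliberate model mismatch) controlled at 10x the wireless sampling rate.
sensor, true_x, u, setpoint = SoftSensor(a=0.95, b=0.05), 0.0, 0.0, 1.0
for k in range(200):
    true_x = 0.93 * true_x + 0.05 * u          # plant dynamics, slightly different from the model
    sensor.predict(u)                          # virtual measurement at every fast step
    if k % 10 == 0:                            # a real wireless sample arrives only every 10th step
        sensor.correct(y_meas=true_x)
    u = 2.0 * (setpoint - sensor.x_hat)        # proportional control acting on the (virtual) estimate
print(f"plant output: {true_x:.3f}   soft-sensor estimate: {sensor.x_hat:.3f}")
```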

  1. The Benefits of Soft Sensor and Multi-Rate Control for the Implementation of Wireless Networked Control Systems

    Directory of Open Access Journals (Sweden)

    Raul K. Mansano

    2014-12-01

    Full Text Available Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are the reconfigurability, easy commissioning and the possibility of installation in places where cabling is impossible. Despite these advantages, there are two main problems which must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of the wireless transmission, since the power supply is limited, which precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCS is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is generating virtual data allowing an actuation on the process faster than the maximum sampling period available for the wireless sensor. Experimental results have shown the soft sensor is a solution to the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor potential for implementing energy efficient WNCS through the battery saving of industrial wireless sensors.

  2. Local Control Rates of Metastatic Renal Cell Carcinoma (RCC) to Thoracic, Abdominal, and Soft Tissue Lesions Using Stereotactic Body Radiotherapy (SBRT)

    International Nuclear Information System (INIS)

    Altoos, Basel; Amini, Arya; Yacoub, Muthanna; Bourlon, Maria T.; Kessler, Elizabeth E.; Flaig, Thomas W.; Fisher, Christine M.; Kavanagh, Brian D.; Lam, Elaine T.; Karam, Sana D.

    2015-01-01

    We report the radiographic response rate of SBRT compared to conventional fractionated radiotherapy (CF-EBRT) for thoracic, abdominal, skin and soft tissue RCC lesions treated at our institution. Fifty-three lesions were included in the study (36 SBRT, 17 CF-EBRT), treated from 2004 to 2014 at our institution. We included patients who had thoracic, skin & soft tissue (SST), and abdominal metastases of histologically confirmed RCC. The most common SBRT fractionation was 50 Gy in 5 fractions. The median time of follow-up was 16 months (range 3–97 months). Median BED was 216.67 (range 66.67–460.0) for SBRT, and 60 (range 46.67–100.83) for CF-EBRT. Median radiographic local control rates at 12, 24, and 36 months were 100%, 93.41%, and 93.41% for lesions treated with SBRT versus 62.02%, 35.27%, and 35.27% for those treated with CF-EBRT (p < 0.001). Predictive factors for radiographic local control under univariate analysis included BED ≥ 100 Gy (HR, 0.048; 95% CI, 0.006–0.382; p = 0.005), dose per fraction ≥ 9 Gy (HR, 0.631; 95% CI, 0.429–0.931; p = 0.021), and gender (HR, 0.254; 95% CI, 0.066–0.978; p = 0.048). Under multivariate analysis, there were no significant predictors for local control. Toxicity rates were low and equivalent in both groups, with no grade 4 or 5 side effects reported. SBRT is safe and effective for the treatment of RCC metastases to thoracic, abdominal and integumentary soft tissues. Radiographic response rates were greater and more durable using SBRT compared to CF-EBRT. Further prospective trials are needed to evaluate efficacy and safety of SBRT for RCC metastases
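
    The BED figures quoted (median 216.67 for the typical 50 Gy in 5 fractions SBRT schedule, 60 for the conventional arm) are consistent with the linear-quadratic biologically effective dose, BED = n*d*(1 + d/(alpha/beta)), evaluated with alpha/beta = 3 Gy; that alpha/beta value and the 30 Gy in 10 fractions conventional schedule used below are inferences from the reported numbers, not statements in the record.

```python
def bed(n_fractions: int, dose_per_fraction_gy: float, alpha_beta_gy: float) -> float:
    """Biologically effective dose (Gy) under the linear-quadratic model."""
    return n_fractions * dose_per_fraction_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

# 50 Gy in 5 fractions (the record's most common SBRT schedule), alpha/beta = 3 Gy (inferred):
print(f"SBRT    BED = {bed(5, 10.0, 3.0):.2f} Gy")   # 216.67 Gy, matching the reported median
# 30 Gy in 10 fractions, a common conventional schedule assumed here for comparison:
print(f"CF-EBRT BED = {bed(10, 3.0, 3.0):.2f} Gy")   # 60.00 Gy, matching the reported median
```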

  3. Bit-error-rate performance analysis of self-heterodyne detected radio-over-fiber links using phase and intensity modulation

    DEFF Research Database (Denmark)

    Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso

    2010-01-01

    We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature-biased intensity modulation (IM), in terms of bit error rate (BER) and optical signal-to-noise ratio (OSNR). In both links, self...

  4. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have

  5. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations

    NARCIS (Netherlands)

    Derks, E. M.; Zwinderman, A. H.; Gamazon, E. R.

    2017-01-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates;

  6. High energy hadron-induced errors in memory chips

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, R.J. [University of Colorado, Boulder, CO (United States)

    2001-09-01

    We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurement of alpha particle yields from pions on aluminum, as a surrogate for silicon, indicates that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ_SEU = induced errors / (number of sample bits × particles/cm²). We compare σ_SEU to find relations among results for these beams, and relations to reaction cross sections in order to systematize effects. We have modelled cosmic ray effects upon the components we have studied. (Author)

  7. High energy hadron-induced errors in memory chips

    International Nuclear Information System (INIS)

    Peterson, R.J.

    2001-01-01

    We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurement of alpha particle yields from pions on aluminum, as a surrogate for silicon, indicates that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ_SEU = (induced errors)/(number of sample bits × particles/cm²). We compare σ_SEU to find relations among results for these beams, and relations to reaction cross sections in order to systematize effects. We have modelled cosmic ray effects upon the components we have studied. (Author)
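
    The cross-section definition quoted in the two records above lends itself to a quick calculation. The Python sketch below computes σ_SEU from an assumed exposure and then scales it to a sea-level soft-error rate using a nominal neutron flux; all numbers are hypothetical illustrations, not data from the records, and the sketch shows the arithmetic only, not the systematics studied in the papers.

      bits = 64 * 2**20            # one hypothetical 64 Mb dRAM under test
      fluence = 5.0e10             # particles/cm^2 delivered by the accelerator beam (assumed)
      upsets = 320                 # soft errors observed during the exposure (assumed)

      # sigma_SEU = induced errors / (number of sample bits x particles/cm^2)
      sigma_seu = upsets / (bits * fluence)          # cm^2 per bit
      print(f"sigma_SEU = {sigma_seu:.3e} cm^2/bit")

      # Scale to a terrestrial soft-error rate with an assumed sea-level neutron flux
      # of ~13 n/(cm^2 h); 1 FIT = 1 upset per 10^9 device-hours.
      flux = 13.0                                    # particles/(cm^2 h), nominal assumption
      upsets_per_hour = sigma_seu * bits * flux
      print(f"estimated SER ~ {upsets_per_hour * 1e9:.1f} FIT per device")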

  8. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit an increase in sample size. The main result states that for normally distributed observations raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
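
    For readers who want to see the "promise" criterion in numbers, the short sketch below evaluates the textbook current-trend conditional power for a one-sided z-test at an interim look. It is only an illustration of the 50% bound discussed above, not a reimplementation of the criterion derived in the paper; the interim values are made up.

      from scipy.stats import norm

      def conditional_power(z_interim, info_frac, alpha=0.025):
          # Current-trend conditional power: probability of crossing z_alpha at the end,
          # given the interim z-statistic and the fraction of planned information observed.
          z_alpha = norm.ppf(1 - alpha)
          return norm.cdf((z_interim / info_frac**0.5 - z_alpha) / (1 - info_frac) ** 0.5)

      print(conditional_power(z_interim=1.5, info_frac=0.5))   # ~0.59: above 50%, "promising"
      print(conditional_power(z_interim=1.2, info_frac=0.5))   # ~0.36: below the classical bound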

  9. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  10. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  11. Resident Physicians' Clinical Training and Error Rate: The Roles of Autonomy, Consultation, and Familiarity with the Literature

    Science.gov (United States)

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-01-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores…

  12. Closed-Loop Analysis of Soft Decisions for Serial Links

    Science.gov (United States)

    Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.

  13. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    Energy Technology Data Exchange (ETDEWEB)

    Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  14. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames sequentially to a destination. IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput under error-free conditions becomes. However, a large data frame could reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and error rate, the appropriate ranges of the burst transmission parameters could be narrowed down, and the necessary buffer size for storing transmit or received data temporarily could be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and error rate. The calculated throughput values agree well with the measured ones for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters outside the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
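
    A minimal model of the trade-off described above can be written in a few lines. The sketch below is illustrative only: the frame format, timing constants, and bit-independence assumption are ours and do not reproduce the paper's algorithm. It estimates effective throughput from the frame size, the number of burst frames, and a bit error rate.

      def burst_throughput(payload_bits, overhead_bits, n_burst, ber,
                           data_rate=54e6, t_overhead=200e-6):
          # A frame succeeds only if all of its bits arrive error-free (independent errors);
          # one block ACK plus fixed protocol overhead t_overhead is charged per burst.
          frame_bits = payload_bits + overhead_bits
          p_frame_ok = (1.0 - ber) ** frame_bits
          t_burst = n_burst * frame_bits / data_rate + t_overhead
          return n_burst * payload_bits * p_frame_ok / t_burst

      for size in (512, 2048, 8192):          # payload sizes in bits (assumed)
          goodput = burst_throughput(size, 288, n_burst=8, ber=1e-5)
          print(size, f"{goodput / 1e6:.2f} Mb/s")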

  15. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(d^n-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
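
    The qualitative difference between coherent and stochastic (Pauli) errors can already be seen at the single-qubit level. The toy comparison below is not the repetition-code calculation in the record; it only contrasts n coherent over-rotations, which compose into a single rotation by n·ε, with n independent Pauli flips of the matching per-step probability, using an assumed per-step angle.

      import numpy as np

      eps = 0.01                      # coherent over-rotation angle per step (assumed)
      p = np.sin(eps / 2) ** 2        # equivalent Pauli-X probability per step

      for n in (1, 10, 100):
          p_coherent = np.sin(n * eps / 2) ** 2       # n coherent rotations compose to R_x(n*eps)
          p_pauli = 0.5 * (1 - (1 - 2 * p) ** n)      # n independent X flips with probability p
          print(n, f"coherent {p_coherent:.2e}  Pauli model {p_pauli:.2e}")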

  16. Does an Algorithmic Approach to Using Brachytherapy and External Beam Radiation Result in Good Function, Local Control Rates, and Low Morbidity in Patients With Extremity Soft Tissue Sarcoma?

    Science.gov (United States)

    Klein, Jason; Ghasem, Alex; Huntley, Samuel; Donaldson, Nathan; Keisch, Martin; Conway, Sheila

    2018-03-01

    High-dose-rate brachytherapy (HDR-BT) and external-beam radiation therapy (EBRT) are two modalities used in the treatment of soft tissue sarcoma. Previous work at our institution showed early complications and outcomes for patients treated with HDR-BT, EBRT, or a combination of both radiation therapy modalities. As the general indications for each of these approaches to radiation therapy differ, it is important to evaluate the use of each in an algorithmic way, reflecting how they are used in contemporary practice at sites that use these treatments. QUESTION/PURPOSES: (1) To determine the proportions of intermediate- and long-term complications associated with the use of brachytherapy in the treatment of primary high-grade extremity soft tissue sarcomas; (2) to characterize the long-term morbidity of the three radiation treatment groups using the Radiation Therapy Oncology Group/European Organization for Research and Treatment of Cancer (RTOG/EORTC) Late Radiation Morbidity Scoring Scheme; (3) to determine whether treatment with HDR-BT, EBRT, and HDR-BT+EBRT therapy, in combination with limb-salvage surgery, results in acceptable local control in this high-risk group of sarcomas. We retrospectively studied data from 171 patients with a diagnosis of high-grade extremity soft tissue sarcoma treated with limb-sparing surgery and radiation therapy between 1990 and 2012 at our institution, with a mean followup of 72 months. Of the 171 patients, 33 (20%) were treated with HDR-BT, 128 (75%) with EBRT, and 10 (6%) with HDR-BT+EBRT. We excluded 265 patients with soft tissue sarcomas owing to axial tumor location, previous radiation to the affected extremity, incomplete patient records, patients receiving primary amputation, recurrent tumors, pediatric patients, low- and intermediate-grade tumors, and rhabdoid histology. Fifteen patients (9%) were lost to followup for any reason, including death from disease or other causes, during the first 12 months postoperatively. This

  17. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    Science.gov (United States)

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by the density of connections, the proportion of reciprocal relationships (reciprocity), the number of colleagues to whom each person provided advice (in-degree), and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher density and reciprocity, and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81/admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks. Strategies to improve the advice-giving networks between senior
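
    The network measures named above (density, reciprocity, in-degree) are straightforward to compute from a directed adjacency matrix. The sketch below uses a made-up four-person advice network purely to illustrate the definitions; it is not the study's data.

      import numpy as np

      # a[i, j] = 1 if person i seeks medication advice from person j (toy example)
      a = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [1, 0, 0, 0],
                    [1, 1, 1, 0]])
      n = a.shape[0]

      density = a.sum() / (n * (n - 1))          # proportion of possible ties that are present
      reciprocity = (a * a.T).sum() / a.sum()    # share of ties that are reciprocated
      in_degree = a.sum(axis=0)                  # number of people each person provides advice to
      print(f"density = {density:.2f}, reciprocity = {reciprocity:.2f}, "
            f"in-degree = {in_degree.tolist()}")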

  18. Soft leptogenesis

    International Nuclear Information System (INIS)

    D'Ambrosio, Giancarlo; Giudice, Gian F.; Raidal, Martti

    2003-01-01

    We study 'soft leptogenesis', a new mechanism of leptogenesis which does not require flavour mixing among the right-handed neutrinos. Supersymmetry soft-breaking terms give a small mass splitting between the CP-even and CP-odd right-handed sneutrino states of a single generation and provide a CP-violating phase sufficient to generate a lepton asymmetry. The mechanism is successful if the lepton-violating soft bilinear coupling is unconventionally (but not unnaturally) small. The values of the right-handed neutrino masses predicted by soft leptogenesis can be low enough to evade the cosmological gravitino problem

  19. Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments

    KAUST Repository

    Soury, Hamza

    2013-07-01

    This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.

  20. Effect of a health system's medical error disclosure program on gastroenterology-related claims rates and costs.

    Science.gov (United States)

    Adams, Megan A; Elmunzer, B Joseph; Scheiman, James M

    2014-04-01

    In 2001, the University of Michigan Health System (UMHS) implemented a novel medical error disclosure program. This study analyzes the effect of this program on gastroenterology (GI)-related claims and costs. This was a review of claims in the UMHS Risk Management Database (1990-2010), naming a gastroenterologist. Claims were classified according to pre-determined categories. Claims data, including incident date, date of resolution, and total liability dollars, were reviewed. Mean total liability incurred per claim in the pre- and post-implementation eras was compared. Patient encounter data from the Division of Gastroenterology was also reviewed in order to benchmark claims data with changes in clinical volume. There were 238,911 GI encounters in the pre-implementation era and 411,944 in the post-implementation era. A total of 66 encounters resulted in claims: 38 in the pre-implementation era and 28 in the post-implementation era. Of the total number of claims, 15.2% alleged delay in diagnosis/misdiagnosis, 42.4% related to a procedure, and 42.4% involved improper management, treatment, or monitoring. The reduction in the proportion of encounters resulting in claims was statistically significant (P=0.001), as was the reduction in time to claim resolution (1,000 vs. 460 days) (P<0.0001). There was also a reduction in the mean total liability per claim ($167,309 pre vs. $81,107 post, 95% confidence interval: 33682.5-300936.2 pre vs. 1687.8-160526.7 post). Implementation of a novel medical error disclosure program, promoting transparency and quality improvement, not only decreased the number of GI-related claims per patient encounter, but also dramatically shortened the time to claim resolution.

  1. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    Science.gov (United States)

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) is often computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of errors were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each mL/min of Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result due to inappropriate use of a single-pool urea kinetic model, particularly when Kt/V < 1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
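
    Taking the single sensitivity figure quoted above at face value, the underestimate from omitting residual clearance can be extrapolated linearly. The sketch below does only that: it is not a urea kinetic model, and the example inputs are hypothetical.

      def pcrn_underestimate_pct(kru_ml_min, v_liters):
          # ~5.6% underestimate per (mL/min of Kru per 35 L of urea distribution volume V),
          # as reported in the abstract, extrapolated linearly for illustration.
          return 5.6 * kru_ml_min * (35.0 / v_liters)

      print(pcrn_underestimate_pct(kru_ml_min=2.0, v_liters=35.0))   # 11.2 % for Kru = 2 mL/min, V = 35 L
      print(pcrn_underestimate_pct(kru_ml_min=1.5, v_liters=40.0))   # ~7.4 % for Kru = 1.5 mL/min, V = 40 L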

  2. Reliability of perceived neighbourhood conditions and the effects of measurement error on self-rated health across urban and rural neighbourhoods.

    Science.gov (United States)

    Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario

    2012-04-01

    Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitude of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.

  3. Application of Fermat's Principle to Calculation of the Errors of Acoustic Flow-Rate Measurements for a Three-Dimensional Fluid Flow or Gas

    Science.gov (United States)

    Petrov, A. G.; Shkundin, S. Z.

    2018-01-01

    Fermat's variational principle is used to derive the formula for the time of propagation of a sonic signal between two set points A and B in a steady three-dimensional flow of a fluid or gas. It is shown that the fluid flow changes the time of signal reception by a value proportional to the flow rate, independently of the velocity profile. The time difference in the reception of the signals from point B to point A and vice versa is, to a high accuracy, proportional to the flow rate. It is shown that the relative error of the formula does not exceed the square of the largest Mach number. This makes it possible to measure the flow rate of a fluid or gas with an arbitrary steady subsonic velocity field.
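
    The two claims above (a transit-time difference proportional to the flow, with relative error bounded by the squared Mach number) are easy to verify numerically for a uniform flow along the acoustic path. The sketch below uses assumed values for the path length, sound speed, and flow velocity.

      L = 0.5          # acoustic path length between points A and B, m (assumed)
      c = 343.0        # speed of sound, m/s
      v = 5.0          # mean flow velocity along the path, m/s (Mach ~ 0.015)

      t_down = L / (c + v)              # signal travelling with the flow
      t_up = L / (c - v)                # signal travelling against the flow
      dt_exact = t_up - t_down
      dt_approx = 2 * L * v / c**2      # first-order expression, proportional to flow velocity

      mach = v / c
      rel_error = abs(dt_exact - dt_approx) / dt_exact
      print(f"dt = {dt_exact * 1e6:.3f} us, approx = {dt_approx * 1e6:.3f} us, "
            f"relative error = {rel_error:.2e} (Mach^2 = {mach**2:.2e})")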

  4. Why are autopsy rates low in Japan? Views of ordinary citizens and doctors in the case of unexpected patient death and medical error.

    Science.gov (United States)

    Maeda, Shoichi; Kamishiraki, Etsuko; Starkey, Jay; Ikeda, Noriaki

    2013-01-01

    This article examines what could account for the low autopsy rate in Japan based on the findings from an anonymous, self-administered, structured questionnaire that was given to a sample population of the general public and physicians in Japan. The general public and physicians indicated that autopsy may not be carried out because: (1) conducting an autopsy might result in the accusation that patient death was caused by a medical error even when there was no error (50.4% vs. 13.1%, respectively), (2) suggesting an autopsy makes the families suspicious of a medical error even when there was none (61.0% vs. 19.1%, respectively), (3) families do not want the body to be damaged by autopsy (81.6% vs. 87.3%, respectively), and (4) families do not want to make the patient suffer any more in addition to what he/she has already endured (61.8% vs. 87.1%, respectively). © 2013 American Society for Healthcare Risk Management of the American Hospital Association.

  5. An evaluation of a Low-Dose-Rate (LDR) brachytherapy procedure using a systems engineering & error analysis methodology for health care (SEABH) - (SAVE)

    LENUS (Irish Health Repository)

    Chadwick, Liam

    2012-03-12

    Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care. A number of deficiencies have been identified in the method. A new method called Systems and Error Analysis Bundle for Health Care (SEABH) was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.

  6. Choice of reference sequence and assembler for alignment of Listeria monocytogenes short-read sequence data greatly influences rates of error in SNP analyses.

    Directory of Open Access Journals (Sweden)

    Arthur W Pightling

    Full Text Available The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: (i) depth of sequencing coverage, (ii) choice of reference-guided short-read sequence assembler, (iii) choice of reference genome, and (iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers

  7. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.

  8. Increased error rates in preliminary reports issued by radiology residents working more than 10 consecutive hours overnight.

    Science.gov (United States)

    Ruutiainen, Alexander T; Durand, Daniel J; Scanlon, Mary H; Itri, Jason N

    2013-03-01

    To determine if the rate of major discrepancies between resident preliminary reports and faculty final reports increases during the final hours of consecutive 12-hour overnight call shifts. Institutional review board exemption status was obtained for this study. All overnight radiology reports interpreted by residents on-call between January 2010 and June 2010 were reviewed by board-certified faculty and categorized as major discrepancies if they contained a change in interpretation with the potential to impact patient management or outcome. Initial determination of a major discrepancy was at the discretion of individual faculty radiologists based on this general definition. Studies categorized as major discrepancies were secondarily reviewed by the residency program director (M.H.S.) to ensure consistent application of the major discrepancy designation. Multiple variables associated with each report were collected and analyzed, including the time of preliminary interpretation, time into shift study was interpreted, volume of studies interpreted during each shift, day of the week, patient location (inpatient or emergency department), block of shift (2-hour blocks for 12-hour shifts), imaging modality, patient age and gender, resident identification, and faculty identification. Univariate risk factor analysis was performed to determine the optimal data format of each variable (ie, continuous versus categorical). A multivariate logistic regression model was then constructed to account for confounding between variables and identify independent risk factors for major discrepancies. We analyzed 8062 preliminary resident reports with 79 major discrepancies (1.0%). There was a statistically significant increase in major discrepancy rate during the final 2 hours of consecutive 12-hour call shifts. Multivariate analysis confirmed that interpretation during the last 2 hours of 12-hour call shifts (odds ratio (OR) 1.94, 95% confidence interval (CI) 1.18-3.21), cross

  9. The sensitivity of bit error rate (BER) performance in multi-carrier (OFDM) and single-carrier

    Science.gov (United States)

    Albdran, Saleh; Alshammari, Ahmed; Matin, Mohammad

    2012-10-01

    Recently, single-carrier and multi-carrier transmissions have both attracted attention in industrial systems. Theoretically, OFDM, as a multi-carrier technique, has advantages over single-carrier transmission, especially at high data rates. In this paper we will show which of the two techniques outperforms the other. We will study and compare the BER performance of both techniques for a given channel. The BER will be measured and studied as a function of the signal-to-noise ratio (SNR). Also, the Peak-to-Average Power Ratio (PAPR) will be examined and presented as a drawback of using OFDM. To make a reasonable comparison between both techniques, we will use additive white Gaussian noise (AWGN) as the communication channel.
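
    Two of the quantities compared in this record can be reproduced with a few lines of NumPy: the theoretical QPSK bit error rate over AWGN (the same for a single-carrier link and for each OFDM subcarrier under ideal conditions) and the peak-to-average power ratio of one randomly loaded OFDM symbol. The subcarrier count and modulation are assumptions for illustration, and single-carrier pulse shaping is ignored.

      import numpy as np
      from scipy.special import erfc

      rng = np.random.default_rng(0)

      # Theoretical QPSK BER over AWGN: BER = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0)).
      ebn0_db = np.arange(0, 11, 2)
      ber = 0.5 * erfc(np.sqrt(10 ** (ebn0_db / 10)))
      print(dict(zip(ebn0_db.tolist(), ber.round(8).tolist())))

      # PAPR of one OFDM symbol with 64 QPSK-loaded subcarriers (the classic multi-carrier drawback).
      n_sc = 64
      symbols = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
      x = np.fft.ifft(symbols) * np.sqrt(n_sc)       # unit-average-power time-domain signal
      papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
      print(f"PAPR of this OFDM symbol: {papr_db:.1f} dB (unshaped single-carrier QPSK is ~0 dB)")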

  10. Evaluation of the effect of noise on the rate of errors and speed of work by the ergonomic test of two-hand co-ordination

    Directory of Open Access Journals (Sweden)

    Ehsanollah Habibi

    2013-01-01

    Full Text Available Background: Among the most important and effective factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting levels and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University with means (standard deviations) of age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of sound pressure levels [65 dB(A), 85 dB(A), and 95 dB(A)] and exposure times (0, 20, and 40) was used for evaluation of the precision and speed of action of the participants in the ergonomic test of two-hand coordination. Data were analyzed by SPSS18 software using descriptive and analytical statistical methods with repeated-measures analysis of covariance (ANCOVA). Results: The results of this study showed that increasing the sound pressure level from 65 to 95 dB(A) increased the speed of work (P 0.05). Male participants were more annoyed by the noise than females. Also, an increase in sound pressure level increased the rate of error (P < 0.05). Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; under exposure to sounds of less than 85 dB, the efficiency decreased initially and then increased with a mild slope.

  11. Soft Robotics.

    Science.gov (United States)

    Whitesides, George M

    2018-04-09

    This description of "soft robotics" is not intended to be a conventional review, in the sense of a comprehensive technical summary of a developing field. Rather, its objective is to describe soft robotics as a new field-one that offers opportunities to chemists and materials scientists who like to make "things" and to work with macroscopic objects that move and exert force. It will give one (personal) view of what soft actuators and robots are, and how this class of soft devices fits into the more highly developed field of conventional "hard" robotics. It will also suggest how and why soft robotics is more than simply a minor technical "tweak" on hard robotics and propose a unique role for chemistry, and materials science, in this field. Soft robotics is, at its core, intellectually and technologically different from hard robotics, both because it has different objectives and uses and because it relies on the properties of materials to assume many of the roles played by sensors, actuators, and controllers in hard robotics. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Soft lubrication

    Science.gov (United States)

    Skotheim, Jan; Mahadevan, Laksminarayanan

    2004-11-01

    We study the lubrication of fluid-immersed soft interfaces and show that elastic deformation couples tangential and normal forces and thus generates lift. We consider materials that deform easily, due to either geometry (e.g. a shell) or constitutive properties (e.g. a gel or a rubber), so that the effects of pressure and temperature on the fluid properties may be neglected. Four different system geometries are considered: a rigid cylinder moving tangentially to a soft layer coating a rigid substrate; a soft cylinder moving tangentially to a rigid substrate; a cylindrical shell moving tangentially to a rigid substrate; and finally a journal bearing coated with a thin soft layer, which being a conforming contact allows us to gauge the influence of contact geometry. In addition, for the particular case of a soft layer coating a rigid substrate we consider both elastic and poroelastic material responses. Finally, we consider the role of contact geometry in the context of the journal bearing, a conforming contact. For all these cases we find the same generic behavior: there is an optimal combination of geometric and material parameters that maximizes the dimensionless normal force as a function of the softness.

  13. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza

    2017-03-14

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have a non-line-of-sight (NLOS) interfering link. Consequently, we study the interferer-limited problem that appears between NLOS HD user pairs that are scheduled on the same FD channel. The distribution of the interference is first characterized via its distribution function, which is derived in closed form. Then, a comprehensive performance assessment for the proposed pairing scheme is provided by assuming Extended Generalized-K (EGK) fading for the downlink and studying different modulation schemes. To this end, a unified closed-form expression for the average symbol error rate is derived. Furthermore, we show the effective downlink throughput gain harvested by the pairing of NLOS users as a function of the average signal-to-interference ratio when compared to an idealized HD scenario with neither interference nor noise. Finally, we show the minimum required channel gain pairing threshold to harvest downlink throughput via the FD operation when compared to the HD case for each modulation scheme.

  14. Improved read disturb and write error rates in voltage-control spintronics memory (VoCSM) by controlling energy barrier height

    Science.gov (United States)

    Inokuchi, T.; Yoda, H.; Kato, Y.; Shimizu, M.; Shirotori, S.; Shimomura, N.; Koi, K.; Kamiguchi, Y.; Sugiyama, H.; Oikawa, S.; Ikegami, K.; Ishikawa, M.; Altansargai, B.; Tiwari, A.; Ohsawa, Y.; Saito, Y.; Kurobe, A.

    2017-06-01

    A hybrid writing scheme that combines the spin Hall effect and voltage-controlled magnetic-anisotropy effect is investigated in Ta/CoFeB/MgO/CoFeB/Ru/CoFe/IrMn junctions. The write current and control voltage are applied to Ta and CoFeB/MgO/CoFeB junctions, respectively. The critical current density required for switching the magnetization in CoFeB was modulated 3.6-fold by changing the control voltage from -1.0 V to +1.0 V. This modulation of the write current density is explained by the change in the surface anisotropy of the free layer from 1.7 mJ/m² to 1.6 mJ/m², which is caused by the electric field applied to the junction. The read disturb rate and write error rate, which are important performance parameters for memory applications, are drastically improved, and no error was detected in 5 × 10⁸ cycles by controlling read and write sequences.

  15. Global minimum profile error (GMPE) - a least-squares-based approach for extracting macroscopic rate coefficients for complex gas-phase chemical reactions.

    Science.gov (United States)

    Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K

    2018-01-03

    Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) has been shown to be a powerful framework for modeling kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species and multiple-channel potential energy surface (PES) for a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
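
    The idea of extracting a phenomenological rate coefficient from a time-resolved species profile by least squares can be illustrated with a deliberately simple case: a single first-order decay fitted with curve_fit. The GMPE approach in the record fits multi-species, multi-channel master-equation profiles globally; the sketch below, with synthetic data and an assumed true rate, only shows the fitting step.

      import numpy as np
      from scipy.optimize import curve_fit

      # Synthetic time-resolved profile of a reactant decaying with k_true, plus a little noise.
      k_true = 2.0e3                                   # s^-1 (assumed)
      t = np.linspace(0, 2e-3, 200)
      rng = np.random.default_rng(1)
      profile = np.exp(-k_true * t) + rng.normal(0, 5e-3, t.size)

      # Recover the phenomenological rate coefficient by least squares.
      model = lambda t, k: np.exp(-k * t)
      (k_fit,), cov = curve_fit(model, t, profile, p0=[1.0e3])
      print(f"k_fit = {k_fit:.3e} s^-1 (true {k_true:.1e} s^-1)")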

  16. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  17. Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.

    Science.gov (United States)

    Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task for physiologically monitoring and controlling the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of the biomass formation, are smart alternatives. For the reconciliation procedure, mass and energy balances are used together with accuracy estimations of measured conversion rates, which have so far been arbitrarily chosen and kept static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: for industrially relevant conditions, the error of the resulting estimated biomass formation rate and specific substrate consumption rate could thereby be decreased by 43% and 64%, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the required raw signal accuracy to obtain predefined accuracies of soft-sensor estimations. Thereby, appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
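
    As a stripped-down illustration of propagating raw-signal accuracies into a rate estimate, the sketch below applies first-order (linear) error propagation to a toy oxygen-uptake-rate expression. The expression, signal values, and uncertainties are assumptions with simplified units; this is not the reconciliation framework of the paper.

      import numpy as np

      # Toy rate expression: OUR = F * (y_in - y_out) / V, with independent signal errors.
      F, y_in, y_out, V = 2.0, 0.2095, 0.2030, 10.0      # gas feed, O2 mole fractions, volume (assumed)
      sd_F, sd_y, sd_V = 0.02, 2e-4, 0.05                 # assumed standard uncertainties of each signal

      our = F * (y_in - y_out) / V
      grad = np.array([
          (y_in - y_out) / V,          # d(OUR)/dF
          F / V,                       # d(OUR)/dy_in
          -F / V,                      # d(OUR)/dy_out
          -F * (y_in - y_out) / V**2,  # d(OUR)/dV
      ])
      sd = np.array([sd_F, sd_y, sd_y, sd_V])
      sd_our = np.sqrt(np.sum((grad * sd) ** 2))          # first-order propagation, independent errors
      print(f"OUR = {our:.4e} +/- {sd_our:.1e} (relative {sd_our / our:.1%})")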

  18. Soft-contact conductive carbon enabling depolarization of LiFePO4 cathodes to enhance both capacity and rate performances of lithium ion batteries

    Science.gov (United States)

    Ren, Wenju; Wang, Kai; Yang, Jinlong; Tan, Rui; Hu, Jiangtao; Guo, Hua; Duan, Yandong; Zheng, Jiaxin; Lin, Yuan; Pan, Feng

    2016-11-01

    Conductive nanocarbons generally are used as the electronic conductive additives to contact with active materials to generate conductive network for electrodes of commercial Li-ion batteries (LIBs). A typical of LiFePO4 (LFP), which has been widely used as cathode material for LIBs with low electronic conductivity, needs higher quantity of conductive nanocarbons to enhance the performance for cathode electrodes. In this work, we systematically studied three types of conductive nanocarbons and related performances in the LFP electrodes, and classify them as hard/soft-contact conductive carbon (named as H/SCC), respectively, according to their crystallite size, surface graphite-defect, specific surface area and porous structure, in which SCC can generate much larger contact area with active nano-particles of cathode materials than that of HCC. It is found that LFP nanocrystals wrapped in SCC networks perform significantly enhanced both capacity and rate performance than that in HCC. Combined experiments with multiphysics simulation, the mechanism is that LFP nanoparticles embedded in SCC with large contact area enable to generate higher depolarized effects with a relatively uniform current density vector (is) and lithium flux vector (NLi) than that in HCC. This discovery will guide us to how to design LIBs by selective using conductive carbon for high-performance LIBs.

  19. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations, without any between-group or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results support the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
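
    The degree of sphericity violation that drives these Type I error effects is usually summarized by the Box/Greenhouse-Geisser epsilon. The sketch below computes it for two simulated subjects-by-occasions data sets, one with uncorrelated occasions and one with strong AR(1)-like correlation; it illustrates the index only and does not reproduce the simulation design of the study.

      import numpy as np

      def gg_epsilon(data):
          # Box/Greenhouse-Geisser epsilon for an (n_subjects x k_occasions) array:
          # 1 under perfect sphericity, down to 1/(k-1) in the worst case.
          k = data.shape[1]
          s = np.cov(data, rowvar=False)          # k x k covariance of the occasions
          p = np.eye(k) - np.ones((k, k)) / k     # centering projector
          m = p @ s @ p
          return np.trace(m) ** 2 / ((k - 1) * np.sum(m * m))

      rng = np.random.default_rng(2)
      spherical = rng.normal(size=(40, 6))        # i.i.d. occasions (sphericity holds in the population)
      ar1_cov = 0.9 ** np.abs(np.subtract.outer(np.arange(6), np.arange(6)))
      correlated = rng.multivariate_normal(np.zeros(6), ar1_cov, size=40)   # sphericity violated
      print(gg_epsilon(spherical), gg_epsilon(correlated))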

  20. The human error rate assessment and optimizing system HEROS - a new procedure for evaluating and optimizing the man-machine interface in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Hauptmanns, U.; Unger, H.

    2001-01-01

    A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, which is an acronym for Human Error Rate Assessment and Optimizing System, are based on fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and to modify basic human error probabilities from the associated databases. This implies a large portion of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals which allow mathematical operations to be performed on them. A model of the man-machine system is the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database, whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task of the operating personnel in a nuclear power plant.

  1. Soft Clouding

    DEFF Research Database (Denmark)

    Søndergaard, Morten; Markussen, Thomas; Wetton, Barnabas

    2012-01-01

    Soft Clouding is a blended concept, which describes the aim of a collaborative and transdisciplinary project. The concept is a metaphor implying a blend of cognitive, embodied interaction and the semantic web. Furthermore, it is a metaphor describing our attempt at curating a new semantics of sound...... archiving. The Soft Clouding Project is part of LARM - a major infrastructure combining research in and access to sound and radio archives in Denmark. In 2012 the LARM infrastructure will consist of more than 1 million hours of radio, combined with metadata that describe the content. The idea is to analyse...... the concept of ‘infrastructure’ and ‘interface’ through a creative play with the fundamentals of LARM (and any sound archive situation combining many kinds and layers of data and sources). This paper will present and discuss the Soft Clouding project from the perspective of the three practices and competencies...

  2. Bit Error Rate Performance of a MIMO-CDMA System Employing Parity-Bit-Selected Spreading in Frequency Nonselective Rayleigh Fading

    Directory of Open Access Journals (Sweden)

    Claude D'Amours

    2011-01-01

    Full Text Available We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple-input multiple-output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10log(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods and provides insight into why the different techniques provide improvement over conventional MIMO-CDMA systems.
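
    The asymptotic gain quoted above is simply 10·log10(Nt) dB, so a couple of lines make the scaling concrete for a few transmit-antenna counts (chosen here only as examples):

      import math

      for nt in (2, 4, 8):
          print(f"Nt = {nt}: asymptotic gain = {10 * math.log10(nt):.2f} dB")
      # -> 3.01 dB, 6.02 dB, 9.03 dB over conventional MIMO-CDMA with perfect channel estimates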

  3. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  4. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression for describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and, hence, on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillation. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
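
    The final step described above, evaluating the BER from the log-normal intensity PDF, can be sketched generically by Monte Carlo, assuming a unit-mean log-normal irradiance with a given scintillation index and a conditional bit error probability of Q(sqrt(SNR)·I). Both assumptions are ours for illustration; the paper's link model and beam statistics are far more detailed.

      import numpy as np
      from scipy.stats import norm

      def ber_lognormal(mean_snr_db, scint_index, n=200_000, seed=3):
          # Average BER over log-normal irradiance fading: condition on the fade I
          # (unit mean), then average an assumed conditional BER Q(sqrt(SNR)*I).
          rng = np.random.default_rng(seed)
          sigma2 = np.log(1.0 + scint_index)                   # log-irradiance variance for unit-mean I
          i = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2), size=n)
          snr = 10 ** (mean_snr_db / 10)
          return np.mean(norm.sf(np.sqrt(snr) * i))            # Q(x) = norm.sf(x)

      for si in (0.05, 0.2, 0.5):
          print(f"scintillation index {si}: BER ~ {ber_lognormal(15, si):.2e}")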

  5. Soft electronics for soft robotics

    Science.gov (United States)

    Kramer, Rebecca K.

    2015-05-01

    As advanced as modern machines are, the building blocks have changed little since the industrial revolution, leading to rigid, bulky, and complex devices. Future machines will include electromechanical systems that are soft and elastically deformable, lending them to applications such as soft robotics, wearable/implantable devices, sensory skins, and energy storage and transport systems. One key step toward the realization of soft systems is the development of stretchable electronics that remain functional even when subject to high strains. Liquid-metal traces embedded in elastic polymers present a unique opportunity to retain the function of rigid metal conductors while leveraging the deformable properties of liquid-elastomer composites. However, in order to achieve the potential benefits of liquid-metal, scalable processing and manufacturing methods must be identified.

  6. Earthquake-induced soft-sediment deformations and seismically amplified erosion rates recorded in varved sediments of Köyceğiz Lake (SW Turkey)

    KAUST Repository

    Avsar, Ulas

    2016-06-06

    Earthquake-triggered landslides amplify erosion rates in catchments, i.e. catchment response to seismic shocks (CR). In addition to historical eyewitness accounts of muddy rivers implying CRs after large earthquakes, several studies have quantitatively reported increased sediment concentrations in rivers after earthquakes. However, only a few paleolimnological studies could detect CRs within lacustrine sedimentary sequences as siliciclastic-enriched intercalations within background sedimentation. Since siliciclastic-enriched intercalations can easily be of non-seismic origin, their temporal correlation with nearby earthquakes is crucial to assign a seismic triggering mechanism. In most cases, either uncertainties in dating methods or the lack of recent seismic activity has prevented reliable temporal correlations, making the seismic origin of observed sedimentary events questionable. Here, we attempt to remove this question mark by presenting sedimentary traces of CRs in the 370-year-long varved sequence of Köyceğiz Lake (SW Turkey) that we compare with estimated peak ground acceleration (PGA) values of several nearby earthquakes. We find that earthquakes exceeding estimated PGA values of ca. 20 cm/s2 can induce soft-sediment deformations (SSD), while CRs seem only to be triggered by PGA levels higher than 70 cm/s2. In Köyceğiz Lake, CRs produce Cr- and Ni-enriched sedimentation due to the seismically mobilized soils derived from ultramafic rocks in the catchment. Given the varve chronology, the residence time of the seismically mobilized material in the catchment is determined to be 5 to 10 years.

  7. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    Science.gov (United States)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER=10-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off between requirements on transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
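
    The max-log-MAP simplification mentioned above replaces log-sum-exp operations with a plain maximum when converting symbol metrics into bit log-likelihood ratios (LLRs). The sketch below shows only that generic approximation for a single received symbol; it is not tied to the DP-QDB trellis, constellation, or metrics used in the paper, and the example numbers are invented.

        def max_log_llrs(metrics):
            # Per-bit LLRs from symbol log-metrics using the max-log approximation.
            # `metrics` maps a bit pattern (e.g. "01") to its log-domain metric
            # log p(r | symbol); every bit value must appear in at least one key.
            # Exact MAP would use log-sum-exp over each symbol set; max-log-MAP
            # keeps only the best symbol in each set.
            n_bits = len(next(iter(metrics)))
            llrs = []
            for i in range(n_bits):
                best0 = max(m for bits, m in metrics.items() if bits[i] == "0")
                best1 = max(m for bits, m in metrics.items() if bits[i] == "1")
                llrs.append(best0 - best1)   # positive LLR favours bit value 0
            return llrs

        if __name__ == "__main__":
            # Illustrative log-metrics for a 4-point constellation.
            symbol_metrics = {"00": -0.2, "01": -1.7, "10": -2.3, "11": -3.1}
            print(max_log_llrs(symbol_metrics))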

  8. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    Science.gov (United States)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics due 1) to evolution of the official algorithms used to process the data, and 2) differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  9. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  10. Earthquake-induced soft-sediment deformations and seismically amplified erosion rates recorded in varved sediments of Köyceğiz Lake (SW Turkey)

    KAUST Repository

    Avsar, Ulas; Jonsson, Sigurjon; Avşar, Ö zgü r; Schmidt, Sabine

    2016-01-01

    sequence of Köyceğiz Lake (SW Turkey) that we compare with estimated peak ground acceleration (PGA) values of several nearby earthquakes. We find that earthquakes exceeding estimated PGA values of ca. 20 cm/s2 can induce soft-sediment deformations (SSD

  11. Soft-X-Ray Projection Lithography Using a High-Repetition-Rate Laser-Induced X-Ray Source for Sub-100 Nanometer Lithography Processes

    NARCIS (Netherlands)

    E. Louis,; F. Bijkerk,; Shmaenok, L.; Voorma, H. J.; van der Wiel, M. J.; Schlatmann, R.; Verhoeven, J.; van der Drift, E. W. J. M.; Romijn, J.; Rousseeuw, B. A. C.; Voss, F.; Desor, R.; Nikolaus, B.

    1993-01-01

    In this paper we present the status of a joint development programme on soft x-ray projection lithography (SXPL) integrating work on high-brightness laser plasma sources, fabrication of multilayer x-ray mirrors, and patterning of reflection masks. We are in the process of optimization of a

  12. Soft Robotics Week

    CERN Document Server

    Rossiter, Jonathan; Iida, Fumiya; Cianchetti, Matteo; Margheri, Laura

    2017-01-01

    This book offers a comprehensive, timely snapshot of current research, technologies and applications of soft robotics. The different chapters, written by international experts across multiple fields of soft robotics, cover innovative systems and technologies for soft robot legged locomotion, soft robot manipulation, underwater soft robotics, biomimetic soft robotic platforms, plant-inspired soft robots, flying soft robots, soft robotics in surgery, as well as methods for their modeling and control. Based on the results of the second edition of the Soft Robotics Week, held on April 25 – 30, 2016, in Livorno, Italy, the book reports on the major research lines and novel technologies presented and discussed during the event.

  13. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    Science.gov (United States)

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
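
    The mechanism described here is easy to reproduce with a small simulation. The sketch below is an illustrative re-creation under simple assumptions (linear regression, one normally distributed confounder split at its median, and a truly null exposure effect); it is not the authors' 9600-run design, and all function names and parameter values are made up for the example.

        import math
        import random
        import statistics

        def residualize(values, covariate):
            # Residuals of `values` after simple linear regression on `covariate`
            # (intercept plus slope), used for Frisch-Waugh partialling.
            mx = statistics.fmean(covariate)
            my = statistics.fmean(values)
            sxx = sum((x - mx) ** 2 for x in covariate)
            sxy = sum((x - mx) * (y - my) for x, y in zip(covariate, values))
            slope = sxy / sxx
            return [y - (my + slope * (x - mx)) for x, y in zip(covariate, values)]

        def partial_t(outcome, exposure, covariate):
            # Approximate t-statistic for the exposure in a regression of the
            # outcome on exposure + covariate, via residual-on-residual correlation.
            ry = residualize(outcome, covariate)
            rx = residualize(exposure, covariate)
            ma, mb = statistics.fmean(rx), statistics.fmean(ry)
            num = sum((a - ma) * (b - mb) for a, b in zip(rx, ry))
            den = math.sqrt(sum((a - ma) ** 2 for a in rx) * sum((b - mb) ** 2 for b in ry))
            r = num / den
            n = len(outcome)
            return r * math.sqrt((n - 3) / (1.0 - r * r))

        def type1_rate(n=500, reps=500, seed=0):
            # Share of replicates where the (truly null) exposure effect is declared
            # significant after the confounder is dichotomized at its median.
            rng = random.Random(seed)
            hits = 0
            for _ in range(reps):
                z = [rng.gauss(0, 1) for _ in range(n)]              # continuous confounder
                x = [zi + rng.gauss(0, 1) for zi in z]               # exposure driven by z
                y = [zi + rng.gauss(0, 1) for zi in z]               # outcome driven by z only
                cut = statistics.median(z)
                z_cat = [1.0 if zi > cut else 0.0 for zi in z]       # median-split confounder
                if abs(partial_t(y, x, z_cat)) > 1.96:
                    hits += 1
            return hits / reps

        if __name__ == "__main__":
            print(f"Empirical Type-I error with a median-split confounder: {type1_rate():.2f}")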

  14. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

    In this paper, the performance of underwater wireless optical communication (UWOC) links, which is made up of the partially coherent flat-topped (PCFT) array laser beam, has been investigated in detail. Providing high power, array laser beams are employed to increase the range of UWOC links. For characterization of the effects of oceanic turbulence on the propagation behavior of the considered beam, using the extended Huygens-Fresnel principle, an analytical expression for cross-spectral density matrix elements and a semi-analytical one for fourth-order statistical moment have been derived. Then, based on these expressions, the on-axis scintillation index of the mentioned beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of some source factors and turbulent ocean parameters on the propagation behavior of the scintillation index and the BER have been studied in detail. The results of this investigation indicate that in comparison with the Gaussian array beam, when the source size of beamlets is larger than the first Fresnel zone, the PCFT array laser beam with the higher flatness order is found to have a lower scintillation index and hence lower BER. Specifically, in the sense of scintillation index reduction, using the PCFT array laser beams has a considerable benefit in comparison with the single PCFT or Gaussian laser beams and also Gaussian array beams. All the simulation results of this paper have been shown by graphs and they have been analyzed in detail.

  15. Ventilator-associated pneumonia: the influence of bacterial resistance, prescription errors, and de-escalation of antimicrobial therapy on mortality rates

    Directory of Open Access Journals (Sweden)

    Ana Carolina Souza-Oliveira

    2016-09-01

    Conclusion: Prescription errors influenced mortality of patients with Ventilator-associated pneumonia, underscoring the challenge of proper Ventilator-associated pneumonia treatment, which requires continuous reevaluation to ensure that clinical response to therapy meets expectations.

  16. Explaining quantitative variation in the rate of Optional Infinitive errors across languages: a comparison of MOSAIC and the Variational Learning Model.

    Science.gov (United States)

    Freudenthal, Daniel; Pine, Julian; Gobet, Fernand

    2010-06-01

    In this study, we use corpus analysis and computational modelling techniques to compare two recent accounts of the OI stage: Legate & Yang's (2007) Variational Learning Model and Freudenthal, Pine & Gobet's (2006) Model of Syntax Acquisition in Children. We first assess the extent to which each of these accounts can explain the level of OI errors across five different languages (English, Dutch, German, French and Spanish). We then differentiate between the two accounts by testing their predictions about the relation between children's OI errors and the distribution of infinitival verb forms in the input language. We conclude that, although both accounts fit the cross-linguistic patterning of OI errors reasonably well, only MOSAIC is able to explain why verbs that occur more frequently as infinitives than as finite verb forms in the input also occur more frequently as OI errors than as correct finite verb forms in the children's output.

  17. Progressive and Error-Resilient Transmission Strategies for VLC Encoded Signals over Noisy Channels

    Directory of Open Access Journals (Sweden)

    Guillemot Christine

    2006-01-01

    Full Text Available This paper addresses the issue of robust and progressive transmission of signals (e.g., images, video) encoded with variable length codes (VLCs) over error-prone channels. This paper first describes bitstream construction methods offering good properties in terms of error resilience and progressivity. In contrast with related algorithms described in the literature, all proposed methods have a linear complexity as the sequence length increases. The applicability of soft-input soft-output (SISO) and turbo decoding principles to the resulting bitstream structures is investigated. In addition to error resilience, the amenability of the bitstream construction methods to progressive decoding is considered. The problem of code design for achieving good performance in terms of error resilience and progressive decoding with these transmission strategies is then addressed. The VLC code has to be such that the symbol energy is mainly concentrated on the first bits of the symbol representation (i.e., on the first transitions of the corresponding codetree). Simulation results reveal high performance in terms of symbol error rate (SER) and mean-square reconstruction error (MSE). These error-resilience and progressivity properties are obtained without any penalty in compression efficiency. Codes with such properties are of strong interest for the binarization of M-ary sources in state-of-the-art image and video coding systems making use of, for example, the EBCOT or CABAC algorithms. A prior statistical analysis of the signal allows the construction of the appropriate binarization code.

  18. Soft systems methodology: other voices

    OpenAIRE

    Holwell, Sue

    2000-01-01

    This issue of Systemic Practice and Action Research, celebrating the work of Peter Checkland, in the particular nature and development of soft systems methodology (SSM), would not have happened unless the work was seen by others as being important. No significant contribution to thinking happens without a secondary literature developing. Not surprisingly, many commentaries have accompanied the ongoing development of SSM. Some of these are insightful, some full of errors, and some include both...

  19. Soft Tissue Sarcoma

    Science.gov (United States)

    ... muscles, tendons, fat, and blood vessels. Soft tissue sarcoma is a cancer of these soft tissues. There ... have certain genetic diseases. Doctors diagnose soft tissue sarcomas with a biopsy. Treatments include surgery to remove ...

  20. Soft biometrics in conjunction with optics based biohashing

    Science.gov (United States)

    Saini, Nirmala; Sinha, Aloka

    2011-02-01

    Biometric systems are gaining importance because of increased reliability for authentication and identification. A biometric recognition technique has been proposed earlier, in which biohashing code has been generated by using a joint transform correlator. The main drawback of the base biohashing method is the low performance of the technique when an "impostor" steals the pseudo-random numbers of the genuine and tries to authenticate as genuine. In the proposed technique, soft biometrics of the same person has been used to improve the discrimination between the genuine and the impostor populations. The soft biometrics are those characteristics that provide some information about the individual, but lack the distinctiveness and permanence to sufficiently differentiate between any two individuals. In the enrolment process, biohash code of the target face images has been integrated with the different soft biometrics of the same person. The obtained code has been stored for verification. In the verification process, biohash code of the face image to be verified is again diffused with the soft biometric of the person. The obtained code is matched with the stored code of the target. The receiving operating characteristic (ROC) curve and the equal error rate (EER) have been used to evaluate the performance of the technique. A detailed study has been carried out to find out the optimum values of the weighting factor for the diffusion process.
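
    One plausible reading of the enrolment and verification steps described above is a weighted combination of the binary biohash code with a normalized soft-biometric vector, followed by a distance-based match whose threshold is swept to trace the ROC curve and locate the EER. The sketch below illustrates only that generic idea; the fusion rule, names, and numbers are invented and may differ from the authors' actual scheme.

        import math

        def fuse_with_soft_biometric(biohash, soft_traits, weight):
            # Illustrative 'diffusion' of a binary biohash code with a normalized
            # soft-biometric vector using a single weighting factor (assumption,
            # not the authors' exact integration).
            soft = [soft_traits[i % len(soft_traits)] for i in range(len(biohash))]
            return [(1.0 - weight) * b + weight * s for b, s in zip(biohash, soft)]

        def match_score(enrolled, query):
            # Normalized Euclidean distance; thresholding this score genuine-vs-
            # impostor is what the ROC/EER evaluation would sweep over.
            d = math.sqrt(sum((e - q) ** 2 for e, q in zip(enrolled, query)))
            return d / math.sqrt(len(enrolled))

        if __name__ == "__main__":
            enrolled = fuse_with_soft_biometric([1, 0, 1, 1, 0, 0], [0.7, 0.2], weight=0.3)
            genuine  = fuse_with_soft_biometric([1, 0, 1, 0, 0, 0], [0.7, 0.2], weight=0.3)
            impostor = fuse_with_soft_biometric([0, 1, 1, 0, 1, 1], [0.1, 0.9], weight=0.3)
            print("genuine score :", round(match_score(enrolled, genuine), 3))
            print("impostor score:", round(match_score(enrolled, impostor), 3))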

  1. Medicaid/CHIP Program; Medicaid Program and Children's Health Insurance Program (CHIP); Changes to the Medicaid Eligibility Quality Control and Payment Error Rate Measurement Programs in Response to the Affordable Care Act. Final rule.

    Science.gov (United States)

    2017-07-05

    This final rule updates the Medicaid Eligibility Quality Control (MEQC) and Payment Error Rate Measurement (PERM) programs based on the changes to Medicaid and the Children's Health Insurance Program (CHIP) eligibility under the Patient Protection and Affordable Care Act. This rule also implements various other improvements to the PERM program.

  2. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    Science.gov (United States)

    2014-04-01

    Integral Role in Soft Tissue Mechanics, K. Troyer, D. Estep, and C. Puttlitz, Acta Biomaterialia 8 (2012), 234-244 • A posteriori analysis of multirate ... 2013, submitted • A posteriori error estimation for the Lax-Wendroff finite difference scheme, J. B. Collins, D. Estep, and S. Tavener, Journal of ... developed over nearly six decades of activity, and the major developments form a highly interconnected web. We do not attempt to review the history of

  3. Soft energy

    International Nuclear Information System (INIS)

    Lovins, A.B.

    1978-01-01

    A compact energy concept opposes the existing development course of energy supply. This concept does without projects for opening up oil and gas deposits in the Arctic and in offshore seas, and also without a further extension of nuclear energy. Energy consumption is to be stabilized in the long run at today's level through energy utilization that is substantially improved in technical and economic respects. Oil and gas are to be replaced by 'soft', regenerative, mainly decentralized energy sources over the course of about 30 years. Solar energy is to be used for heating and service water, while biogas as motor fuel is to be generated primarily from residues from agriculture and forestry. Wind and hydroelectric power are to be used for generating electricity. In the first part, concepts for present and future energy policy are discussed; in the second part, numerous figures are given to support the respective arguments. In the third part, the relationships between social and energy-economic developments are pointed out. (UA) [de

  4. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. The aims of this study were to examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to +0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of the errors (hand = 0.97 years, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from the two methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently of each other, using different samples.
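
    Because the errors of the two methods are uncorrelated, the combined estimate is simply the inverse-variance weighted average of the two single-method estimates, and the quoted standard deviations (0.97 and 1.35 years) do combine to roughly 0.79 years. A minimal sketch of that calculation follows; the example ages fed to it are invented for illustration.

        import math

        def combine_estimates(est_hand, sd_hand, est_teeth, sd_teeth):
            # Inverse-variance weighted average of two uncorrelated age estimates,
            # plus the standard deviation of the combined estimate.
            w_hand = 1.0 / sd_hand ** 2
            w_teeth = 1.0 / sd_teeth ** 2
            combined = (w_hand * est_hand + w_teeth * est_teeth) / (w_hand + w_teeth)
            combined_sd = math.sqrt(1.0 / (w_hand + w_teeth))
            return combined, combined_sd

        if __name__ == "__main__":
            # The ages below are invented; only the standard deviations (0.97 and
            # 1.35 years) come from the abstract. The combined SD evaluates to
            # about 0.79 years, matching the reported value.
            age, sd = combine_estimates(est_hand=17.2, sd_hand=0.97,
                                        est_teeth=16.6, sd_teeth=1.35)
            print(f"combined age estimate: {age:.1f} +/- {sd:.2f} years")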

  5. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  6. Protein synthesis rate measured with l-[1-11C]tyrosine positron emission tomography correlates with mitotic activity and MIB-1 antibody-detected proliferation in human soft tissue sarcomas

    International Nuclear Information System (INIS)

    Plaat, B.; Mastik, M.; Molenaar, W.; Kole, A.; Vaalburg, W.; Hoekstra, H.

    1999-01-01

    Protein synthesis rate (PSR) can be assessed in vivo using positron emission tomography with l-[1- 11 C]tyrosine (TYR-PET). Biological activity of soft tissue sarcomas (STS) can be measured in vitro by the mitotic rate and number of proliferating cells. In STS the grade of malignancy, in which the mitotic index plays a major role, is considered to be the major standard in predicting biological tumour behaviour. This study was designed to test the validity of TYR-PET in relation to different histopathological features. In 21 patients with untreated STS, the PSR was measured using TYR-PET. The number of mitoses was counted and tumours were graded according to the grading system of Coindre et al. (Cancer 1986; 58:306-309). Proliferative activity was assessed by immunohistological detection of the Ki-67 nuclear antigen using MIB-1 monoclonal antibody. To test the association between the PSR and these tumour parameters, a correlation analysis was performed. A significant (P<0.05) correlation was found between PSR and the Ki-67 proliferation index (R = 0.54), and between PSR and mitotic rate (R = 0.64). There was no correlation between PSR and tumour grade. The present study in malignant soft tissue tumours relates in vivo tumour metabolism as established with TYR-PET to tumour activity measured in vitro and indicates that the non-invasive method of TYR-PET can estimate the mitotic and proliferative activity in STS. (orig.)

  7. Soft Interfaces

    International Nuclear Information System (INIS)

    Strzalkowski, Ireneusz

    1997-01-01

    This book presents an extended form of the 1994 Dirac Memorial Lecture delivered by Pierre Gilles de Gennes at Cambridge University. The main task of the presentation is to show the beauty and richness of structural forms and phenomena which are observed at soft interfaces between two media. They are much more complex than forms and phenomena existing in each phase separately. Problems are discussed including both traditional, classical techniques, such as the contact angle in static and dynamic partial wetting, as well as the latest research methodology, like 'environmental' scanning electron microscopes. The book is not a systematic lecture on phenomena but it can be considered as a compact set of essays on topics which particularly fascinate the author. The continuum theory widely used in the book is based on a deep molecular approach. The author is particularly interested in a broad-minded rheology of liquid systems at interfaces with specific emphasis on polymer melts. To study this, the author has developed a special methodology called anemometry near walls. The second main topic presented in the book is the problem of adhesion. Molecular processes, energy transformations and electrostatic interaction are included in an interesting discussion of the many aspects of the principles of adhesion. The third topic concerns welding between two polymer surfaces, such as A/A and A/B interfaces. Of great worth is the presentation of various unsolved, open problems. The kind of topics and brevity of description indicate that this book is intended for a well prepared reader. However, for any reader it will present an interesting picture of how many mysterious processes are acting in the surrounding world and how these phenomena are perceived by a Nobel Laureate, who won that prize mainly for his investigations in this field. (book review)

  8. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  9. Soft-sediment mullions

    Science.gov (United States)

    Ortner, Hugo

    2015-04-01

    mullions form. In coarse conglomerates, meter-scale mullions were observed, in sandstones centimeter-scale mullions. There does not seem to exist a relationship to the rate of shortening, as the size of mullions is independent of their position in larger scale folds, or in slump complexes or tectonic folds. Anketell, J.M., Cegla, J. & Dzulynsky, S. (1970): On the deformational structures in systems with reversed density gradients. Ann. Soc. Geol. Pol., 40(1): 3-30. Alsop, G.I., Marco, S., 2014. Fold and fabric relationships in temporally and spatially evolving slump systems: A multi-cell flow model. Jour. Struct. Geol., 63(0): 27-49. Dzulynsky, S. (1966): Sedimentary structures resulting from convection-like pattern of motion. Ann. Soc. Geol. Pol., 36(1): 3-21. Dzulinsky, S. & Simpson, F. (1966): Experiments on interfacial current markings. Geol. Rom., 5: 197 - 214. Ortner, H. (2007): Styles of soft-sediment deformation on top of a growing fold system in the Gosau Group at Muttekopf, Northern Calcareous Alps, Austria: Slumping versus tectonic deformation. Sed. Geol., 196: 99-118. Urai, J.L., Spaeth, G., Van der Zee, W. & Hilger, C. (2001): Evolution of mullion (boudin) structures in the Variscan of the Ardennes and Eifel. Jour. Virt. Expl., 3: 1-16.

  10. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors, supported by tables that make the material easy to follow.

  11. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  12. Random access to mobile networks with advanced error correction

    Science.gov (United States)

    Dippold, Michael

    1990-01-01

    A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit in the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft-decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved under high traffic load for a frame whose number of slots equals half the number of stations. The effects of feedback channel errors and some countermeasures are discussed.

  13. Measurements of the growth rate of the short wavelength Rayleigh-Taylor instability of foam foil packages driven by a soft x-ray pulse

    International Nuclear Information System (INIS)

    Willi, O.; Pasley, J.; Iwase, A.; Nazarov, W.; Rose, S.J.

    2000-01-01

    The Rayleigh-Taylor instability was studied in the short wavelength regime using single mode targets that were driven by hohlraum radiation allowing the Takabe-Morse roll-over due to ablative stabilisation to be investigated. A temporally shaped soft x-ray drive was generated by focusing one of the PHEBUS laser beams into a gold hohlraum with a maximum radiation temperature of about 120 eV. Thin plastic foils with sinusoidal modulations with wavelengths between 12 and 50 μm, and a perturbation amplitude of about 10% of the wavelength, were used. A low density 50 mg/cc tri-acrylate foam 150 μm in length facing the hohlraum was attached to the modulated foam target. The targets were radiographed face-on at an x-ray energy of about 1.3 keV with a spatial resolution of about 5 μm using a Wolter-like x-ray microscope coupled to an x-ray streak camera with a temporal resolution of 50 ps. The acceleration was obtained from side-on radiography. 2-D hydrodynamic code simulations have been carried out to compare the experimental results with the simulations. (authors)

  14. Radiosensitivity of soft tissue sarcomas

    International Nuclear Information System (INIS)

    Hirano, Toru; Iwasaki, Katsuro; Suzuki, Ryohei; Monzen, Yoshio; Hombo, Zenichiro

    1989-01-01

    The correlation between the effectiveness of radiation therapy and the histology of soft tissue sarcomas was investigated. Of 31 cases with a soft tissue sarcoma of an extremity treated by conservative surgery and postoperative radiation of 3,000-6,000 cGy, local recurrence occurred in 12: 5 out of 7 synovial sarcomas, 4 of 9 MFH, one of 8 liposarcomas, none of 4 rhabdomyosarcomas, and 2 of 3 others. As for the histological subtyping, the 31 soft tissue sarcomas were divided into spindle cell, pleomorphic cell, myxoid and round cell types, and the recurrence rates were 75%, 33.3%, 16.7% and 0%, respectively. From the marked difference in recurrence rates, it was suggested that the round cell and myxoid types of soft tissue sarcoma show high radiosensitivity compared to the spindle cell type, which has low sensitivity. Clarifying the degree of radiosensitivity is helpful in deciding on the management of limb salvage in soft tissue sarcomas of an extremity. (author)

  15. The soft notion of China's 'soft power'

    OpenAIRE

    Breslin, Shaun

    2011-01-01

    · Although debates over Chinese soft power have increased in recent years, there is no shared definition of what ‘soft power’ actually means. The definition seems to change depending on what the observer wants to argue. · External analyses of soft power often include a focus on economic relations and other material (hard) sources of power and influence. · Many Chinese analyses of soft power focus on the promotion of a preferred (positive) understanding of China’s inter...

  16. MANAGEMENT SOFT-FACTORS IN INDUSTRIES

    Directory of Open Access Journals (Sweden)

    L. V. Fatkin

    2012-01-01

    Full Text Available No proper attention is given in existing management theories and concepts to systematization and analysis of non-material management factors, so-called «soft-factors». In industries, management soft-factors may be treated in a broader way. An example of a broader treatment of management soft-factors is given for the system of state regulation of foreign trade activities in industries along with specification, determination and rating of organizational and administrative management soft-factors.

  17. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can rewrite this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
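
    These relations translate directly into a few lines of arithmetic. The sketch below propagates assumed fractional errors on the backlighter signals and on ρL into a fractional opacity error using the formulas above; the numeric inputs are purely illustrative.

        import math

        def fractional_opacity_error(transmission, frac_err_b, frac_err_b0, frac_err_rho_l):
            # Fractional error on opacity k = -ln(T)/(rho*L), propagating the
            # fractional errors on the attenuated signal B, the unattenuated
            # signal B0, and the areal density rho*L (all taken as magnitudes).
            frac_err_ln_t = (frac_err_b + frac_err_b0) / abs(math.log(transmission))
            return frac_err_ln_t + frac_err_rho_l

        if __name__ == "__main__":
            # Illustrative inputs: 30% transmission, 2% error on each backlighter
            # signal, 3% error on rho*L from the Rutherford backscatter measurement.
            print(f"delta k / k ~ {fractional_opacity_error(0.30, 0.02, 0.02, 0.03):.3f}")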

  18. A framework to assess diagnosis error probabilities in the advanced MCR

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Kim, Jong Hyun [Chosun University, Gwangju (Korea, Republic of); Jang, Inseok; Park, Jinkyun [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The Institute of Nuclear Power Operations (INPO)’s operating experience database revealed that about 48% of the total events in world NPPs over 2 years (2010-2011) happened due to human errors. The purpose of human reliability analysis (HRA) methods is to evaluate the potential for, and mechanisms of, human errors that may affect plant safety. Accordingly, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. Many researchers have asserted that procedures, alarms, and displays are critical factors affecting operators’ generic activities, especially diagnosis activities. None of these HRA methods was explicitly designed to deal with digital systems. SCHEME (Soft Control Human error Evaluation MEthod) considers only the probability of soft control execution errors in the advanced MCR. The necessity of developing HRA methods for various conditions of NPPs has therefore been raised. In this research, a framework to estimate diagnosis error probabilities in the advanced MCR is suggested. The assessment framework consists of three steps. The first step is to investigate diagnosis errors and calculate their probabilities. The second step is to quantitatively estimate PSFs’ weightings in the advanced MCR. The third step is to suggest an updated TRC model to assess the nominal diagnosis error probabilities. Additionally, the proposed framework was applied using full-scope simulation. Experiments conducted in a domestic full-scope simulator and in HAMMLAB were used as the data source; in total, eighteen tasks were analyzed and twenty-three crews participated.

  19. Dimensioning of multiservice links taking account of soft blocking

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk; Stepanov, S.N.; Kostrov, A.V.

    2006-01-01

    of a multiservice link taking into account the possibility of soft blocking. An approximate algorithm for estimation of main performance measures is constructed. The error of estimation is numerically studied for different types of soft blocking. The optimal procedure of dimensioning is suggested....

  20. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  1. Necrotizing Soft Tissue Infection

    Directory of Open Access Journals (Sweden)

    Sahil Aggarwal, BS

    2018-04-01

    Full Text Available History of present illness: A 71-year-old woman with a history of metastatic ovarian cancer presented with a sudden-onset, rapidly progressing painful rash in the genital region and lower abdominal wall. She was febrile to 103°F, heart rate was 114 beats per minute, and respiratory rate was 24 per minute. Her exam was notable for a toxic-appearing female with extensive areas of erythema, tenderness, and induration to her lower abdomen, intertriginous areas, and perineum with intermittent segments of crepitus without hemorrhagic bullae or skin breakdown. Significant findings: Computed tomography (CT) of the abdomen and pelvis with intravenous (IV) contrast revealed inflammatory changes, including gas and fluid collections within the ventral abdominal wall extending to the vulva, consistent with a necrotizing soft tissue infection. Discussion: Necrotizing fasciitis is a serious infection of the skin and soft tissues that requires an early diagnosis to reduce morbidity and mortality. Classified into several subtypes based on the type of microbial infection, necrotizing fasciitis can rapidly progress to septic shock or death if left untreated.1 Diagnosing necrotizing fasciitis requires a high index of suspicion based on patient risk factors, presentation, and exam findings. Definitive treatment involves prompt surgical exploration and debridement coupled with IV antibiotics.2,3 Clinical characteristics such as swelling, disproportionate pain, erythema, crepitus, and necrotic tissue should be a guide to further diagnostic tests.4 Unfortunately, laboratory values such as white blood cell count and lactate, as well as imaging studies, have high sensitivity but low specificity, making the diagnosis of necrotizing fasciitis still largely a clinical one.4,5 CT is a reliable method to exclude the diagnosis of necrotizing soft tissue infections (sensitivity of 100%), but is only moderately reliable in correctly identifying such infections (specificity of 81%).5 Given the emergent

  2. Protein synthesis rate measured with l-[1-{sup 11}C]tyrosine positron emission tomography correlates with mitotic activity and MIB-1 antibody-detected proliferation in human soft tissue sarcomas

    Energy Technology Data Exchange (ETDEWEB)

    Plaat, B.; Mastik, M.; Molenaar, W. [Department of Pathology, University Hospital Groningen (Netherlands); Kole, A.; Vaalburg, W. [PET Centre, University Hospital Groningen (Netherlands); Hoekstra, H. [Department of Surgical Oncology, University Hospital Groningen (Netherlands)

    1999-04-29

    Protein synthesis rate (PSR) can be assessed in vivo using positron emission tomography with l-[1-{sup 11}C]tyrosine (TYR-PET). Biological activity of soft tissue sarcomas (STS) can be measured in vitro by the mitotic rate and number of proliferating cells. In STS the grade of malignancy, in which the mitotic index plays a major role, is considered to be the major standard in predicting biological tumour behaviour. This study was designed to test the validity of TYR-PET in relation to different histopathological features. In 21 patients with untreated STS, the PSR was measured using TYR-PET. The number of mitoses was counted and tumours were graded according to the grading system of Coindre et al. (Cancer 1986; 58:306-309). Proliferative activity was assessed by immunohistological detection of the Ki-67 nuclear antigen using MIB-1 monoclonal antibody. To test the association between the PSR and these tumour parameters, a correlation analysis was performed. A significant (P<0.05) correlation was found between PSR and the Ki-67 proliferation index (R = 0.54), and between PSR and mitotic rate (R = 0.64). There was no correlation between PSR and tumour grade. The present study in malignant soft tissue tumours relates in vivo tumour metabolism as established with TYR-PET to tumour activity measured in vitro and indicates that the non-invasive method of TYR-PET can estimate the mitotic and proliferative activity in STS. (orig.) With 2 figs., 2 tabs., 30 refs.

  3. Soft, Embodied, Situated & Connected

    DEFF Research Database (Denmark)

    Tomico, Oscar; Wilde, Danielle

    2015-01-01

    Soft wearables include clothing and textile-based accessories that incorporate smart textiles and soft electronic interfaces to enable responsive and interactive experiences. When designed well, they leverage the cultural, sociological and material qualities of textiles, fashion and dress; divers...

  4. Soft, embodied, situated & connected

    NARCIS (Netherlands)

    Tomico Plasencia, O.; Wilde, D.

    2015-01-01

    Soft wearables include clothing and textile-based accessories that incorporate smart textiles and soft electronic interfaces to enable responsive and interactive experiences. When designed well, they leverage the cultural, sociological and material qualities of textiles, fashion and dress; diverse

  5. Hardware Implementation of A Non-RLL Soft-decoding Beacon-based Visible Light Communication Receiver

    OpenAIRE

    Nguyen, Duc-Phuc; Le, Dinh-Dung; Tran, Thi-Hong; Huynh, Huu-Thuan; Nakashima, Yasuhiko

    2018-01-01

    Visible light communication (VLC)-based beacon systems, which usually transmit identification (ID) information in small data frames, are widely applied in indoor localization. Flicker of the LED light should be avoided in any VLC system. Current flicker mitigation solutions based on run-length limited (RLL) codes suffer from reduced code rates or are limited to hard-decoding forward error correction (FEC) decoders. Recently, soft-decoding techniques of RLL-...

  6. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  7. Prediction of embankment settlement over soft soils.

    Science.gov (United States)

    2009-06-01

    The objective of this project was to review and verify the current design procedures used by TxDOT to estimate the total consolidation settlement, and its rate, in embankments constructed on soft soils. Methods to improve the settlement predictions ...

  8. Soft Congruence Relations over Rings

    Science.gov (United States)

    Xin, Xiaolong; Li, Wenting

    2014-01-01

    Molodtsov introduced the concept of soft sets, which can be seen as a new mathematical tool for dealing with uncertainty. In this paper, we initiate the study of soft congruence relations by using the soft set theory. The notions of soft quotient rings, generalized soft ideals and generalized soft quotient rings, are introduced, and several related properties are investigated. Also, we obtain a one-to-one correspondence between soft congruence relations and idealistic soft rings and a one-to-one correspondence between soft congruence relations and soft ideals. In particular, the first, second, and third soft isomorphism theorems are established, respectively. PMID:24949493

  9. Estimates of rates and errors for measurements of direct-γ and direct-γ + jet production by polarized protons at RHIC

    International Nuclear Information System (INIS)

    Beddo, M.E.; Spinka, H.; Underwood, D.G.

    1992-01-01

    Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made

  10. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  11. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    Science.gov (United States)

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits a net coding gain of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12, respectively, for the long block length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5, and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
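
    A 2-bit soft-decision front-end effectively maps each received sample to one of four confidence levels that a downstream soft-decision LDPC decoder consumes as log-likelihood ratios (LLRs). The sketch below shows one generic way such a mapping can look; the thresholds, LLR magnitudes, and the assumed bipolar signalling (bit 0 sent as a negative amplitude) are illustrative and not taken from the paper.

        def two_bit_soft_decision(sample, threshold=0.5):
            # Quantize a zero-centred, normalized sample into four levels:
            # 0 = strong 0, 1 = weak 0, 2 = weak 1, 3 = strong 1
            # (bit 0 is assumed to be transmitted as a negative amplitude).
            if sample < -threshold:
                return 0
            if sample < 0.0:
                return 1
            if sample < threshold:
                return 2
            return 3

        # Illustrative per-level log-likelihood ratios handed to the LDPC decoder
        # (positive values favour bit 0 under the bipolar mapping assumed above).
        LEVEL_TO_LLR = {0: +4.0, 1: +1.0, 2: -1.0, 3: -4.0}

        if __name__ == "__main__":
            for s in (-0.9, -0.2, 0.3, 1.1):
                level = two_bit_soft_decision(s)
                print(f"sample {s:+.1f} -> level {level}, LLR {LEVEL_TO_LLR[level]:+.1f}")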

  12. Modelling soft error probability in firmware: A case study

    African Journals Online (AJOL)

    The purpose is to estimate the probability that external disruptive events (such as ..... also changed the 16-bit magic variable to its unique 'magic' value. .... is mutually independent, not only over registers but over spikes, such that the above.

  13. Soft matter physics

    CERN Document Server

    Doi, Masao

    2013-01-01

    Soft matter (polymers, colloids, surfactants and liquid crystals) is an important class of materials in modern technology. These materials also form the basis of many future technologies, for example in medical and environmental applications. Soft matter shows complex behaviour between fluids and solids, and used to be a synonym for complex materials. Due to the developments of the past two decades, soft condensed matter can now be discussed on the same sound physical basis as solid condensed matter. The purpose of this book is to provide an overview of soft matter for undergraduate and graduate students

  14. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 year age group (67.6%), in less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  15. The pitfalls of ultrasonography in the evaluation of soft tissue masses

    International Nuclear Information System (INIS)

    Kwok, Henry CK.; Pinto, Clinton H.; Doyle, Anthony J.

    2012-01-01

    Ultrasonography is associated with a high error rate in the evaluation of soft tissue masses. The purposes of this study were to examine the nature of the diagnostic errors and to identify areas in which reporting could be improved. Patients who had soft tissue tumours and received ultrasonography during a 10-year period (1999–2009) were identified from a local tumour registry. The sonographic and pathological diagnoses were categorised as either ‘benign’ or ‘non-benign’. The accuracy of ultrasonography was assessed by correlating the sonographic with the pathological diagnostic categories. Recommendations from radiologists, where offered, were assessed for their appropriateness in the context of the pathological diagnosis. One hundred seventy-five patients received ultrasonography, of which 60 had ‘non-benign’ lesions and 115 had ‘benign’ lesions. Ultrasonography correctly diagnosed 35 and incorrectly diagnosed seven of the 60 ‘non-benign’ cases, and did not suggest a diagnosis in 18 cases. Most of the diagnostic errors related to misdiagnosing soft tissue tumours as haematomas (four out of seven). Recommendations for further management were offered by the radiologists in 144 cases, of which 52 had ‘non-benign’ pathology. There were eight ‘non-benign’ cases where no recommendation was offered, and the sonographic diagnosis was either incorrect or unavailable. Ultrasonography lacks accuracy in the evaluation of soft tissue masses. Ongoing education is required to improve awareness of the limitations with its use. These limitations should be highlighted to the referrers, especially those who do not have specific training in this area.

  16. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  17. Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning

    Science.gov (United States)

    Meshkat, Leila; Bryant, Larry

    2014-01-01

    Command File Errors (CFE) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing the CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System composed of the hardware, software and procedures used for the generation, verification & validation and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root cause and sensitivities of the various components of the mission system and develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.

  18. Soft, embodied, situated & connected: enriching interactions with soft wearables

    NARCIS (Netherlands)

    Tomico Plasencia, O.; Wilde, D.

    2016-01-01

    Soft wearables include clothing and textile-based accessories that incorporate smart textiles and soft electronic interfaces to enable responsive and interactive experiences. When designed well, soft wearables leverage the cultural, sociological and material qualities of textiles, fashion and dress;

  19. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network topology. By pair-wise iterative interactions, the population reaches a consensus state asymptotically. In this paper, we study naming game with communication errors during pair-wise conversations, where errors are represented by error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed....
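
    The record above describes the naming-game dynamics only in outline. The sketch below is a minimal, hypothetical implementation of a baseline naming game with a per-conversation error probability, assuming a fully connected population; the NGLE model's exact update rules and network topologies are not reproduced here.

```python
# Minimal naming-game sketch with a per-conversation learning-error probability.
# Illustrative only: a complete graph is assumed and the NGLE paper's exact
# rules are not reproduced.
import random
from collections import Counter

def naming_game(n_agents=50, error_rate=0.02, n_games=20000, seed=1):
    """Run pairwise naming games; return the fraction of agents in consensus."""
    random.seed(seed)
    vocab = [set() for _ in range(n_agents)]   # each agent's word inventory
    next_word = 0
    for _ in range(n_games):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:                 # speaker invents a name if needed
            vocab[speaker].add(next_word)
            next_word += 1
        word = random.choice(tuple(vocab[speaker]))
        if random.random() < error_rate:       # learning error: hearer gets a wrong word
            word = next_word
            next_word += 1
        if word in vocab[hearer]:              # success: both collapse to this word
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:                                  # failure: hearer adds the word
            vocab[hearer].add(word)
    counts = Counter(w for v in vocab for w in v)
    top_word, _ = counts.most_common(1)[0]
    return sum(v == {top_word} for v in vocab) / n_agents

print("fraction of agents in consensus:", naming_game())
```

    With error_rate set to zero this reduces to the standard minimal naming game, which reaches full consensus; with a nonzero error rate the population typically hovers near, but not at, complete agreement.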

  20. Possibility Fuzzy Soft Set

    Directory of Open Access Journals (Sweden)

    Shawkat Alkhazaleh

    2011-01-01

    Full Text Available We introduce the concept of possibility fuzzy soft set and its operation and study some of its properties. We give applications of this theory in solving a decision-making problem. We also introduce a similarity measure of two possibility fuzzy soft sets and discuss their application in a medical diagnosis problem.

  1. Fixing soft margins

    NARCIS (Netherlands)

    P. Kofman (Paul); A. Vaal, de (Albert); C.G. de Vries (Casper)

    1993-01-01

    textabstractNon-parametric tolerance limits are employed to calculate soft margins such as advocated in Williamson's target zone proposal. In particular, the tradeoff between softness and zone width is quantified. This may be helpful in choosing appropriate margins. Furthermore, it offers

  2. learning and soft skills

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard

    2000-01-01

    Learning of soft skills is becoming more and more necessary due to the complex development of modern companies and their environments. However, there seems to be a 'gap' between intentions and reality regarding the need for soft skills and the possibilities to be educated in this subject in particular...

  3. Embodying Soft Wearables Research

    DEFF Research Database (Denmark)

    Tomico, Oscar; Wilde, Danielle

    2016-01-01

    of soft wearables. Throughout, we will experiment with how embodied design research techniques might be shared, developed, and used as direct and unmediated vehicles for their own reporting. Rather than engage in oral presentations, participants will lead each other through a proven embodied method...... and knowledge transfer in the context of soft wearables....

  4. Soft buckling actuators

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Dian; Whitesides, George M.

    2017-12-26

    A soft actuator is described, including: a rotation center having a center of mass; a plurality of bucklable, elastic structural components each comprising a wall defining an axis along its longest dimension, the wall connected to the rotation center in a way that the axis is offset from the center of mass in a predetermined direction; and a plurality of cells each disposed between two adjacent bucklable, elastic structural components and configured for connection with a fluid inflation or deflation source; wherein upon the deflation of the cell, the bucklable, elastic structural components are configured to buckle in the predetermined direction. A soft actuating device including a plurality of the soft actuators and methods of actuation using the soft actuator or soft actuating device disclosed herein are also described.

  5. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  6. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process

  7. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  8. Soft material for optical storage

    International Nuclear Information System (INIS)

    Lucchetti, L.; Simoni, F.

    2000-01-01

    The aim of transforming electronic networking into optical networking is producing a major effort in studying all optical processing and as a consequence in investigating the nonlinear optical properties of materials for this purpose. In this research area soft materials like polymers and liquid crystals are more and more attractive because they are cheap and they are more easily integrated in microcircuits hardware with respect to the well-known highly nonlinear crystals. Since optical processing spans a too wide field to be treated in one single paper, the authors will focus on one specific subject within this field and give a review of the most recent advances in studying the soft-materials properties interesting for the storage of optical information. The efforts in research of new materials and techniques for optical storage are motivated by the need to store and retrieve large amounts of data with short access time and high data rate at a competitive cost

  9. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  10. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  11. PERM Error Rate Findings and Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — Federal agencies are required to annually review programs they administer and identify those that may be susceptible to significant improper payments, to estimate...

  12. Medicare FFS Jurisdiction Error Rate Contribution Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services CMS is dedicated to continually strengthening and improving the Medicare program, which provides vital services to...

  13. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. There was some relationship to

  14. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  15. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  16. Bipolar soft connected, bipolar soft disconnected and bipolar soft compact spaces

    Directory of Open Access Journals (Sweden)

    Muhammad Shabir

    2017-06-01

    Full Text Available Bipolar soft topological spaces are mathematical expressions to estimate interpretation of data frameworks. Bipolar soft theory considers the core features of data granules. Bipolarity is important to distinguish between positive information, which is guaranteed to be possible, and negative information, which is forbidden or surely false. Connectedness and compactness are the most important fundamental topological properties. These properties highlight the main features of topological spaces and distinguish one topology from another. Taking this into account, we explore the bipolar soft connectedness, bipolar soft disconnectedness and bipolar soft compactness properties for bipolar soft topological spaces. Moreover, we introduce the notions of bipolar soft disjoint sets, bipolar soft separation, and the bipolar soft hereditary property, and study bipolar soft connected and disconnected spaces. By giving a detailed picture of bipolar soft connected and disconnected spaces, we investigate bipolar soft compact spaces and derive some results related to this concept.

  17. Hydraulic Soft Yaw System Load Reduction and Prototype Results

    DEFF Research Database (Denmark)

    Stubkier, Søren; Pedersen, Henrik C.; Markussen, Kristian

    2013-01-01

    Introducing a hydraulic soft yaw concept for wind turbines leads to significant load reductions in the wind turbine structure. The soft yaw system operates as a shock absorption system on a car, hence absorbing the loading from turbulent wind conditions instead of leading it into the stiff wind...... turbine structure. Results presented show fatigue reductions of up to 40% and ultimate load reductions of up to 19%. The ultimate load reduction increases even more when the overload protection system in the hydraulic soft yaw system is introduced, and results show how the exact extreme load cut-off...... operates. Further it is analyzed how the soft yaw system influences the power production of the turbine. It is shown that the influence is minimal, but at larger yaw errors the effect is positive. Due to the implemented functions in the hydraulic soft yaw system such as even load distribution on the pinions...

  18. Soft-Material Robotics

    OpenAIRE

    Wang, L; Nurzaman, SG; Iida, Fumiya

    2017-01-01

    There has been a boost of research activities in robotics using soft materials in the past ten years. It is expected that the use and control of soft materials can help realize robotic systems that are safer, cheaper, and more adaptable than the level that the conventional rigid-material robots can achieve. Contrary to a number of existing review and position papers on soft-material robotics, which mostly present case studies and/or discuss trends and challenges, the review focuses on the fun...

  19. Evaluating six soft approaches

    DEFF Research Database (Denmark)

    Sørensen, Lene; Vidal, Rene Victor Valqui

    2006-01-01

    ’s interactive planning principles to be supported by soft approaches in carrying out the principles in action. These six soft approaches are suitable for supporting various steps of the strategy development and planning process. These are the SWOT analysis, the Future Workshop, the Scenario methodology......, Strategic Option Development and Analysis, Strategic Choice Approach and Soft Systems Methodology. Evaluations of each methodology are carried out using a conceptual framework in which the organisation, the result, the process and the technology of the specific approach are taken into consideration. Using...

  20. Evaluating Six Soft Approaches

    DEFF Research Database (Denmark)

    Sørensen, Lene Tolstrup; Valqui Vidal, René Victor

    2008-01-01

    's interactive planning principles to be supported by soft approaches in carrying out the principles in action. These six soft approaches are suitable for supporting various steps of the strategy development and planning process. These are the SWOT analysis, the Future Workshop, the Scenario methodology......, Strategic Option Development and Analysis, Strategic Choice Approach and Soft Systems Methodology. Evaluations of each methodology are carried out using a conceptual framework in which the organisation, the result, the process and the technology of the specific approach are taken into consideration. Using...

  1. Evaluating six soft approaches

    DEFF Research Database (Denmark)

    Sørensen, Lene Tolstrup; Vidal, Rene Victor Valqui

    2008-01-01

    's interactive planning principles to be supported by soft approaches in carrying out the principles in action. These six soft approaches are suitable for supporting various steps of the strategy development and planning process. These are the SWOT analysis, the Future Workshop, the Scenario methodology......, Strategic Option Development and Analysis, Strategic Choice Approach and Soft Systems Methodology. Evaluations of each methodology are carried out using a conceptual framework in which the organisation, the result, the process and the technology of the specific approach are taken into consideration. Using...

  2. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  3. Low energy (soft) x rays

    International Nuclear Information System (INIS)

    Hoshi, Masaharu; Antoku, Shigetoshi; Russell, W.J.; Miller, R.C.; Nakamura, Nori; Mizuno, Masayoshi; Nishio, Shoji.

    1987-05-01

    Dosimetry of low-energy (soft) X rays produced by the SOFTEX Model CMBW-2 was performed using Nuclear Associates Type 30 - 330 PTW, Exradin Type A2, and Shonka-Wyckoff ionization chambers with a Keithley Model 602 electrometer. Thermoluminescent (BeO chip) dosimeters were used with a Harshaw Detector 2000-A and Picoammeter-B readout system. Beam quality measurements were made using aluminum absorbers; exposure rates were assessed by the current of the X-ray tube and by exposure times. Dose distributions were established, and the average factors for non-uniformity were calculated. The means of obtaining accurate absorbed and exposed doses using these methods are discussed. Survival of V79 cells was assessed by irradiating them with soft X rays, 200 kVp X rays, and 60Co gamma rays. The relative biological effectiveness (RBE) values of soft X rays with 0, 0.2 and 0.7 mm added thicknesses of aluminum, relative to 60Co, were 1.6. The RBE of 200 kVp X rays relative to 60Co was 1.3. Results of this study are available for reference in future RERF studies of cell survival. (author)

  4. ATLAS soft QCD results

    CERN Document Server

    Sykora, Tomas; The ATLAS collaboration

    2018-01-01

    Recent results of soft QCD measurements performed by the ATLAS collaboration are reported. The measurements include total, elastic and inelastic cross sections, inclusive spectra, underlying event and particle correlations in p-p and p-Pb collisions.

  5. Dynamics of Soft Matter

    CERN Document Server

    García Sakai, Victoria; Chen, Sow-Hsin

    2012-01-01

    Dynamics of Soft Matter: Neutron Applications provides an overview of neutron scattering techniques that measure temporal and spatial correlations simultaneously, at the microscopic and/or mesoscopic scale. These techniques offer answers to new questions arising at the interface of physics, chemistry, and biology. Knowledge of the dynamics at these levels is crucial to understanding the soft matter field, which includes colloids, polymers, membranes, biological macromolecules, foams, emulsions towards biological & biomimetic systems, and phenomena involving wetting, friction, adhesion, or micr

  6. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  7. Protocol for the PINCER trial: a cluster randomised trial comparing the effectiveness of a pharmacist-led IT-based intervention with simple feedback in reducing rates of clinically important errors in medicines management in general practices

    Directory of Open Access Journals (Sweden)

    Murray Scott A

    2009-05-01

    ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. Discussion At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken. Trial registration Current controlled trials ISRCTN21785299

  8. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    dictionary plays a key role in the speech recognition accuracy. .... Sophisticated microphone is used for the recording speech corpus in a noise free environment. .... values, word error rate (WER) and error-rate will be calculated as follows:.
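
    The record above is truncated before the formula it refers to. As general background, word error rate is normally computed from a Levenshtein alignment between the reference and the recognized word sequence, WER = (S + D + I) / N. The sketch below is a generic illustration, not the evaluation code used in the cited study.

```python
# Generic word error rate (WER) computation via Levenshtein alignment:
# WER = (substitutions + deletions + insertions) / number of reference words.
# Illustrative sketch only, not the evaluation code used in the cited study.

def word_error_rate(reference, hypothesis):
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # sub, del, ins
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # 2/6 = 0.333...
```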

  9. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  10. Teaching Soft Skills Employers Need

    Science.gov (United States)

    Ellis, Maureen; Kisling, Eric; Hackworth, Robbie G.

    2014-01-01

    This study identifies the soft skills community colleges teach in an office technology course and determines whether the skills taught are congruent with the soft skills employers require in today's entry-level office work. A qualitative content analysis of a community college office technology soft skills course was performed using 23 soft skills…

  11. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams as well as devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to higher fixed costs, especially in the shorter time frame. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  12. [Diagnostic and organizational error in head injuries].

    Science.gov (United States)

    Zaba, Czesław; Zaba, Zbigniew; Swiderski, Paweł; Lorkiewicz-Muszyíska, Dorota

    2009-01-01

    The study aimed at presenting a case of a diagnostic and organizational error involving lack of detection of foreign body presence in the soft tissues of the head. Head radiograms in two projections clearly demonstrated foreign bodies that resembled in shape flattened bullets, which could not have been missed upon evaluation of the X-rays. On the other hand, description of the radiograms entered by the attending physicians to the patient's medical record indicated an absence of traumatic injuries or foreign bodies. In the opinion of the authors, the case in question involved a diagnostic error: the doctors failed to detect the presence of foreign bodies in the head. The organizational error involved the failure of radiogram evaluation performed by a radiologist.

  13. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
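
    As background to the quantity computed in this paper: for a linear code used purely for error detection on a binary symmetric channel, the probability of an undetected error is P_ud(p) = sum_i A_i p^i (1-p)^(n-i), taken over nonzero codeword weights i with multiplicities A_i. The sketch below evaluates this expression for a toy (7,4) Hamming code; the weight distributions of the shortened codes analysed in the paper are not reproduced here.

```python
# Probability of an undetected error on a binary symmetric channel for a code
# used purely for error detection: P_ud(p) = sum_i A_i * p**i * (1 - p)**(n - i),
# where A_i counts codewords of nonzero weight i. The weight distribution below
# is a toy (7,4) Hamming code, NOT the IEEE 802.3 CRC weight distribution.

def undetected_error_probability(weight_distribution, n, p):
    """weight_distribution: dict mapping nonzero codeword weight i -> A_i."""
    return sum(a * p**i * (1 - p)**(n - i)
               for i, a in weight_distribution.items() if i > 0)

toy_weights = {3: 7, 4: 7, 7: 1}        # (7,4) Hamming code: A_3=7, A_4=7, A_7=1
for p in (1e-5, 1e-3, 0.5):
    print(p, undetected_error_probability(toy_weights, n=7, p=p))
```

    At p = 1/2 the expression reduces to (2^k - 1)/2^n, the familiar worst-case value for a code with 2^k codewords, which is why the cited study sweeps the bit-error rate all the way up to 1/2.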

  14. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  15. Frequency and Severity of Parenteral Nutrition Medication Errors at a Large Children's Hospital After Implementation of Electronic Ordering and Compounding.

    Science.gov (United States)

    MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery

    2016-04-01

    The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential of causing harm. Three organizations--American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), American Society of Health-System Pharmacists, and National Advisory Group--have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance to the guidelines and the risk of errors. The purpose of this article is to compare total compliance with ordering, transcription, compounding, administration, and error rate with a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft and hard stop recommendations and simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature comparing error, harm rates, and cost reductions to determine if our process showed lower error rates compared with national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors/84,503 PN prescriptions, or 0.27% compared with national data that determined that 74 of 4730 (1.6%) of prescriptions over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process

  16. Optimization of intelligent infusion pump technology to minimize vasopressor pump programming errors.

    Science.gov (United States)

    Vadiei, Nina; Shuman, Carrie A; Murthy, Manasa S; Daley, Mitchell J

    2017-08-01

    There is a lack of data evaluating the impact of hard limit implementation into intelligent infusion pump technology (IIPT). The purpose of this study was to determine if incorporation of vasopressor upper hard limits (UHL) into IIPT increases efficacy of alerts by preventing pump programming errors. Retrospective review from five hospitals within a single healthcare network between April 1, 2013 and May 31, 2014. A total of 65,680 vasopressor data entries were evaluated; 19,377 prior to hard limit implementation and 46,303 after hard limit implementation. The primary outcome was the percent of effective alerts. The secondary outcome was the proportional dose increase from the soft limit provided. A reduction in alert rate occurred after incorporation of hard limits to the IIPT drug library (pre-UHL 4.7% vs. post-UHL 4.0%) with a subsequent increase in the number of errors prevented as represented by a higher effective alert rate (pre-UHL 23.0% vs. post-UHL 37.3%; p < 0.001). The proportional dose increase was significantly reduced (pre-UHL 188% ± 380% vs. post-UHL 95% ± 128%; p < 0.001). Incorporation of UHLs into IIPT in a multi-site health system with varying intensive care unit and emergency department acuity increases alert effectiveness, reduces dosing errors, and reduces the magnitude of dosing errors that reach the patient.

  17. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  18. Hard and Soft Governance

    DEFF Research Database (Denmark)

    Moos, Lejf

    2009-01-01

    of Denmark, and finally the third layer: the leadership used in Danish schools. The use of 'soft governance' is shifting the focus of governance and leadership from decisions towards influence and power and thus shifting the focus of the processes from the decision-making itself towards more focus......The governance and leadership at transnational, national and school level seem to be converging into a number of isomorphic forms as we see a tendency towards substituting 'hard' forms of governance, that are legally binding, with 'soft' forms based on persuasion and advice. This article analyses...... and discusses governance forms at several levels. The first layer is the global: the methods of 'soft governance' that are being utilised by transnational agencies. The second layer is the national and local: the shift in national and local governance seen in many countries, but here demonstrated in the case...

  19. Large poroelastic deformation of a soft material

    Science.gov (United States)

    MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.

    2014-11-01

    Flow through a porous material will drive mechanical deformation when the fluid pressure becomes comparable to the stiffness of the solid skeleton. This has applications ranging from hydraulic fracture for recovery of shale gas, where fluid is injected at high pressure, to the mechanics of biological cells and tissues, where the solid skeleton is very soft. The traditional linear theory of poroelasticity captures this fluid-solid coupling by combining Darcy's law with linear elasticity. However, linear elasticity is only volume-conservative to first order in the strain, which can become problematic when damage, plasticity, or extreme softness lead to large deformations. Here, we compare the predictions of linear poroelasticity with those of a large-deformation framework in the context of two model problems. We show that errors in volume conservation are compounded and amplified by coupling with the fluid flow, and can become important even when the deformation is small. We also illustrate these results with a laboratory experiment.
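
    For reference, the linear theory that the comparison starts from couples Darcy flow to linear elasticity. A standard small-strain (Biot) form, written here in common notation rather than the authors' own, is:

```latex
% Standard small-strain (Biot) poroelasticity, in common notation.
% G, \lambda: drained elastic moduli; \alpha: Biot coefficient; k: permeability;
% \mu: fluid viscosity; M: Biot modulus; \zeta: increment of fluid content.
\begin{align}
  \sigma_{ij} &= 2G\,\varepsilon_{ij} + \lambda\,\varepsilon_{kk}\,\delta_{ij}
                 - \alpha\,p\,\delta_{ij}, \\
  \mathbf{q}  &= -\frac{k}{\mu}\,\nabla p, \\
  \frac{\partial \zeta}{\partial t} + \nabla\!\cdot\!\mathbf{q} &= 0,
  \qquad \zeta = \alpha\,\varepsilon_{kk} + \frac{p}{M}.
\end{align}
```

    The linearized volumetric strain in these relations conserves volume only to first order, which is the error the study shows can be compounded and amplified by the coupling to the fluid flow.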

  20. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    Systematic error growth rate peak is observed at wavenumber 2 up to 4-day forecast then .... the influence of summer systematic error and ran- ... total exchange. When the error energy budgets are examined in spectral domain, one may ask questions on the error growth at a certain wavenumber from its interaction with ...

  1. Soft and hard pomerons

    International Nuclear Information System (INIS)

    Maor, Uri; Tel Aviv Univ.

    1995-09-01

    The role of s-channel unitarity screening corrections, calculated in the eikonal approximation, is investigated for soft Pomeron exchange responsible for elastic and diffractive hadron scattering in the high energy limit. We examine the differences between our results and those obtained from the supercritical Pomeron-Regge model with no such corrections. It is shown that screening saturation is attained at different scales for different channels. We then proceed to discuss the new HERA data on hard (PQCD) Pomeron diffractive channels and discuss the relationship between the soft and hard Pomerons and the relevance of our analysis to this problem. (author). 18 refs, 9 figs, 1 tab

  2. Mechanics of soft materials

    CERN Document Server

    Volokh, Konstantin

    2016-01-01

    This book provides a concise introduction to soft matter modelling. It offers an up-to-date review of continuum mechanical description of soft and biological materials from the basics to the latest scientific materials. It includes multi-physics descriptions, such as chemo-, thermo-, electro- mechanical coupling. It derives from a graduate course at Technion that has been established in recent years. It presents original explanations for some standard materials and features elaborated examples on all topics throughout the text. PowerPoint lecture notes can be provided to instructors. .

  3. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion on their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
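
    The link error monitoring discussed here is essentially a leaky-bucket counter over received signal units. The sketch below illustrates that mechanism; the threshold of 64 and the leak interval of 256 signal units are the commonly quoted Q.703 SUERM defaults and are assumptions for illustration, not values taken from the paper's analysis.

```python
# Leaky-bucket style signal unit error rate monitor, in the spirit of the SS7
# SUERM discussed above. The threshold (64) and leak interval (256 signal
# units) are the commonly quoted Q.703 defaults, assumed here for illustration;
# they are not taken from the cited analysis.
import random

def suerm(error_stream, threshold=64, leak_interval=256):
    """Return the signal unit count at which a changeover triggers, or None."""
    counter = 0
    received = 0
    for errored in error_stream:              # one bool per received signal unit
        received += 1
        if errored:
            counter += 1                       # count up on each errored SU
        if received % leak_interval == 0 and counter > 0:
            counter -= 1                       # leak: decrement once every 256 SUs
        if counter >= threshold:
            return received                    # link taken out of service here
    return None

random.seed(0)
p = 0.01                                       # per-signal-unit error probability
stream = (random.random() < p for _ in range(2_000_000))
print("changeover after signal unit:", suerm(stream))
```

    With a per-signal-unit error probability well below the leak rate of 1/256 the counter rarely reaches the threshold, while just above it a changeover triggers quickly; this knife-edge behaviour is one way the oscillations described above can arise.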

  4. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for the calculation of errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown, which describe absolute errors of growth characteristics: Growth rate (GR), Relative growth rate (RGR), Unit leaf rate (ULR) and Leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimation of the obtained results has been carried out. The purposefulness of joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
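
    The growth characteristics named in the abstract have standard textbook definitions; a minimal sketch computing them for two harvests is shown below. The error-propagation formulae that are the paper's actual contribution are not reproduced here.

```python
# Standard textbook growth-analysis quantities for two harvests (W = dry mass,
# A = leaf area, t = time). The error-propagation formulae derived in the cited
# paper are not reproduced here.
import math

def growth_characteristics(W1, W2, A1, A2, t1, t2):
    dt = t2 - t1
    GR = (W2 - W1) / dt                                   # absolute growth rate
    RGR = (math.log(W2) - math.log(W1)) / dt              # relative growth rate
    ULR = GR * (math.log(A2) - math.log(A1)) / (A2 - A1)  # unit leaf rate (NAR)
    LAR = ((A1 / W1) + (A2 / W2)) / 2                     # mean leaf area ratio (one common approximation)
    return {"GR": GR, "RGR": RGR, "ULR": ULR, "LAR": LAR}

print(growth_characteristics(W1=1.2, W2=3.5, A1=0.04, A2=0.11, t1=0, t2=14))
```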

  5. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
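
    The contrast between the two learning rules can be made concrete with a toy simulation: under total error reduction each cue is updated with the prediction error of the whole compound (as in the Rescorla-Wagner model), whereas under local error reduction each cue is updated with its own prediction error. The sketch below is schematic; the parameter values are arbitrary and are not taken from the paper's simulations.

```python
# Toy contrast between total error reduction (TER, Rescorla-Wagner style) and
# local error reduction (LER) for a two-cue compound trained with outcome lam.
# Schematic only; parameters are arbitrary and not taken from the cited paper.

def train(trials, n_cues=2, alpha=0.3, lam=1.0, rule="TER"):
    V = [0.0] * n_cues
    for _ in range(trials):
        present = range(n_cues)                # all cues present on every trial
        if rule == "TER":                      # one error term shared by the compound
            error = lam - sum(V[i] for i in present)
            for i in present:
                V[i] += alpha * error
        else:                                  # LER: each cue keeps its own error term
            for i in present:
                V[i] += alpha * (lam - V[i])
    return V

print("TER:", train(50, rule="TER"))   # compound splits lam: each cue -> ~0.5
print("LER:", train(50, rule="LER"))   # each cue separately -> ~1.0
```

    After training on a two-cue compound, the TER rule divides the asymptotic associative strength between the cues, while the LER rule drives each cue separately towards the outcome value, which is the behavioural contrast the model comparison turns on.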

  6. Mappings on Neutrosophic Soft Classes

    Directory of Open Access Journals (Sweden)

    Shawkat Alkhazaleh

    2014-03-01

    Full Text Available In 1995 Smarandache introduced the concept of the neutrosophic set, which is a mathematical tool for handling problems involving imprecise, indeterminate and inconsistent data. In 2013 Maji introduced the concept of neutrosophic soft set theory as a general mathematical tool for dealing with uncertainty. In this paper we define the notion of a mapping on classes where the neutrosophic soft classes are collections of neutrosophic soft sets. We also define and study the properties of neutrosophic soft images and neutrosophic soft inverse images of neutrosophic soft sets.

  7. Bangladesh looks for a soft loan

    International Nuclear Information System (INIS)

    Hossain, A.

    1986-01-01

    The problems faced by developing countries in embarking on a nuclear power programme are considered. It is argued that an international funding agency should be set up by the IAEA and the World Bank to provide developing countries with help in the form of a loan at soft interest rates and longer repayment periods. (U.K.)

  8. Soft-decision decoding of RS codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of RS codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when two or more...

  9. Novel experimentally observed phenomena in soft matter

    Indian Academy of Sciences (India)

    The resulting flow is non-Newtonian and is characterized by features such as shear rate-dependent viscosities and nonzero normal stresses. This article begins with an introduction to some unusual flow properties displayed by soft matter. Experiments that report a spectrum of novel phenomena exhibited by these materials, ...

  10. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  11. Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software

    Science.gov (United States)

    Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg

    2017-09-01

    100 Gbit/s wireless communication protocol processing stresses every part of a communication system to its limits. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires the rethinking of the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We will present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra high data rates of 100 Gbit/s and beyond. Furthermore, we will present an ultra-low power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.
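
    Forward error correction in miniature: a Hamming(7,4) block code corrects any single bit error per 7-bit block, which conveys the idea of detecting and correcting errors on the fly. The parallelized, line-rate FEC developed in End2End100 is a different and far more capable scheme; the sketch below is purely illustrative.

```python
# Hamming(7,4) encoder/decoder: corrects any single bit error per 7-bit block.
# Purely illustrative; this is NOT the FEC scheme developed in End2End100.

def hamming74_encode(d):
    """d: four data bits -> 7-bit codeword [d1 d2 d3 d4 p1 p2 p3]."""
    d1, d2, d3, d4 = d
    return [d1, d2, d3, d4, d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4]

def hamming74_decode(r):
    """r: 7-bit received word -> corrected four data bits (single-error correcting)."""
    d1, d2, d3, d4, p1, p2, p3 = r
    syndrome = (p1 ^ d1 ^ d2 ^ d4, p2 ^ d1 ^ d3 ^ d4, p3 ^ d2 ^ d3 ^ d4)
    # map syndrome -> index of the flipped bit (no entry means no error detected)
    flip = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3,
            (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}.get(syndrome)
    r = list(r)
    if flip is not None:
        r[flip] ^= 1
    return r[:4]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[2] ^= 1                               # inject a single bit error
assert hamming74_decode(codeword) == [1, 0, 1, 1]
print("corrected data bits:", hamming74_decode(codeword))
```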

  12. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  13. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors have a major contribution to the risks for industrial accidents. Accidents have provided important lesson making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is however increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research in human error and it concludes with suggestions for further work. (orig.)

  14. Soft actuators and soft actuating devices

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Dian; Whitesides, George M.

    2017-10-17

    A soft buckling linear actuator is described, including: a plurality of substantially parallel bucklable, elastic structural components each having its longest dimension along a first axis; and a plurality of secondary structural components each disposed between and bridging two adjacent bucklable, elastic structural components; wherein every two adjacent bucklable, elastic structural components and the secondary structural components in-between define a layer comprising a plurality of cells each capable of being connected with a fluid inflation or deflation source; the secondary structural components from two adjacent layers are not aligned along a second axis perpendicular to the first axis; and the secondary structural components are configured not to buckle, the bucklable, elastic structural components are configured to buckle along the second axis to generate a linear force, upon the inflation or deflation of the cells. Methods of actuation using the same are also described.

  15. Soft Tissue Extramedullary Plasmacytoma

    Directory of Open Access Journals (Sweden)

    Fernando Ruiz Santiago

    2010-01-01

    Full Text Available We present the uncommon case of a subcutaneous fascia-based extramedullary plasmacytoma in the leg, which was confirmed by the pathology report and followed up until its remission. We report the differential diagnosis with other more common soft tissue masses. Imaging findings are nonspecific but are important to determine the tumour extension and to plan the biopsy.

  16. On Soft Biometrics

    DEFF Research Database (Denmark)

    Nixon, Mark; Correia, Paulo; Nasrollahi, Kamal

    2015-01-01

    Innovation has formed much of the rich history in biometrics. The field of soft biometrics was originally aimed to augment the recognition process by fusion of metrics that were sufficient to discriminate populations rather than individuals. This was later refined to use measures that could be us...

  17. Soft Matter Characterization

    CERN Document Server

    Borsali, Redouane

    2008-01-01

    Progress in basic soft matter research is driven largely by the experimental techniques available. Much of the work is concerned with understanding them at the microscopic level, especially at the nanometer length scales that give soft matter studies a wide overlap with nanotechnology. This 2 volume reference work, split into 4 parts, presents detailed discussions of many of the major techniques commonly used as well as some of those in current development for studying and manipulating soft matter. The articles are intended to be accessible to the interdisciplinary audience (at the graduate student level and above) that is or will be engaged in soft matter studies or those in other disciplines who wish to view some of the research methods in this fascinating field. Part 1 contains articles with a largely (but, in most cases, not exclusively) theoretical content and/or that cover material relevant to more than one of the techniques covered in subsequent volumes. It includes an introductory chapter on some of t...

  18. Soft x-ray source by laser produced Xe plasma

    International Nuclear Information System (INIS)

    Amano, Sho; Masuda, Kazuya; Miyamoto, Shuji; Mochizuki, Takayasu

    2010-01-01

    The laser plasma soft X-ray source in the wavelength range of 5-17 nm was developed, which consisted of a rotating drum system supplying a cryogenic Xe target and a high repetition rate pulsed Nd:YAG slab laser. We found a maximum conversion efficiency of 30% and demonstrated soft X-ray generation at a high repetition rate of 320 pps and a high average power of 20 W. A soft X-ray cylindrical mirror was developed and successfully focused the soft X-rays with an energy intensity of 1.3 mJ/cm2. We also succeeded in plasma debris mitigation with Ar gas. This will allow a long lifetime of the mirror and a focusing power intensity of 400 mW/cm2 at 320 pps. The high power soft X-ray source is useful for various applications. (author)

  19. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  20. Critical issues in soft rocks

    OpenAIRE

    Milton Assis Kanji

    2014-01-01

    This paper discusses several efforts made to study and investigate soft rocks, as well as their physico-mechanical characteristics recognized up to now, the problems in their sampling and testing, and the possibility of their reproduction through artificially made soft rocks. The problems in applying current and widespread classification systems to some types of weak rocks are also discussed, as well as other problems related to them. Some examples of engineering works in soft rock or in soft ...

  1. Soft skills and dental education

    OpenAIRE

    Gonzalez, M. A. G.; Abu Kasim, N. H.; Naimie, Z.

    2014-01-01

    Soft skills and hard skills are essential in the practice of dentistry. While hard skills deal with technical proficiency, soft skills relate to personal values and interpersonal skills that determine a person's ability to fit in a particular situation. These skills contribute to the success of organisations that deal face-to-face with clients. Effective soft skills benefit the dental practice. However, the teaching of soft skills remains a challenge to dental schools. This paper discusses ...

  2. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged under 18 years. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  3. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  4. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  5. On Neutrosophic Soft Topological Space

    Directory of Open Access Journals (Sweden)

    Tuhin Bera

    2018-03-01

    Full Text Available In this paper, the concepts of connectedness and compactness on a neutrosophic soft topological space have been introduced, along with an investigation of several of their characteristics. Some related theorems have also been established. Then, the notion of a neutrosophic soft continuous mapping on a neutrosophic soft topological space and its properties are developed here.

  6. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.
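
    As a loose illustration of this kind of joint optimization, the toy model below picks a group-of-pictures (GOP) length by trading compression distortion against error-propagation distortion for a given channel bit error rate. The distortion constants and the functional form are invented for this sketch and are not the data-driven behavior models described in the paper.

      def expected_distortion(gop_length, bit_error_rate,
                              d_intra=4.0, d_inter=1.0, d_prop=0.5):
          """Toy model: per-frame compression distortion falls as more frames
          in the GOP are inter-coded, while a channel error propagates until
          the next intra frame, so error distortion grows with GOP length."""
          compression = (d_intra + (gop_length - 1) * d_inter) / gop_length
          propagation = bit_error_rate * d_prop * gop_length
          return compression + propagation

      def best_gop_length(bit_error_rate, candidates=range(1, 61)):
          return min(candidates, key=lambda n: expected_distortion(n, bit_error_rate))

      # A noisier channel favours shorter GOPs (more frequent intra refresh).
      print(best_gop_length(1e-4), best_gop_length(1e-2))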

  7. Soft shoulders ahead: spurious signatures of soft and partial selective sweeps result from linked hard sweeps.

    Science.gov (United States)

    Schrider, Daniel R; Mendes, Fábio K; Hahn, Matthew W; Kern, Andrew D

    2015-05-01

    Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. This

  8. Multi-bits error detection and fast recovery in RISC cores

    International Nuclear Information System (INIS)

    Wang Jing; Yang Xing; Zhang Weigong; Shen Jiao; Qiu Keni; Zhao Yuanfu

    2015-01-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are increasing due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot balance critical-path delay, area, and power penalties well. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by enhancing the self-checking logic in pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that the SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% enter a specific trap. (paper)
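
    The redundancy schemes named above (DMR and TMR) can be sketched in a few lines of code. The Python fragment below only illustrates the generic textbook mechanisms, not the SRDP pipeline itself: DMR duplicates a value and flags a mismatch, while TMR takes a bitwise majority vote so that an upset confined to a single copy is masked.

      def dmr_detect(copy_a: int, copy_b: int) -> bool:
          """Dual modular redundancy: a mismatch reveals a soft error but
          cannot say which copy is corrupted, so a rollback/retry is needed."""
          return copy_a != copy_b

      def tmr_vote(a: int, b: int, c: int) -> int:
          """Triple modular redundancy: a bitwise majority vote masks any
          upset that affects only one of the three copies."""
          return (a & b) | (a & c) | (b & c)

      if __name__ == "__main__":
          golden = 0b1011_0010
          upset = golden ^ 0b0000_0100                       # single-bit flip in one copy
          assert dmr_detect(golden, upset)                    # DMR only detects the error
          assert tmr_vote(golden, upset, golden) == golden    # TMR also corrects it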

  9. Multi-bits error detection and fast recovery in RISC cores

    Science.gov (United States)

    Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu

    2015-11-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are increasing due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot balance critical-path delay, area, and power penalties well. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by enhancing the self-checking logic in pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that the SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% enter a specific trap.

  10. Clinical management of soft tissue sarcomas

    International Nuclear Information System (INIS)

    Pinedo, H.M.; Verweij, J.

    1986-01-01

    This book is concerned with the clinical management of soft tissue sarcomas. Topics covered include: Radiotherapy; Pathology of soft tissue sarcomas; Surgical treatment of soft tissue sarcomas; and Chemotherapy in advanced soft tissue sarcomas

  11. Mapping on complex neutrosophic soft expert sets

    Science.gov (United States)

    Al-Quran, Ashraf; Hassan, Nasruddin

    2018-04-01

    We introduce the mapping on complex neutrosophic soft expert sets. Further, we investigate the basic operations and other related properties of the complex neutrosophic soft expert image and the complex neutrosophic soft expert inverse image of complex neutrosophic soft expert sets.

  12. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  13. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  14. Teamwork and Clinical Error Reporting among Nurses in Korean Hospitals

    Directory of Open Access Journals (Sweden)

    Jee-In Hwang, PhD

    2015-03-01

    Conclusions: Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety.

  15. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  16. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  17. Soft options. Sanfte Alternativen

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, R

    1981-01-01

    This collection of contributions made by supporters of the ''soft approach'' is intended to provide an insight into a conceivable future which is quite different from traditional ideas on social and economic developments based on the usual economic thinking and conventional energy sources. The chapter entitled ''The new world view'' shows the way from a machine-like paradigm to a living example in science. In the chapter entitled ''Women are organizing their future'' female perspectives and concepts of solutions are described. In the chapter ''Eco-tecture'' examples of living architecture and of environment formation are presented. In the chapter ''Soft technology'' approaches to an ecology-oriented technology are discussed, and in the chapter ''Network and future workshops'' novel forms of organization and communication are described.

  18. Biological Soft Robotics.

    Science.gov (United States)

    Feinberg, Adam W

    2015-01-01

    In nature, nanometer-scale molecular motors are used to generate force within cells for diverse processes from transcription and transport to muscle contraction. This adaptability and scalability across wide temporal, spatial, and force regimes have spurred the development of biological soft robotic systems that seek to mimic and extend these capabilities. This review describes how molecular motors are hierarchically organized into larger-scale structures in order to provide a basic understanding of how these systems work in nature and the complexity and functionality we hope to replicate in biological soft robotics. These span the subcellular scale to macroscale, and this article focuses on the integration of biological components with synthetic materials, coupled with bioinspired robotic design. Key examples include nanoscale molecular motor-powered actuators, microscale bacteria-controlled devices, and macroscale muscle-powered robots that grasp, walk, and swim. Finally, the current challenges and future opportunities in the field are addressed.

  19. Fracture in Soft Materials

    DEFF Research Database (Denmark)

    Hassager, Ole

    Fracture is a phenomenon that is generally associated with solids. A key element in fracture theory is the so-called weakest link idea that fracture initiates from the largest pre-existing material imperfection. However, recent work has demonstrated that fracture can also happen in liquids, where surface tension will act to suppress such imperfections. Therefore, the weakest link idea does not seem immediately applicable to fracture in liquids. This presentation will review fracture in liquids and argue that fracture in soft liquids is a material property independent of pre-existing imperfections. The following questions then emerge: What is the material description needed to predict crack initiation, crack speed and crack shape in soft materials and liquids?

  20. CHARACTERIZATIONS OF FUZZY SOFT PRE SEPARATION AXIOMS

    OpenAIRE

    El-Latif, Alaa Mohamed Abd

    2015-01-01

    The notions of fuzzy pre open soft sets and fuzzy pre closed soft sets were introduced by Abd El-latif et al. [2]. In this paper, we continue the study of fuzzy soft topological spaces and investigate the properties of fuzzy pre open soft sets and fuzzy pre closed soft sets, and study various properties and notions related to these structures. In particular, we study the relationship between the fuzzy pre soft interior and the fuzzy pre soft closure. Moreover, we study the properties of fuzzy soft pre regulars...

  1. Holiday fun with soft gluons

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Emissions of soft gluons from energetic particles play an important role in collider processes. While the basic physics of soft emissions is simple, it gives rise to a variety of interesting and intricate phenomena (non-global logs, Glauber phases, super-leading logs, factorization breaking). After an introduction, I will review progress in resummation methods such as Soft-Collinear Effective Theory driven by a better understanding of soft emissions. I will also show some new results for computations of soft-gluon effects in gap-between-jets and isolation-cone cross sections.

  2. Reptile Soft Tissue Surgery.

    Science.gov (United States)

    Di Girolamo, Nicola; Mans, Christoph

    2016-01-01

    The surgical approach to reptiles can be challenging. Reptiles have unique physiologic, anatomic, and pathologic differences. This may result in frustrating surgical experiences. However, recent investigations provided novel, less invasive, surgical techniques. The purpose of this review was to describe the technical aspects behind soft tissue surgical techniques that have been used in reptiles, so as to provide a general guideline for veterinarians working with reptiles. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...
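
    A minimal sketch of the highlighting idea described above is shown below; the term list, marker style, and function name are illustrative assumptions rather than the system the authors built.

      import re

      # Hypothetical laterality-term highlighter: wraps left/right/bilateral
      # so these terms stand out when a report is proofread before signing.
      LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

      def highlight_laterality(report_text: str, marker: str = "**") -> str:
          return LATERALITY.sub(lambda m: f"{marker}{m.group(0).upper()}{marker}",
                                report_text)

      print(highlight_laterality("Mass in the left lower lobe; right kidney unremarkable."))
      # -> Mass in the **LEFT** lower lobe; **RIGHT** kidney unremarkable.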

  4. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  5. Alveolar Soft Part Sarcoma.

    Science.gov (United States)

    Jaber, Omar I; Kirby, Patricia A

    2015-11-01

    Alveolar soft part sarcoma is a rare neoplasm usually arising in the soft tissues of the lower limbs in adults and in the head and neck region in children. It presents primarily as a slowly growing mass or as metastatic disease. It is characterized by a specific chromosomal alteration, der(17)t(X:17)(p11:q25), resulting in fusion of the transcription factor E3 (TFE3) with alveolar soft part sarcoma critical region 1 (ASPSCR1) at 17q25. This translocation is diagnostically useful because the tumor nuclei are positive for TFE3 by immunohistochemistry. Real-time polymerase chain reaction to detect the ASPSCR1-TFE3 fusion transcript on paraffin-embedded tissue blocks has been shown to be more sensitive and specific than detection of TFE3 by immunohistochemical stain. Cathepsin K is a relatively recent immunohistochemical stain that can aid in the diagnosis. The recent discovery of the role of the ASPSCR1-TFE3 fusion protein in the MET proto-oncogene signaling pathway promoting angiogenesis and cell proliferation offers a promising targeted molecular therapy.

  6. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    International Nuclear Information System (INIS)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Won Dea

    2014-01-01

    The operation environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces that are based on computer-based technologies. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are particularly important because operation actions in NPP Advanced MCRs are performed through them. Using soft controls such as mouse control and touch screens, operators select a specific screen, then choose the controller, and finally manipulate the given devices. Because the interfaces of soft controls differ from those of hardwired conventional controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework should be considered for advanced MCRs. In other words, new human error modes should be considered for interface management tasks such as navigation and icon (device) selection on monitors, and a new HRA framework that takes these newly generated human error modes into account is needed. In this paper, a conceptual framework for an HRA method for the evaluation of soft control execution human error in advanced MCRs is suggested by analyzing soft control tasks

  7. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    Energy Technology Data Exchange (ETDEWEB)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-08-15

    The operation environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces that are based on computer-based technologies. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are particularly important because operation actions in NPP Advanced MCRs are performed through them. Using soft controls such as mouse control and touch screens, operators select a specific screen, then choose the controller, and finally manipulate the given devices. Because the interfaces of soft controls differ from those of hardwired conventional controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework should be considered for advanced MCRs. In other words, new human error modes should be considered for interface management tasks such as navigation and icon (device) selection on monitors, and a new HRA framework that takes these newly generated human error modes into account is needed. In this paper, a conceptual framework for an HRA method for the evaluation of soft control execution human error in advanced MCRs is suggested by analyzing soft control tasks.

  8. Corrosion of aluminium in soft drinks.

    Science.gov (United States)

    Seruga, M; Hasenay, D

    1996-04-01

    The corrosion of aluminium (Al) in several brands of soft drinks (cola- and citrate-based drinks) has been studied, using an electrochemical method, namely potentiodynamic polarization. The results show that the corrosion of Al in soft drinks is a very slow, time-dependent and complex process, strongly influenced by the passivation, complexation and adsorption processes. The corrosion of Al in these drinks occurs principally due to the presence of acids: citric acid in citrate-based drinks and orthophosphoric acid in cola-based drinks. The corrosion rate of Al rose with an increase in the acidity of soft drinks, i.e. with increase of the content of total acids. The corrosion rates are much higher in the cola-based drinks than those in citrate-based drinks, due to the facts that: (1) orthophosphoric acid is more corrosive to Al than is citric acid, (2) a quite different passive oxide layer (with different properties) is formed on Al, depending on whether the drink is cola or citrate based. The method of potentiodynamic polarization was shown as being very suitable for the study of corrosion of Al in soft drinks, especially if it is combined with some non-electrochemical method, e.g. graphite furnace atomic absorption spectrometry (GFAAS).

  9. Triggering soft bombs at the LHC

    Science.gov (United States)

    Knapen, Simon; Griso, Simone Pagan; Papucci, Michele; Robinson, Dean J.

    2017-08-01

    Very high multiplicity, spherically-symmetric distributions of soft particles, with pT ~ a few × 100 MeV, may be a signature of strongly-coupled hidden valleys that exhibit long, efficient showering windows. With traditional triggers, such `soft bomb' events closely resemble pile-up and are therefore only recorded with minimum bias triggers at a very low efficiency. We demonstrate a proof-of-concept for a high-level triggering strategy that efficiently separates soft bombs from pile-up by searching for a `belt of fire': a high density band of hits on the innermost layer of the tracker. Seeding our proposed high-level trigger with existing jet, missing transverse energy or lepton hardware-level triggers, we show that net trigger efficiencies of order 10% are possible for bombs of mass several × 100 GeV. We also consider the special case that soft bombs are the result of an exotic decay of the 125 GeV Higgs. The fiducial rate for `Higgs bombs' triggered in this manner is marginally higher than the rate achievable by triggering directly on a hard muon from associated Higgs production.
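
    A rough sketch of such a selection is given below: hits on the innermost pixel layer are binned in azimuth, and the event is kept when nearly every bin is densely populated, as expected for a spherically symmetric spray of soft particles. The bin count, thresholds, and hit format are hypothetical placeholders, not the working points studied in the paper.

      import math

      def belt_of_fire_trigger(hit_phis, n_bins=64,
                               min_hits_per_bin=5, min_occupied_fraction=0.9):
          """Toy 'belt of fire' selection: hit_phis lists the azimuthal angles
          (radians) of hits on the innermost tracker layer; the event fires
          the trigger only if a dense band of hits covers almost all of phi."""
          counts = [0] * n_bins
          for phi in hit_phis:
              counts[int((phi % (2 * math.pi)) / (2 * math.pi) * n_bins) % n_bins] += 1
          occupied = sum(1 for c in counts if c >= min_hits_per_bin)
          return occupied / n_bins >= min_occupied_fraction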

  10. SOFT AND SOFTER HANDOVER PERFORMANCE OF CDMA

    Directory of Open Access Journals (Sweden)

    Lina wati

    2010-12-01

    Full Text Available One of the telecommunication providers in Indonesia applies CDMA2000 1x technology. The technology has many advantages, such as the larger channel capacity of a BTS (Base Transceiver Station). On the other hand, the capacity depends on user density. Therefore, to guarantee voice connection while a user or mobile station (MS) is moving from one cell to another, a handover technique is needed. However, the technique can fail for many reasons, so the impact of call attempts on softer and soft handover performance is investigated. The paper examines soft and softer handover performance of CDMA at the BTS (Base Transceiver Station), BSC, and sector levels in both suburban and rural areas in Denpasar, Bali, with area code 0361. The research was done in rural and suburban areas with call area code '0361'. The analyses included regression and simple linear correlation. The results showed that the number of call attempts was the dominant factor in soft and softer handover failure. In general, the average success rate of both handover types in both rural and suburban areas was about 99%, above the KPI (Key Performance Indicator) reference of 98.50%. In rural areas, however, other factors such as blocked call attempts and erroneous called numbers caused softer handover failures.

  11. Soft Ultrathin Electronics Innervated Adaptive Fully Soft Robots.

    Science.gov (United States)

    Wang, Chengjun; Sim, Kyoseung; Chen, Jin; Kim, Hojin; Rao, Zhoulyu; Li, Yuhang; Chen, Weiqiu; Song, Jizhou; Verduzco, Rafael; Yu, Cunjiang

    2018-03-01

    Soft robots outperform the conventional hard robots on significantly enhanced safety, adaptability, and complex motions. The development of fully soft robots, especially fully from smart soft materials to mimic soft animals, is still nascent. In addition, to date, existing soft robots cannot adapt themselves to the surrounding environment, i.e., sensing and adaptive motion or response, like animals. Here, compliant ultrathin sensing and actuating electronics innervated fully soft robots that can sense the environment and perform soft bodied crawling adaptively, mimicking an inchworm, are reported. The soft robots are constructed with actuators of open-mesh shaped ultrathin deformable heaters, sensors of single-crystal Si optoelectronic photodetectors, and thermally responsive artificial muscle of carbon-black-doped liquid-crystal elastomer (LCE-CB) nanocomposite. The results demonstrate that adaptive crawling locomotion can be realized through the conjugation of sensing and actuation, where the sensors sense the environment and actuators respond correspondingly to control the locomotion autonomously through regulating the deformation of LCE-CB bimorphs and the locomotion of the robots. The strategy of innervating soft sensing and actuating electronics with artificial muscles paves the way for the development of smart autonomous soft robots. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Help prevent hospital errors

    Science.gov (United States)

    MedlinePlus patient instructions page (medlineplus.gov/ency/patientinstructions/000618.htm) on how to help prevent hospital errors, including advice on keeping yourself safe if you are having surgery.

  13. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  14. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
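
    The point that coarse rounding produces a biased rounding error that is correlated with the weighing error can be checked with a short simulation; the weight, scale precision, and rounding step below are illustrative values only and are unrelated to the MERDA program mentioned above.

      import random

      random.seed(1)
      true_weight = 10.37   # arbitrary true value
      sigma = 0.02          # weighing (scale) standard deviation
      step = 0.1            # coarse rounding step, five times the scale sigma

      weigh_err, round_err = [], []
      for _ in range(10000):
          e = random.gauss(0.0, sigma)            # weighing error
          reading = true_weight + e
          rounded = round(reading / step) * step  # coarse data grouping
          weigh_err.append(e)
          round_err.append(rounded - reading)     # rounding error

      mean_r = sum(round_err) / len(round_err)
      cov = sum(w * r for w, r in zip(weigh_err, round_err)) / len(round_err)
      print(f"mean rounding error = {mean_r:+.4f}, cov with weighing error = {cov:+.6f}")
      # With step >> sigma the mean is clearly nonzero and the covariance is negative,
      # i.e. the rounding error can no longer be treated as independent, zero-mean noise.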

  15. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  16. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for her customers. It appeared that in the year 2000 many small, but also big errors were discovered in the bills of 42 businesses

  17. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  18. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  19. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  20. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of total accidents attributable to human error ranged widely, from 20∼85%, and was 35% on average. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate of occurrence of human error is 0∼0.5 cases/reactor-year and did not vary much. Therefore, the proportion attributable to human error has tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, problems of organization and management have been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)