WorldWideScience

Sample records for soft error rate

  1. Evaluation of soft errors rate in a commercial memory EEPROM

    International Nuclear Information System (INIS)

    Claro, Luiz H.; Silva, A.A.; Santos, Jose A.

    2011-01-01

    Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. When collected, these charges can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. Soft error susceptibility differs between memory technologies, so experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented along with SER results for a commercial electrically erasable and programmable read-only memory (EEPROM). (author)

  2. Accelerated testing for cosmic soft-error rate

    International Nuclear Information System (INIS)

    Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; O'Gorman, T.J.; Ross, J.M.

    1996-01-01

    This paper describes the experimental techniques which have been developed at IBM to determine the sensitivity of electronic circuits to cosmic rays at sea level. It relates IBM circuit design and modeling, chip manufacture with process variations, and chip testing for SER sensitivity. This vertical integration from design to final test, with feedback to design, allows a complete picture of LSI sensitivity to cosmic rays. Since advanced computers are designed with LSI chips long before the chips have been fabricated, and the system architecture is fully formed before the first chips are functional, it is essential to establish chip reliability as early as possible. This paper establishes techniques to test chips that are only partly functional (e.g., only 1 Mb of a 16 Mb memory may be working) and can establish chip soft-error upset rates before final chip manufacturing begins. Simple relationships derived from measurements of more than 80 different chips manufactured over 20 years allow the total cosmic soft-error rate (SER) to be estimated after only limited testing. Comparisons between these accelerated test results and similar tests determined by "field testing" (which may require a year or more of testing after manufacturing begins) show that the experimental techniques are accurate to a factor of 2.
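The scaling logic behind such accelerated testing can be sketched in a few lines: a beam run yields a per-bit upset cross-section, which is then multiplied by the (much lower) natural particle flux to predict a field SER. All numbers below are hypothetical, chosen only to illustrate the arithmetic; they are not IBM's measured values.

```python
# Illustrative arithmetic behind accelerated SER testing (all numbers hypothetical).
# An accelerated beam delivers in minutes the fluence a chip would see in years
# of field exposure; the measured cross-section then scales to a field SER.

def seu_cross_section(upsets: int, fluence_n_per_cm2: float, bits: int) -> float:
    """Per-bit SEU cross-section in cm^2/bit."""
    return upsets / (fluence_n_per_cm2 * bits)

def field_ser_fit(cross_section_cm2: float, flux_n_per_cm2_h: float, bits: int) -> float:
    """Soft-error rate in FIT (failures per 1e9 device-hours)."""
    return cross_section_cm2 * flux_n_per_cm2_h * bits * 1e9

# Hypothetical beam run: 50 upsets observed in a 16 Mbit chip after 1e11 n/cm^2
sigma = seu_cross_section(50, 1e11, 16 * 2**20)
# A commonly quoted sea-level neutron flux (>10 MeV) is on the order of 13 n/cm^2/h
ser = field_ser_fit(sigma, 13.0, 16 * 2**20)
print(f"sigma = {sigma:.3e} cm^2/bit, SER = {ser:.1f} FIT")
```

Note that the bit count cancels between the two formulas, so the per-chip field SER depends only on the upsets per unit beam fluence and the ambient flux.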

  3. Soft error rate analysis methodology of multi-Pulse-single-event transients

    International Nuclear Information System (INIS)

    Zhou Bin; Huo Mingxue; Xiao Liyi

    2012-01-01

    As transistor feature size scales down, soft errors in combinational logic caused by high-energy particle radiation are drawing more and more concern. In this paper, a combinational logic soft error analysis methodology considering multi-pulse-single-event transients (MPSETs) and re-convergence with multiple transient pulses is proposed. In the proposed approach, the voltage pulse produced at the standard cell output is approximated by a triangle waveform and characterized by three parameters: pulse width, the transition time of the first edge, and the transition time of the second edge. For pulses whose amplitude is smaller than the supply voltage, an edge extension technique is proposed. Moreover, an efficient electrical masking model that comprehensively considers transition time, delay, width and amplitude is proposed, together with an approach that uses the transition times of the two edges and the pulse width to compute the pulse amplitude. Finally, our proposed firstly-independently-propagating-secondly-mutually-interacting (FIP-SMI) scheme is used to handle the more practical case of re-convergence gates with multiple transient pulses. For MPSETs, a random generation model is also proposed. Compared to estimates obtained from circuit-level simulations with HSpice, our proposed soft error rate analysis algorithm shows 10% error in SER estimation with a speedup of 300 when single-pulse-single-event transients (SPSETs) are considered. We have also demonstrated that the runtime and SER decrease as P0 increases, using designs from the ISCAS-85 benchmarks. (authors)
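The electrical-masking idea in this abstract can be illustrated with a toy model (this is not the authors' FIP-SMI algorithm; the two-times-gate-delay threshold and all parameter values are illustrative assumptions):

```python
# A toy electrical-masking check for a triangle-shaped SET pulse.
# Not the paper's model: the filtering rule and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class TrianglePulse:
    width: float      # base width (ps)
    rise: float       # transition time of first edge (ps)
    fall: float       # transition time of second edge (ps)
    amplitude: float  # peak voltage (V)

def propagates(pulse: TrianglePulse, gate_delay: float, vdd: float) -> bool:
    """Crude masking rule: a pulse is filtered if it is narrower than
    roughly twice the gate delay or never crosses the switching threshold."""
    return pulse.width > 2.0 * gate_delay and pulse.amplitude > 0.5 * vdd

print(propagates(TrianglePulse(120, 20, 30, 1.0), gate_delay=40, vdd=1.2))  # True
print(propagates(TrianglePulse(60, 20, 30, 1.0), gate_delay=40, vdd=1.2))   # False
```

A real tool would attenuate the pulse's width and amplitude gate by gate rather than apply a single pass/fail rule, but the three-parameter triangle characterization is the same.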

  4. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...

  5. Terrestrial neutron-induced soft errors in advanced memory devices

    CERN Document Server

    Nakamura, Takashi; Ibe, Eishi; Yahagi, Yasuo; Kameyama, Hideaki

    2008-01-01

    Terrestrial neutron-induced soft errors in semiconductor memory devices are currently a major reliability concern. Understanding the mechanism and quantifying soft-error rates are crucial for the design and quality assurance of semiconductor memory devices. This book covers the relevant up-to-date topics in terrestrial neutron-induced soft errors, and aims to provide readers with succinct knowledge of neutron-induced soft errors through several valuable and unique features. Sample Chapter(s). Chapter 1: Introduction (238 KB). Table A.30 mentioned in Appendix A.6 on

  6. Modeling the cosmic-ray-induced soft-error rate in integrated circuits: An overview

    International Nuclear Information System (INIS)

    Srinivasan, G.R.

    1996-01-01

    This paper is an overview of the concepts and methodologies used to predict soft-error rates (SER) due to cosmic and high-energy particle radiation in integrated circuit chips. The paper emphasizes the need for the SER simulation using the actual chip circuit model which includes device, process, and technology parameters as opposed to using either the discrete device simulation or generic circuit simulation that is commonly employed in SER modeling. Concepts such as funneling, event-by-event simulation, nuclear history files, critical charge, and charge sharing are examined. Also discussed are the relative importance of elastic and inelastic nuclear collisions, rare event statistics, and device vs. circuit simulations. The semi-empirical methodologies used in the aerospace community to arrive at SERs [also referred to as single-event upset (SEU) rates] in integrated circuit chips are reviewed. This paper is one of four in this special issue relating to SER modeling. Together, they provide a comprehensive account of this modeling effort, which has resulted in a unique modeling tool called the Soft-Error Monte Carlo Model, or SEMM.

  7. Alpha-particle-induced soft errors in high speed bipolar RAM

    International Nuclear Information System (INIS)

    Mitsusada, Kazumichi; Kato, Yukio; Yamaguchi, Kunihiko; Inadachi, Masaaki

    1980-01-01

    As bipolar RAM (Random Access Memory) has been improved into a fast, highly integrated device, problems that were negligible in the past can no longer be ignored. Alpha particles emitted from radioactive substances in semiconductor package materials deserve particular attention, since they cause soft errors. The authors experimentally produced a special 1 kbit bipolar RAM to investigate its soft errors. The package used was the standard 16-pin dual in-line type, with which a practical system mounting test and an alpha-particle irradiation test were performed. The results showed soft errors occurring at an average rate of about 1 bit/700 device hours. It is concluded that the cause was alpha particles emitted from the package materials, and it was also found that the soft error rate could be greatly reduced by shielding against alpha particles. The error rate increased significantly as the stand-by current of the memory cells decreased and with the accumulated charge determined by the time constant. The mechanism of soft errors was also investigated, for which an approximate model estimating the error rate from the effective noise charge due to alpha particles and the amount of reversible charge in the memory cells is presented and compared with the experimental results. (Wakatsuki, Y.)

  8. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To give readers a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  9. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    Science.gov (United States)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies, as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or soft error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  10. Neutron-induced soft errors in CMOS circuits

    International Nuclear Information System (INIS)

    Hazucha, P.

    1999-01-01

    The subject of this thesis is a systematic study of soft errors occurring in CMOS integrated circuits when exposed to radiation. The vast majority of commercial circuits operate in the natural environment ranging from sea level to aircraft flight altitudes (less than 20 km), where the errors are caused mainly by the interaction of atmospheric neutrons with silicon. Initially, the soft error rate (SER) of a static memory was measured for supply voltages from 2 V to 5 V under irradiation by 14 MeV and 100 MeV neutrons. The increased error rate at decreased supply voltage has been identified as a potential hazard for the operation of future low-voltage circuits. A novel methodology was proposed for accurate SER characterization of a manufacturing process, and it was validated by measurements on a 0.6 μm process with 100 MeV neutrons. The methodology can be applied to the prediction of SER in the natural environment.

  11. An Investigation into Soft Error Detection Efficiency at Operating System Level

    OpenAIRE

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and soft...

  12. An investigation into soft error detection efficiency at operating system level.

    Science.gov (United States)

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. Transient errors, or soft errors, emerge in two forms: control flow errors (CFEs) and data errors. Valuable research results for their alleviation have already appeared in the literature at the hardware and software levels. However, these works rest on the basic assumption that the operating system is reliable, and they focus on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by making operating system level components resilient to soft errors, both operating system and application level components gain tolerance.

  13. An Investigation into Soft Error Detection Efficiency at Operating System Level

    Directory of Open Access Journals (Sweden)

    Seyyed Amir Asghari

    2014-01-01

    Full Text Available Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. Transient errors, or soft errors, emerge in two forms: control flow errors (CFEs) and data errors. Valuable research results for their alleviation have already appeared in the literature at the hardware and software levels. However, these works rest on the basic assumption that the operating system is reliable, and they focus on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by making operating system level components resilient to soft errors, both operating system and application level components gain tolerance.

  14. Field testing for cosmic ray soft errors in semiconductor memories

    International Nuclear Information System (INIS)

    O'Gorman, T.J.; Ross, J.M.; Taber, A.H.; Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; Walsh, J.L.

    1996-01-01

    This paper presents a review of experiments performed by IBM to investigate the causes of soft errors in semiconductor memory chips under field test conditions. The effects of alpha-particles and cosmic rays are separated by comparing multiple measurements of the soft-error rate (SER) of samples of memory chips deep underground and at various altitudes above the earth. The results of case studies on four different memory chips show that cosmic rays are an important source of the ionizing radiation that causes soft errors. The results of field testing are used to confirm the accuracy of the modeling and the accelerated testing of chips

  15. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  16. Soft errors in modern electronic systems

    CERN Document Server

    Nicolaidis, Michael

    2010-01-01

    This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating the soft errors effect in advanced electronics, including the fundamental physical mechanisms of radiation induced soft errors, the various steps that lead to a system failure, the modelling and simulation of soft error at various levels (including physical, electrical, netlist, event driven, RTL, and system level modelling and simulation), hardware fault injection, accelerated radiation testing and natural environment testing, s

  17. A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience

    Science.gov (United States)

    Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.

    2017-06-01

    A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro with further enhanced soft-error resilience obtained by integrating the guard-gate technique. The proposed design, as well as a reference Quatro and a regular flip-flop, was implemented and manufactured in a 65-nm CMOS bulk technology. Experimental characterization of their alpha and heavy-ion soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.

  18. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2015-01-01

    Human reliability analysis (HRA) methods such as the Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H) have been used in relation to NPP maintenance and operation. Most of these methods were developed for the conventional type of Main Control Room (MCR). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has changed considerably with the adoption of new human-system interfaces such as computer-based soft controls. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in NPP advanced MCRs are performed through them. Consequently, the conventional methods may not sufficiently consider the features of soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested, based on a soft control task analysis and a literature review of widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors that an HRA method for advanced MCRs should encompass are derived from the literature review and the soft control task analysis. Based on these factors, an execution HRA framework for advanced MCRs is developed, focusing mainly on the features of soft controls. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed for digital HSIs, an HRA database is developed through lab-scale simulation.

  19. Modelling and mitigation of soft-errors in CMOS processors

    NARCIS (Netherlands)

    Rohani, A.

    2014-01-01

    The topic of this thesis is about soft-errors in digital systems. Different aspects of soft-errors have been addressed here, including an accurate simulation model to emulate soft-errors in a gate-level net list, a simulation framework to study the impact of soft-errors in a VHDL design and an

  20. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest to use a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the needed number of samples to estimate the BER in order to reduce the required simulation run-time, even at very low BER.
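The core idea, fitting a density to the soft samples and integrating it analytically instead of counting rare errors, can be sketched with a single Gaussian standing in for the paper's EM-fitted mixture (a deliberate simplification; the SNR and sample count below are arbitrary):

```python
# Soft-sample BER estimation, simplified: instead of Monte Carlo error counting,
# fit a density to the soft observations and evaluate the tail analytically.
# Here one Gaussian stands in for the paper's EM-fitted Gaussian mixture.
import math
import random
import statistics

random.seed(0)
snr_linear = 10.0  # hypothetical SNR
# Soft samples for a transmitted "+1" bit over an AWGN channel
samples = [1.0 + random.gauss(0.0, 1.0 / math.sqrt(snr_linear)) for _ in range(5000)]

mu = statistics.fmean(samples)
sigma = statistics.stdev(samples)

# Analytical BER: probability that a "+1" sample falls below the 0 threshold
ber = 0.5 * math.erfc(mu / (sigma * math.sqrt(2.0)))
print(f"estimated BER = {ber:.2e}")
```

The payoff is the one the abstract claims: a few thousand samples suffice to estimate a BER near 1e-3 or lower, whereas direct Monte Carlo counting would need orders of magnitude more.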

  1. Calculation of the soft error rate of submicron CMOS logic circuits

    International Nuclear Information System (INIS)

    Juhnke, T.; Klar, H.

    1995-01-01

    A method to calculate the soft error rate (SER) of CMOS logic circuits with dynamic pipeline registers is described. This method takes into account charge collection by drift and diffusion. The method is verified by comparing calculated SERs to measurement results. Using this method, the SER of a highly pipelined multiplier is calculated as a function of supply voltage for 0.6 μm, 0.3 μm, and 0.12 μm technologies, respectively. It has been found that the SER of such highly pipelined submicron CMOS circuits may become too high, so that countermeasures have to be taken. Since the SER greatly increases with decreasing supply voltage, low-power/low-voltage circuits may show more than eight times the SER at half the normal supply voltage as compared to conventional designs.
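The steep voltage dependence reported here matches the widely used empirical picture in which critical charge scales roughly with node capacitance times Vdd, and the upset rate falls off exponentially in critical charge. A sketch with purely illustrative constants (not the paper's model or parameters):

```python
# Why SER rises steeply at low supply voltage: Qcrit ~ C * Vdd, and the upset
# rate falls roughly as exp(-Qcrit / Qs) for a technology-dependent charge Qs.
# All constants here are hypothetical, chosen only to show the trend.
import math

def ser_au(vdd: float, node_cap_fC_per_V: float = 10.0, q_s_fC: float = 5.0) -> float:
    q_crit = node_cap_fC_per_V * vdd   # critical charge in fC
    return math.exp(-q_crit / q_s_fC)  # SER in arbitrary units

for vdd in (5.0, 3.3, 2.5):
    print(f"Vdd = {vdd:.1f} V -> relative SER {ser_au(vdd) / ser_au(5.0):.1f}x")
```

With these toy constants, halving Vdd from 5 V to 2.5 V multiplies the SER by exp(5), about 150x; the exact factor depends entirely on the chosen capacitance and collection charge, which is why the paper's "more than eight times" figure is technology-specific.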

  2. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    Science.gov (United States)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.

  3. Soft error evaluation in SRAM using α sources

    International Nuclear Information System (INIS)

    He Chaohui; Chu Jun; Ren Xueming; Xia Chunmei; Yang Xiupei; Zhang Weiwei; Wang Hongquan; Xiao Jiangbo; Li Xiaolin

    2006-01-01

    Soft errors in memories directly influence the reliability of products. To compare the resilience of three different memories to soft errors in alpha-particle irradiation experiments, the numbers of soft errors were measured for three different SRAMs, and the single event upset (SEU) cross sections and failures in time (FIT) were calculated. According to the SEU cross sections, A166M has the best immunity to soft errors, followed by B166M and then B200M. The average FIT of B166M is smaller than that of B200M, while that of A166M is the largest of the three. (authors)

  4. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan, E-mail: liushuhuan@mail.xjtu.edu.cn; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-21

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate a system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Through qualitative and quantitative fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated.
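The reliability figures named in the abstract follow from the constant-failure-rate (exponential) model commonly assumed in fault tree tools; the rate below is hypothetical, not a measured Zynq-7010 value:

```python
# Basic reliability figures under a constant failure rate (exponential model),
# as typically computed by fault-tree tools. The rate is illustrative only.
import math

lam = 2e-6          # failure rate per hour (hypothetical)
mission_h = 1000.0  # mission time in hours

mttf = 1.0 / lam                              # mean time to failure, hours
unreliability = 1.0 - math.exp(-lam * mission_h)  # prob. of failure by mission end
fit = lam * 1e9                               # same rate expressed in FIT

print(f"MTTF = {mttf:.0f} h, F(1000 h) = {unreliability:.2e}, rate = {fit:.0f} FIT")
```

For a fault tree OR gate over independent basic events, the top-event rate is approximately the sum of the component rates, which is how per-block alpha test results roll up into a system-level figure.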

  5. Soft errors in dynamic random access memories - a basis for dosimetry

    International Nuclear Information System (INIS)

    Haque, A.K.M.M.; Yates, J.; Stevens, D.

    1986-01-01

    The soft error rates of a number of 64k and 256k dRAMs from several manufacturers have been measured, employing an MC 68000 microprocessor. For this 'accelerated test' procedure, a 37 kBq (1 μCi) ²⁴¹Am alpha-emitting source was used. Both 64k and 256k devices exhibited widely differing error rates. It was generally observed that the spread of errors over a particular device/manufacturer was much smaller than the differences between device families and manufacturers. Bit-line errors formed a significant part of the total for 64k dRAMs, whereas in 256k dRAMs cell errors dominated; the latter also showed an enhanced sensitivity to integrated dose leading to total failure, and a time-dependent recovery. Although several theoretical models explain soft error mechanisms and predict responses compatible with our experimental results, it is considered that microdosimetric and track structure methods should be applied to the problem for its better appreciation. Finally, attention is drawn to the need for further studies of dRAMs, with a view to their use as digital dosemeters. (author)

  6. Quantitative estimation of the human error probability during soft control operations

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jung, Wondea

    2013-01-01

    Highlights:
    - An HRA method to evaluate the execution HEP for soft control operations is proposed.
    - Soft control tasks were analyzed and design-related influencing factors were identified.
    - An application evaluating the effects of soft controls was performed.

    Abstract: In this work, a method was proposed for quantifying human errors that can occur during operation executions using soft controls. Soft controls in advanced main control rooms have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to identify the human error modes and quantify the error probabilities in order to evaluate the reliability of the system and prevent errors. This work suggests an evaluation framework for quantifying the execution error probability when using soft controls. In the application results, it was observed that the human error probabilities of soft controls showed both positive and negative results compared with conventional controls, depending on the design quality of the advanced main control rooms.

  7. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Full Text Available Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee correctness of analysis because they utilize approximate real number representations and pseudo random numbers in the analysis and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order logic theorem proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  8. Alpha particle induced soft errors in NMOS RAMs: a review

    International Nuclear Information System (INIS)

    Carter, P.M.; Wilkins, B.R.

    1987-01-01

    The paper aims to explain the alpha particle induced soft error phenomenon using the NMOS dynamic random access memory (RAM) as a model. It discusses some of the many techniques experimented with by manufacturers to overcome the problem, and gives a review of the literature covering most aspects of soft errors in dynamic RAMs. Finally, the soft error performance of current dynamic RAM and static RAM products from several manufacturers are compared. (author)

  9. A Case for Soft Error Detection and Correction in Computational Chemistry.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
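The abstract's observation, that iterative solvers absorb small soft errors but not large ones, can be illustrated with a toy fixed-point solver. This is not the paper's Hartree-Fock code; the linear system, the injection point, and the residual-based restart are all invented for illustration.

```python
import numpy as np

def solve_with_fault(flip_magnitude, check=False):
    """Toy sketch (not the paper's method): solve A x = b by damped
    fixed-point iteration and inject one simulated soft error into x."""
    A = np.diag(np.arange(1.0, 9.0))       # eigenvalues 1..8, well conditioned
    b = np.ones(8)
    x = np.zeros(8)
    alpha = 1.0 / 8.0                      # 1 / lambda_max guarantees contraction
    for it in range(10_000):
        x = x + alpha * (b - A @ x)        # each step shrinks the remaining error
        if it == 10:
            x[0] += flip_magnitude         # simulated silent data corruption
        r = np.linalg.norm(b - A @ x)
        if check and r > 10 * np.linalg.norm(b):
            x = np.zeros(8)                # error detected: restart from safe state
        if r < 1e-10:
            return it + 1                  # iterations needed to converge
    return None
```

A tiny flip leaves the iteration count essentially unchanged, while a large flip inflates it substantially unless the residual check detects the corruption and restarts.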

  10. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    Energy Technology Data Exchange (ETDEWEB)

    Aljneibi, Hanan Salah Ali [Khalifa Univ., Abu Dhabi (United Arab Emirates); Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve the switch from conventional analog-based designs to digital designs in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, which are called soft controls. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in ACRs. Operations using soft controls require operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operations more convenient, but they might also cause new types of human error. In this study, the human error mechanism during soft control operations is studied and modeled to be used for the analysis and enhancement of human performance (or the reduction of human errors) during NPP operation. The developed model would contribute to many applications for improving human performance, HMI designs, and operators' training programs in ACRs. The developed model of the human error mechanism for soft control is based on the following assumptions: a human operator has a certain capacity of cognitive resources, and if the resources required by operating tasks exceed the resources invested by the operator, human error (or poor human performance) is likely to occur (especially a 'slip'); good HMI (Human-Machine Interface) design decreases the required resources; an operator's skillfulness decreases the required resources; and high vigilance increases the invested resources.

  11. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    International Nuclear Information System (INIS)

    Aljneibi, Hanan Salah Ali; Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun

    2015-01-01

    To achieve the switch from conventional analog-based designs to digital designs in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, which are called soft controls. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in ACRs. Operations using soft controls require operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operations more convenient, but they might also cause new types of human error. In this study, the human error mechanism during soft control operations is studied and modeled to be used for the analysis and enhancement of human performance (or the reduction of human errors) during NPP operation. The developed model would contribute to many applications for improving human performance, HMI designs, and operators' training programs in ACRs. The developed model of the human error mechanism for soft control is based on the following assumptions: a human operator has a certain capacity of cognitive resources, and if the resources required by operating tasks exceed the resources invested by the operator, human error (or poor human performance) is likely to occur (especially a 'slip'); good HMI (Human-Machine Interface) design decreases the required resources; an operator's skillfulness decreases the required resources; and high vigilance increases the invested resources.
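The resource-capacity assumptions listed in this record can be written down as a toy scoring model. All coefficients below are hypothetical, chosen only to make the stated monotonic relationships concrete; they are not from the paper.

```python
import math

def slip_probability(task_demand, hmi_quality, skill, vigilance, capacity=1.0):
    """Hypothetical illustration of the resource model described above:
    error likelihood rises when required resources exceed invested resources.
    Good HMI design and skill reduce the required resources; vigilance
    increases the invested resources. All coefficients are invented."""
    required = task_demand * (1.0 - 0.3 * hmi_quality) * (1.0 - 0.3 * skill)
    invested = capacity * (0.5 + 0.5 * vigilance)
    deficit = required - invested
    return 1.0 / (1.0 + math.exp(-8.0 * deficit))   # logistic link to [0, 1]
```

With this form, improving any one of HMI quality, skill, or vigilance strictly lowers the slip probability, which is the qualitative behavior the abstract postulates.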

  12. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Wei He

    2016-01-01

    Full Text Available To estimate the single-event effect (SEE) soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed, and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments were developed to determine the main parameters for the raw soft error vulnerability of the module and the coupling factors. Results indicate that the proposed method is feasible.

  13. Basic human error probabilities in advanced MCRs when using soft control

    International Nuclear Information System (INIS)

    Jang, In Seok; Seong, Poong Hyun; Kang, Hyun Gook; Lee, Seung Jun

    2012-01-01

    In a report on one of the renowned HRA methods, the Technique for Human Error Rate Prediction (THERP), it is pointed out that 'The paucity of actual data on human performance continues to be a major problem for estimating HEPs and performance times in nuclear power plant (NPP) task'. However, another critical difficulty is that most current HRA databases deal with operation in conventional types of MCRs. With the adoption of new human-system interfaces that are based on computer-based technologies, the operation environment of MCRs in NPPs has changed. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, soft controls, and so on, are called advanced MCRs. Because of the different interfaces, different Basic Human Error Probabilities (BHEPs) should be considered in human reliability analyses (HRAs) for advanced MCRs. This study carries out an empirical analysis of human error considering soft controls. The aim of this work is not only to compile a database using the simulator for advanced MCRs but also to compare BHEPs with those of a conventional MCR database.

  14. Soft error modeling and analysis of the Neutron Intercepting Silicon Chip (NISC)

    International Nuclear Information System (INIS)

    Celik, Cihangir; Unlue, Kenan; Narayanan, Vijaykrishnan; Irwin, Mary J.

    2011-01-01

    Soft errors are transient errors caused by excess charge carriers induced primarily by external radiation in semiconductor devices. The soft error phenomenon can be used to detect thermal neutrons with a neutron monitoring/detection system by enhancing soft error occurrences in memory devices. In this way, any semiconductor memory device can be converted into a neutron detection system. Such a device is being developed at The Pennsylvania State University and is named the Neutron Intercepting Silicon Chip (NISC). The NISC is envisioned as a miniature, power-efficient, active/passive neutron sensor/detector system. NISC aims to achieve this goal by introducing 10B-enriched Borophosphosilicate Glass (BPSG) insulation layers in the semiconductor memories. In order to model and analyze the NISC, an analysis tool using Geant4 as the transport and tracking engine was developed for the simulation of charged particle interactions in the semiconductor memory model, named the NISC Soft Error Analysis Tool (NISCSAT). A simple model with a 10B-enriched layer on top of a lumped silicon region was developed to represent the semiconductor memory node. Soft error probability calculations were performed with NISCSAT in both single-node and array configurations to investigate device scaling by using different node dimensions in the model. Mono-energetic, mono-directional thermal and fast neutrons were used as the neutron sources. The soft error contribution of the BPSG layer was also investigated with different 10B contents, and the results are presented in this paper.

  15. Human error mode identification for NPP main control room operations using soft controls

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol

    2011-01-01

    The operation environment of main control rooms (MCRs) in modern nuclear power plants (NPPs) has considerably changed over the years. Advanced MCRs, which have been designed by adapting digital and computer technologies, have simpler interfaces using large display panels, computerized displays, soft controls, computerized procedure systems, and so on. The actions for the NPP operations are performed using soft controls in advanced MCRs. Soft controls have different features from conventional controls. Operators need to navigate the screens to find indicators and controls and manipulate controls using a mouse, touch screens, and so on. Due to these different interfaces, different human errors should be considered in the human reliability analysis (HRA) for advanced MCRs. In this work, human errors that could occur during operation executions using soft controls were analyzed. This work classified the human errors in soft controls into six types, and the reasons that affect the occurrence of the human errors were also analyzed. (author)

  16. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    He, Wei; Wang, Yueke; Xing, Kefei; Yang, Jianwei

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...

  17. An empirical study on the basic human error probabilities for NPP advanced main control room operation using soft control

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Harbi, Mohamed Ali Salem Al; Lee, Seung Jun; Kang, Hyun Gook; Seong, Poong Hyun

    2013-01-01

    Highlights: ► The operation environment of MCRs in NPPs has changed by adopting new HSIs. ► Operating actions in NPP Advanced MCRs are performed by soft control. ► Different basic human error probabilities (BHEPs) should be considered. ► BHEPs in a soft control operation environment are investigated empirically. ► This work will be helpful to verify whether soft control has positive or negative effects. -- Abstract: By adopting new human–system interfaces based on computer technologies, the operation environment of main control rooms (MCRs) in nuclear power plants (NPPs) has changed. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, soft controls, and so on, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are especially important because operating actions in NPP Advanced MCRs are performed through them. Using soft controls such as mouse control, touch screens, and so on, operators can select a specific screen, then choose the controller, and finally manipulate the devices. However, because of the different interfaces between soft control and hardwired conventional control, different basic human error probabilities (BHEPs) should be considered in the Human Reliability Analysis (HRA) for Advanced MCRs. Although there are many HRA methods to assess human reliability, such as the Technique for Human Error Rate Prediction (THERP), the Accident Sequence Evaluation Program (ASEP), the Human Error Assessment and Reduction Technique (HEART), Human Event Repository and Analysis (HERA), the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR), the Cognitive Reliability and Error Analysis Method (CREAM), and so on, these methods have been applied to conventional MCRs and do not consider the new features of Advanced MCRs such as soft controls. As a result, there is an insufficient database for assessing human reliabilities in advanced

  18. When soft controls get slippery: User interfaces and human error

    International Nuclear Information System (INIS)

    Stubler, W.F.; O'Hara, J.M.

    1998-01-01

    Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety.

  19. An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol

    2011-01-01

    In this work, a method was proposed for quantifying human errors that may occur during operation executions using soft control. Soft controls of advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests a modified K-HRA method for quantifying error probability.

  20. Soft errors from particles to circuits

    CERN Document Server

    Autran, Jean-Luc

    2015-01-01

    "Soft Errors: From Particles to Circuits covers all aspects of the design, use, application, performance, and testing of parts, devices, and systems and addresses every perspective from an engineering, scientific, or physical point of view. … Many good texts have been written on similar subjects, but none as thorough, as clear, and as complete as this volume. … [The authors] have mastered the past, absorbed the present, and captured the trends of the future in one of the most important technologies of our time. … An extremely useful text that has succeeded in presenting wit

  1. A Survey of Soft-Error Mitigation Techniques for Non-Volatile Memories

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-02-01

    Full Text Available Non-volatile memories (NVMs) offer superior density and energy characteristics compared to conventional memories; however, NVMs suffer from severe reliability issues that can easily eclipse their energy efficiency advantages. In this paper, we survey architectural techniques for improving the soft-error reliability of NVMs, specifically PCM (phase change memory) and STT-RAM (spin transfer torque RAM). We focus on soft errors such as resistance drift and write disturbance in PCM, and read disturbance and write failures in STT-RAM. By classifying the research works based on key parameters, we highlight their similarities and distinctions. We hope that this survey will underline the crucial importance of addressing NVM reliability for ensuring their system integration and will be useful for researchers, computer architects and processor designers.

  2. A Feasibility Study for Measuring Accurate Chest Compression Depth and Rate on Soft Surfaces Using Two Accelerometers and Spectral Analysis

    Directory of Open Access Journals (Sweden)

    Sofía Ruiz de Gauna

    2016-01-01

    Full Text Available Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin's back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm in the foam and 1.7 mm in the sprung mattress (p<0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without backboard for foam and sprung, respectively (p<0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated for mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces.
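A minimal sketch of the two-accelerometer scheme described above, under simplifying assumptions that are mine, not the paper's: sinusoidal compressions whose frequency falls exactly on an FFT bin, a Hann window, and a 2-second analysis window. Displacement amplitude is recovered from acceleration via |X_disp| = |X_acc| / omega^2, and the mattress displacement is subtracted from the chest displacement.

```python
import numpy as np

def displacement_amplitude(accel, fs):
    """Estimate the dominant oscillation amplitude (m) and frequency (Hz)
    of the displacement underlying an acceleration signal, using the
    spectral relation |X_disp(w)| = |X_acc(w)| / w^2."""
    n = len(accel)
    window = np.hanning(n)
    spec = np.fft.rfft(accel * window)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    usable = freqs > 0.5                            # ignore DC and slow drift
    k = np.argmax(np.abs(spec) * usable)            # dominant spectral bin
    omega = 2.0 * np.pi * freqs[k]
    amp_acc = 2.0 * np.abs(spec[k]) / (n * 0.5)     # Hann coherent gain = 0.5
    return amp_acc / omega**2, freqs[k]

def compression_depth(chest_acc, back_acc, fs):
    """Sternal-spinal depth = chest displacement minus mattress displacement."""
    chest, f = displacement_amplitude(chest_acc, fs)
    mattress, _ = displacement_amplitude(back_acc, fs)
    return chest - mattress, f * 60.0               # depth (m), rate (cpm)
```

On a soft surface the chest accelerometer sees chest plus mattress motion; subtracting the back accelerometer's estimate removes the mattress contribution, which is exactly the overestimation the study corrects for.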

  3. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  4. Study on a new framework of Human Reliability Analysis to evaluate soft control execution error in advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • The operation action in NPP Advanced MCRs is performed by soft control. • New HRA framework should be considered in the HRA for advanced MCRs. • HRA framework for evaluation of soft control execution human error is suggested. • Suggested method will be helpful to analyze human reliability in advance MCRs. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). However, the operating environment of MCRs in NPPs has changed with the adoption of new Human-System Interfaces (HSIs) that are based on computer-based technologies. The MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are a particularly important feature because operating actions in NPP advanced MCRs are performed by soft control. Due to the differences in interfaces between soft control and hardwired conventional type control, different Human Error Probabilities (HEPs) and a new HRA framework should be considered in the HRA for advanced MCRs. To this end, a new framework of a HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and the literature regarding widely accepted human error taxonomies is reviewed. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, empirical analysis of human error and error recovery considering soft controls under an advanced MCR mockup are carried out to collect human error data, which is

  5. Low delay and area efficient soft error correction in arbitration logic

    Science.gov (United States)

    Sugawara, Yutaka

    2013-09-10

    There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from a plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
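A toy software model of the scheme the patent abstract describes: the winner is selected without waiting for the error check (keeping selection latency low), while parity-based detection runs over the stored entries separately. The entry encoding here (4-bit id, 3-bit priority, 1 parity bit) is hypothetical, invented for illustration.

```python
def parity(bits):
    """Even parity over a list of 0/1 values."""
    return sum(bits) % 2

def make_entry(requestor_id, priority):
    """Hypothetical 8-bit request entry: 4-bit id, 3-bit priority, 1 parity bit."""
    bits = [(requestor_id >> i) & 1 for i in range(4)]
    bits += [(priority >> i) & 1 for i in range(3)]
    bits.append(parity(bits))                 # parity over the 7 payload bits
    return bits

def arbitrate(entries):
    """Select the highest-priority requestor WITHOUT first checking its entry
    for soft errors; error detection runs independently on all entries."""
    def prio(e):
        return e[4] | (e[5] << 1) | (e[6] << 2)
    winner = max(range(len(entries)), key=lambda i: prio(entries[i]))
    corrupted = [i for i, e in enumerate(entries) if parity(e[:7]) != e[7]]
    return winner, corrupted
```

Decoupling selection from checking is what makes the hardware version low-delay: the parity network does not sit on the arbitration critical path.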

  6. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
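A minimal sketch of the underlying idea: smooth noisy per-window error-rate estimates with Gaussian-process regression so that a slowly drifting rate can be tracked in real time. The RBF kernel and all hyperparameters below are illustrative choices, not the protocol from the paper.

```python
import numpy as np

def gp_smooth(times, rates, noise_var, length_scale=20.0):
    """Gaussian-process regression with an RBF kernel: posterior mean of the
    underlying error rate given noisy per-window estimates. Sketch only;
    the kernel amplitude is set crudely from the data variance."""
    times = np.asarray(times, float)
    rates = np.asarray(rates, float)
    amp = np.var(rates)                      # crude prior signal variance
    def rbf(a, b):
        d = a[:, None] - b[None, :]
        return amp * np.exp(-0.5 * d**2 / length_scale**2)
    K = rbf(times, times) + noise_var * np.eye(len(times))
    weights = np.linalg.solve(K, rates - rates.mean())
    return rates.mean() + rbf(times, times) @ weights   # posterior mean
```

The posterior mean averages information across neighboring windows, so its worst-case deviation from the true drifting rate is smaller than that of the raw per-window estimates.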

  7. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    Science.gov (United States)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new methodology handles the problem with simple SPICE simulations. It accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross-section vs. frequency behavior and other subtle effects are also accurately predicted.

  8. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    Full Text Available A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h.

  9. FPGAs and parallel architectures for aerospace applications soft errors and fault-tolerant design

    CERN Document Server

    Rech, Paolo

    2016-01-01

    This book introduces the concepts of soft errors in FPGAs, as well as the motivation for using commercial, off-the-shelf (COTS) FPGAs in mission-critical and remote applications, such as aerospace. The authors describe the effects of radiation in FPGAs, present a large set of soft-error mitigation techniques that can be applied in these circuits, as well as methods for qualifying these circuits under radiation. Coverage includes radiation effects in FPGAs, fault-tolerant techniques for FPGAs, use of COTS FPGAs in aerospace applications, experimental data of FPGAs under radiation, FPGA embedded processors under radiation, and fault injection in FPGAs. Since dedicated parallel processing architectures such as GPUs have become more desirable in aerospace applications due to high computational power, GPU analysis under radiation is also discussed. Highlights: discusses features and drawbacks of reconfigurability methods for FPGAs, focused on aerospace applications; explains how radia...

  10. Soft Decision Analyzer

    Science.gov (United States)

    Lansdowne, Chatwin; Steele, Glen; Zucha, Joan; Schlesinger, Adam

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios. This test technique gains importance as modern receivers provide soft decision symbol synchronization and as radio links are challenged to push more data and more protocol overhead through noisier channels, with software-defined radios (SDRs) using error-correction codes that approach Shannon's theoretical limit of performance.
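A sketch of why soft decisions are richer than a hard-decision BER count in a closed-loop setup where the transmitted bits are known: comparing soft outputs against the ideal symbols yields an SNR estimate as well as the BER. This is an illustrative moment-based estimator for BPSK, not the NASA SDA algorithm.

```python
import numpy as np

def soft_decision_stats(sent_bits, soft_symbols):
    """Closed-loop sketch: with the transmitted bits known, compare soft
    receiver outputs against ideal BPSK symbols (+1/-1) to obtain both an
    SNR estimate and the hard-decision BER. Illustrative only."""
    ideal = 1.0 - 2.0 * np.asarray(sent_bits, float)    # bit 0 -> +1, bit 1 -> -1
    soft = np.asarray(soft_symbols, float)
    error_vec = soft - ideal                            # residual noise samples
    snr_db = 10.0 * np.log10(np.mean(ideal**2) / np.var(error_vec))
    ber = np.mean((soft < 0) != (ideal < 0))            # hard-decision errors
    return snr_db, ber
```

A hard-decision test only reports the BER; the soft-decision residuals additionally quantify how much margin the link has, which is what makes them useful for attributing implementation loss.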

  11. Neutron detection using soft errors in dynamic Random Access Memories

    International Nuclear Information System (INIS)

    Darambara, D.G.; Spyrou, N.M.

    1994-01-01

    The purpose of this paper is to present results from experiments performed to show the memory cycle time dependence of the soft errors produced by the interaction of alpha particles with dynamic random access memory devices, with a view to using these as position-sensitive detectors. Furthermore, a preliminary feasibility study indicates that dynamic RAMs can be used as neutron detectors by utilizing (n, α) capture reactions in a Li converter placed on top of the active area of the memory chip. ((orig.))

  12. Radiation effects and soft errors in integrated circuits and electronic devices

    CERN Document Server

    Fleetwood, D M

    2004-01-01

    This book provides a detailed treatment of radiation effects in electronic devices, including effects at the material, device, and circuit levels. The emphasis is on transient effects caused by single ionizing particles (single-event effects and soft errors) and effects produced by the cumulative energy deposited by the radiation (total ionizing dose effects). Bipolar (Si and SiGe), metal-oxide-semiconductor (MOS), and compound semiconductor technologies are discussed. In addition to considering the specific issues associated with high-performance devices and technologies, the book includes th

  13. A software solution to estimate the SEU-induced soft error rate for systems implemented on SRAM-based FPGAs

    International Nuclear Information System (INIS)

    Wang Zhongming; Lu Min; Yao Zhibin; Guo Hongxia

    2011-01-01

    SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanisms in an FPGA's configuration memory differ from those in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet this requirement. In this paper, we propose a new methodology to analyze soft errors in SRAM-based FPGAs. The method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed-and-routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to obtain the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach. (semiconductor integrated circuits)

  14. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)
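
    In the second (small-p) regime described above, counting the minimal error configurations that defeat the decoder gives a leading-order scaling of roughly P_L ≈ A·p^⌈d/2⌉ for a distance-d code. The sketch below illustrates that scaling with an assumed combinatorial prefactor; it is not the paper's decoder or its exact prefactor.

```python
import math

def logical_error_rate_low_p(p, distance, prefactor=None):
    """Leading-order logical error rate in the small-p limit:
    P_L ~ A * p**ceil(d/2). The default prefactor A -- the number of
    ways to place ceil(d/2) errors on one shortest logical path of
    d qubits -- is an illustrative assumption, not an exact count."""
    t = math.ceil(distance / 2)
    if prefactor is None:
        prefactor = math.comb(distance, t)
    return prefactor * p ** t

# Halving the physical error rate at distance 5 cuts P_L by 2**3 = 8,
# since ceil(5/2) = 3:
print(logical_error_rate_low_p(1e-3, 5) / logical_error_rate_low_p(5e-4, 5))
```

    Scaling curves like this are what allow the overhead (total physical qubits for a target logical error rate) to be estimated below threshold.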

  15. 45 CFR 98.100 - Error Rate Report.

    Science.gov (United States)

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... the total dollar amount of payments made in the sample); the average amount of improper payment; and... not received. (e) Costs of Preparing the Error Rate Report—Provided the error rate calculations and...

  16. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes, implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  17. Human error recovery failure probability when using soft controls in computerized control rooms

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Wondea

    2014-01-01

    Many studies have categorized the recovery process into three phases: detection of the problem situation, explanation of problem causes or countermeasures, and completion of recovery. Although the focus of recovery research has been on categorizing recovery phases and modeling the recovery process, research on human recovery failure probabilities has not been actively pursued, and only a few studies have examined recovery failure probabilities empirically. In summary, the research performed so far has several shortcomings with respect to its use in human reliability analysis (HRA). By adopting new human-system interfaces based on computer technologies, the operating environment of MCRs in NPPs has changed from conventional MCRs to advanced MCRs. Because of the different interfaces between conventional and advanced MCRs, different recovery failure probabilities should be considered in the HRA for advanced MCRs. Therefore, this study carries out an empirical analysis of human error recovery probabilities using an advanced MCR mockup called the compact nuclear simulator (CNS). The aim of this work is not only to compile a recovery failure probability database using the simulator for advanced MCRs but also to collect recovery failure probabilities for defined human error modes, in order to compare which human error mode has the highest recovery failure probability. The results show that the recovery failure probability for wrong screen selection was the lowest among the human error modes, meaning that most human errors related to wrong screen selection can be recovered. On the other hand, the recovery failure probabilities of operation selection omission and delayed operation were 1.0. These results imply that once subjects omitted a task in the procedure, they had difficulty finding and recovering from their errors without a supervisor's assistance. Also, wrong screen selection had an effect on delayed operation. That is, wrong screen

  18. Soft factors have an empirically testifiable effect on rating grade

    Directory of Open Access Journals (Sweden)

    Thomas Laufer

    2011-01-01

    This article summarizes the results of an empirical survey demonstrating the effects of soft factors on corporate rating grade. Three different software applications were used. By means of these applications, the soft factors in corporate ratings identified in a related effort were assessed for their impact; all other applicable soft factors were treated in a neutral manner. Based on the assessments supplied by the three applications, the weighted effect of the soft factors was determined, allowing priority charts to be compiled for deploying the factors as a targeted marketing tool. The charts also include the respective positive and negative effects of the hard factors.

  19. High strain-rate soft material characterization via inertial cavitation

    Science.gov (United States)

    Estrada, Jonathan B.; Barajas, Carlos; Henann, David L.; Johnsen, Eric; Franck, Christian

    2018-03-01

    Mechanical characterization of soft materials at high strain-rates is challenging due to their high compliance, slow wave speeds, and non-linear viscoelasticity. Yet, knowledge of their material behavior is paramount across a spectrum of biological and engineering applications from minimizing tissue damage in ultrasound and laser surgeries to diagnosing and mitigating impact injuries. To address this significant experimental hurdle and the need to accurately measure the viscoelastic properties of soft materials at high strain-rates (10³–10⁸ s⁻¹), we present a minimally invasive, local 3D microrheology technique based on inertial microcavitation. By combining high-speed time-lapse imaging with an appropriate theoretical cavitation framework, we demonstrate that this technique has the capability to accurately determine the general viscoelastic material properties of soft matter as compliant as a few kilopascals. Similar to commercial characterization algorithms, we provide the user with significant flexibility in evaluating several constitutive laws to determine the most appropriate physical model for the material under investigation. Given its straightforward implementation into most current microscopy setups, we anticipate that this technique can be easily adopted by anyone interested in characterizing soft material properties at high loading rates including hydrogels, tissues and various polymeric specimens.

  20. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error with R = −0.58 and −0.19 for both studies, respectively. Conclusions: Cine EPID
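
    The correlation analysis used in this study (Pearson R between motion blurring or image noise and the tracking error δ) is straightforward to reproduce. The sketch below uses made-up numbers, not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (not the paper's data): blurring-induced area change of the
# tracked circle vs autotracking error in mm at decreasing frame rates.
blur  = [0.05, 0.10, 0.20, 0.35, 0.50]
error = [0.4,  0.6,  0.9,  1.4,  2.1]
print(pearson_r(blur, error))
```

    A strongly positive R, as reported for motion blurring, indicates that frame averaging degrades tracking mainly through blur rather than through noise.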

  1. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    International Nuclear Information System (INIS)

    Yip, Stephen; Rottmann, Joerg; Berbeco, Ross

    2014-01-01

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error with R = −0.58 and −0.19 for both studies, respectively. Conclusions: Cine EPID

  2. Discrete polyphase matched filtering-based soft timing estimation for mobile wireless systems

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    … of the communication system is conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests on bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft...

  3. Human error and the associated recovery probabilities for soft control being used in the advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operating environment of MCRs in NPPs has changed with the adoption of digital HSIs. • Most current HRA databases are not explicitly designed to deal with digital HSIs. • An empirical analysis for a new HRA database is carried out under an advanced MCR mockup. • It is expected that the results can be used for advanced MCR HRA. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these studies focused on the conventional Main Control Room (MCR) environment. However, the operating environment of MCRs in NPPs has changed with the adoption of new human-system interfaces (HSIs) largely based on up-to-date digital technologies. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in advanced MCRs are performed through them. Due to the difference in interfaces between soft controls and hardwired conventional controls, different HEPs should be used in the HRA for advanced MCRs. Unfortunately, most current HRA databases deal with operations in conventional MCRs and are not explicitly designed to deal with digital Human System Interfaces (HSIs). For this reason, empirical human error and the associated error recovery probabilities were collected from the mockup of an advanced MCR equipped with soft controls. To this end, small-scale experiments were conducted with 48 graduate students from the department of nuclear engineering at the Korea Advanced Institute of Science and Technology (KAIST), and accident scenarios were designed with respect to typical Design Basis Accidents (DBAs) in NPPs, such as Steam Generator Tube Rupture

  4. A statistical approach to estimating effects of performance shaping factors on human error probabilities of soft controls

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Hyun Seong, Poong

    2015-01-01

    Despite recent efforts toward data collection for supporting human reliability analysis, there remains a lack of empirical basis for determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance the empirical basis regarding the effects of the PSFs, a statistical methodology using logistic regression and stepwise variable selection was proposed, and the effects of the PSFs on HEPs related to soft controls were estimated through this methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, procedure quality, practice level, and operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in terms of a multiplicative form. The usefulness and limitations of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods will provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated
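
    The multiplicative form mentioned above follows from the logistic model itself: when HEPs are small, a unit change in a significant PSF multiplies the error odds (approximately the HEP) by exp(coefficient). A minimal sketch, with assumed coefficients rather than the paper's fitted values:

```python
import math

def hep_logistic(coeffs, intercept, factors):
    """Human error probability from a fitted logistic model:
    logit(HEP) = intercept + sum(coef * factor)."""
    z = intercept + sum(c * f for c, f in zip(coeffs, factors))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients (assumed, not from the paper): procedure
# quality, practice level, operation type coded as 0/1 indicators.
coeffs, intercept = [1.2, 0.8, 0.5], -5.0
base = hep_logistic(coeffs, intercept, [0, 0, 0])
poor_procedure = hep_logistic(coeffs, intercept, [1, 0, 0])
# For HEP << 1, flipping one indicator multiplies the HEP by about
# exp(coef); here the ratio is close to exp(1.2) ≈ 3.32.
print(poor_procedure / base)
```

    Stepwise variable selection then amounts to keeping only the coefficients whose inclusion significantly improves the model fit.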

  5. An empirical study on the human error recovery failure probability when using soft controls in NPP advanced MCRs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2014-01-01

    Highlights: • Many researchers have tried to understand the human recovery process or its steps. • Modeling the human recovery process alone is not sufficient for application to HRA. • The operating environment of MCRs in NPPs has changed with the adoption of new HSIs. • Recovery failure probabilities in a soft control operation environment are investigated. • The recovery failure probabilities reported here provide important evidence for expert judgment. - Abstract: It is well known that probabilistic safety assessments (PSAs) today consider not just hardware failures and environmental events that can impact risk, but also human error contributions. Consequently, the focus of reliability and performance management has been on the prevention of human errors and failures rather than on the recovery of human errors. However, the recovery of human errors is as important as the prevention of human errors and failures for the safe operation of nuclear power plants (NPPs). For this reason, many researchers have tried to characterize the human recovery process or its steps. However, modeling the human recovery process alone is not sufficient for human reliability analysis (HRA), which requires human error and recovery probabilities. In this study, therefore, human error recovery failure probabilities based on predefined human error modes were investigated by conducting experiments in an operation mockup of advanced/digital main control rooms (MCRs) in NPPs. To this end, 48 subjects majoring in nuclear engineering participated in the experiments. In the experiments, using a developed accident scenario based on tasks from the standard post-trip action (SPTA), the steam generator tube rupture (SGTR), and predominant soft control tasks derived from the loss of coolant accident (LOCA) and the excess steam demand event (ESDE), all error detection and recovery data based on human error modes were checked with the performance sheet, and the statistical analysis of error recovery/detection was then

  6. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  7. High energy hadron-induced errors in memory chips

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, R.J. [University of Colorado, Boulder, CO (United States)

    2001-09-01

    We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurements of alpha-particle yields from pions on aluminum, as a surrogate for silicon, indicate that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ_SEU = induced errors / (number of sample bits × fluence in particles/cm²). We compare σ_SEU to find relations among results for these beams, and relations to reaction cross sections in order to systematize effects. We have modelled cosmic ray effects upon the components we have studied. (Author)
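
    The cross-section definition above translates directly into code, and combining it with an ambient flux gives a field failure-rate estimate. The sketch below is a hedged illustration: the FIT conversion is a standard one, but all numbers are assumptions, not the paper's data.

```python
def seu_cross_section(errors, sample_bits, fluence_per_cm2):
    """SEU cross section per bit, sigma_SEU = errors / (bits * fluence),
    in cm^2/bit, following the definition in the abstract."""
    return errors / (sample_bits * fluence_per_cm2)

def fit_rate(sigma_cm2_per_bit, flux_per_cm2_s, bits):
    """Failure-in-time rate (errors per 1e9 device-hours) for a device
    exposed to a given ambient particle flux -- assumed inputs."""
    errors_per_hour = sigma_cm2_per_bit * flux_per_cm2_s * 3600 * bits
    return errors_per_hour * 1e9

# Illustrative: 120 upsets observed in a 64 Mb dRAM after an accelerator
# fluence of 1e10 particles/cm^2 (assumed, not measured values).
sigma = seu_cross_section(120, 64 * 2**20, 1e10)
print(f"sigma_SEU = {sigma:.2e} cm^2/bit")
```

    Scaling accelerator-measured σ_SEU by the natural neutron flux at flight altitude is exactly how the avionics failure rates mentioned in the abstract are evaluated.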

  8. High energy hadron-induced errors in memory chips

    International Nuclear Information System (INIS)

    Peterson, R.J.

    2001-01-01

    We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurements of alpha-particle yields from pions on aluminum, as a surrogate for silicon, indicate that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ_SEU = induced errors / (number of sample bits × fluence in particles/cm²). We compare σ_SEU to find relations among results for these beams, and relations to reaction cross sections in order to systematize effects. We have modelled cosmic ray effects upon the components we have studied. (Author)

  9. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  10. Technological Advancements and Error Rates in Radiation Therapy Delivery

    International Nuclear Information System (INIS)

    Margalit, Danielle N.; Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K.

    2011-01-01

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)–conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women’s Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher’s exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01–0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08–0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  11. Neutron detection using soft errors in dynamic random access memories

    International Nuclear Information System (INIS)

    Darambara, D.G.; Spyrou, N.M.

    1992-01-01

    The fact that energetic alpha particles have been observed to be capable of inducing single-event upsets in integrated circuit memories has become a topic of considerable interest in the past few years. One recognized difficulty with dynamic random access memory devices (dRAMs) is that the alpha-particle 'contamination' present within the dRAM encapsulating material interacts sufficiently to corrupt stored data. The authors essentially utilized the fact that these corruptions may be induced in dRAMs by the interaction of charged particles with the chip of the dRAM itself as the basis of a hardware system for neutron detection, with a view to applications in neutron imaging and elemental analysis. The design incorporates a bank of dRAMs on which the particles are incident. Initially, these particles were alpha particles from an appropriate alpha-emitting source, employed to assess system parameters. The sensitivity of the device to logic state upsets by ionizing radiation is a function of design and technology parameters, including storage node area, node capacitance, operating voltage, minority carrier lifetime, electric field patterns in the bulk silicon, and specific device geometry. The soft error rate of the device in a given package depends on the flux of alphas, the energy spectrum, the distribution of incident angles, the target area, the total stored charge, the collection efficiency, the cell geometry, the supply voltage, the cycle and refresh time, and the noise margin
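
    The detection principle, writing a known pattern into the dRAM bank and counting the bit flips read back after an exposure interval, can be sketched as follows (the word layout and flipped bits are invented for illustration):

```python
def count_upsets(written, read_back):
    """Count single-event upsets by XOR-comparing the pattern written
    to a dRAM bank with the pattern read back after exposure -- the
    basic detection step of a dRAM-based radiation detector (sketch)."""
    return sum(bin(w ^ r).count("1") for w, r in zip(written, read_back))

# Illustrative: a 4-word bank written with all zeros; two bits flipped
# by incident particles during the exposure.
written   = [0x0000, 0x0000, 0x0000, 0x0000]
read_back = [0x0004, 0x0000, 0x8000, 0x0000]
print(count_upsets(written, read_back))  # 2
```

    In an imaging application, the row/column address of each flipped bit additionally gives the position of the interaction on the chip.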

  12. A navigation system for percutaneous needle interventions based on PET/CT images: design, workflow and error analysis of soft tissue and bone punctures.

    Science.gov (United States)

    Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan

    2011-01-01

    Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well

  13. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yip, S; Rottmann, J; Berbeco, R [Brigham and Women's Hospital, Boston, MA (United States)

    2014-06-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29Hz can substantially reduce the
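
    The frame-averaging scheme described above is simple to sketch: averaging every n consecutive frames divides the 12.87 Hz base rate by n, so n = 3 gives the 4.29 Hz threshold the study reports. A minimal illustration (the helper names are ours, not from the study), including the Pearson correlation used to relate blur and tracking error:

```python
import math

def average_frames(frames, n):
    """Average every n consecutive frames, reducing the effective frame rate by n."""
    return [sum(frames[i:i + n]) / n for i in range(0, len(frames) - n + 1, n)]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# 12.87 Hz base rate; averaging 3 frames gives 4.29 Hz, averaging 6 gives ~2.15 Hz
base_rate = 12.87
print(base_rate / 3)  # ≈ 4.29
```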

  14. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    International Nuclear Information System (INIS)

    Yip, S; Rottmann, J; Berbeco, R

    2014-01-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29Hz can substantially reduce the

  15. Dispensing error rate after implementation of an automated pharmacy carousel system.

    Science.gov (United States)

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing error rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by the May through June 2006 postimplementation period. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  16. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
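
    The sub-1-FIT claim above can be illustrated with the standard FIT arithmetic (1 FIT = 1 failure per 10^9 device-hours). The per-bit upset rate below is an illustrative placeholder, not a figure from the paper:

```python
# FIT (failures in time) = expected failures per 10^9 device-hours.
def fit_rate(seu_per_bit_hour, n_bits):
    """Total soft-error FIT for a memory of n_bits given a per-bit upset rate."""
    return seu_per_bit_hour * n_bits * 1e9

n_bits = 2**30        # 1 Gbit working memory
per_bit = 5e-19       # illustrative per-bit upset rate, upsets per bit-hour
print(fit_rate(per_bit, n_bits))  # ≈ 0.54, i.e. less than 1 FIT
```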

  17. Closed-Loop Analysis of Soft Decisions for Serial Links

    Science.gov (United States)

    Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.

  18. Analyzing Reliability and Performance Trade-Offs of HLS-Based Designs in SRAM-Based FPGAs Under Soft Errors

    Science.gov (United States)

    Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.

    2017-02-01

    The increasing system complexity of FPGA-based hardware designs and shortening of time-to-market have motivated the adoption of new designing methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs, which can serve the needs of safety-critical applications that require both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture with a small impact in the cross section (increasing up to 8 times), and still increasing the Mean Workload Between Failures (MWBF) of the system.
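
    The performance/reliability trade-off reported above follows directly from the usual MWBF definition: MWBF scales with throughput and inversely with cross section. A sketch using the paper's relative factors (the absolute baseline values are arbitrary):

```python
# MWBF (Mean Workload Between Failures) ∝ throughput / (cross_section × flux).
def mwbf(throughput, cross_section, flux):
    """Workload completed, on average, between radiation-induced failures."""
    return throughput / (cross_section * flux)

# Relative comparison: 5000x the performance at 8x the cross section.
base = mwbf(throughput=1.0, cross_section=1.0, flux=1.0)
hls = mwbf(throughput=5000.0, cross_section=8.0, flux=1.0)
print(hls / base)  # 625.0 → MWBF still improves by a factor of 625
```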

  19. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main phases of Six Sigma are define, measure, analyse, improve and control (DMAIC). Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytic, analytic and post-analytic phases was analysed. Improvement strategies were reviewed in the monthly intradepartmental meetings, and closer control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates mainly in the pre-analytic and analytic phases.
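
    The per-million rates quoted above are the standard Six Sigma DPMO (defects per million opportunities) metric. A minimal illustration; the defect and opportunity counts below are invented for the example, not taken from the study:

```python
def dpmo(defects, opportunities):
    """Defects per million opportunities (3.4 DPMO corresponds to Six Sigma)."""
    return defects / opportunities * 1_000_000

def percent_reduction(before, after):
    return (before - after) / before * 100

# Illustrative counts: 34 errors in 5 million events → 6.8 DPMO
print(dpmo(34, 5_000_000))                     # 6.8
print(round(percent_reduction(6.8, 1.3), 1))   # ≈ 80.9; the study's 79.77%
                                               # presumably reflects unrounded counts
```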

  20. Soft black hole absorption rates as conservation laws

    Energy Technology Data Exchange (ETDEWEB)

    Avery, Steven G. [Brown University, Department of Physics, 182 Hope St, Providence, RI, 02912 (United States); Michigan State University, Department of Physics and Astronomy, East Lansing, MI, 48824 (United States); Schwab, Burkhard W. [Harvard University, Center for Mathematical Science and Applications, 1 Oxford St, Cambridge, MA, 02138 (United States)

    2017-04-10

    The absorption rate of low-energy, or soft, electromagnetic radiation by spherically symmetric black holes in arbitrary dimensions is shown to be fixed by conservation of energy and large gauge transformations. We interpret this result as the explicit realization of the Hawking-Perry-Strominger Ward identity for large gauge transformations in the background of a non-evaporating black hole. Along the way we rederive and extend previous analytic results regarding the absorption rate for the minimal scalar and the photon.

  1. Soft black hole absorption rates as conservation laws

    International Nuclear Information System (INIS)

    Avery, Steven G.; Schwab, Burkhard W.

    2017-01-01

    The absorption rate of low-energy, or soft, electromagnetic radiation by spherically symmetric black holes in arbitrary dimensions is shown to be fixed by conservation of energy and large gauge transformations. We interpret this result as the explicit realization of the Hawking-Perry-Strominger Ward identity for large gauge transformations in the background of a non-evaporating black hole. Along the way we rederive and extend previous analytic results regarding the absorption rate for the minimal scalar and the photon.

  2. Sporadic error probability due to alpha particles in dynamic memories of various technologies

    International Nuclear Information System (INIS)

    Edwards, D.G.

    1980-01-01

    The sensitivity of MOS memory components to errors induced by alpha particles is expected to increase with integration level. The soft error rate of a 65-kbit VMOS memory has been compared experimentally with that of three field-proven 16-kbit designs. The technological and design advantages of the VMOS RAM ensure an error rate which is lower than those of the 16-kbit memories. Calculation of the error probability for the 65-kbit RAM and comparison with the measurements show that for large duty cycles single particle hits lead to sensing errors and for small duty cycles cell errors caused by multiple hits predominate. (Auth.)

  3. Analysis of gross error rates in operation of commercial nuclear power stations

    International Nuclear Information System (INIS)

    Joos, D.W.; Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    Experience in operation of US commercial nuclear power plants is reviewed over a 25-month period. The reports accumulated in that period on events of human error and component failure are examined to evaluate gross operator error rates. The impact of such errors on plant operation and safety is examined through the use of proper taxonomies of error, tasks and failures. Four categories of human errors are considered; namely, operator, maintenance, installation and administrative. The computed error rates are used to examine appropriate operator models for evaluation of operator reliability. Human error rates are found to be significant to a varying degree in both BWR and PWR. This emphasizes the import of considering human factors in safety and reliability analysis of nuclear systems. The results also indicate that human errors, and especially operator errors, do indeed follow the exponential reliability model. (Auth.)
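
    The exponential reliability model the study confirms for operator errors is R(t) = exp(-λt): the probability of error-free operation over time t at a constant error rate λ. A small numeric sketch with an illustrative (not source-derived) rate:

```python
import math

def reliability(rate, t):
    """Exponential reliability model: probability of no error in time t,
    given a constant error rate (errors per unit time)."""
    return math.exp(-rate * t)

# Illustrative: an operator error rate of 0.01 errors/hour over an 8-hour shift
print(round(reliability(0.01, 8), 4))  # 0.9231
```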

  4. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ

  5. Classification based upon gene expression data: bias and precision of error rates.

    Science.gov (United States)

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
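
    The optimization bias discussed above is easy to reproduce: selecting the best of many classifiers on non-informative data yields an apparent error rate well below the 50% chance level, while the mean over all of them (analogous to the permutation mean) stays unbiased. A self-contained simulation, not the authors' code:

```python
import random

random.seed(0)
n, n_classifiers = 100, 200
labels = [random.randint(0, 1) for _ in range(n)]  # non-informative dataset

def error_rate(preds, truth):
    return sum(p != t for p, t in zip(preds, truth)) / len(truth)

# Each "classifier" is pure noise: it guesses labels at random.
errors = [error_rate([random.randint(0, 1) for _ in range(n)], labels)
          for _ in range(n_classifiers)]

# Reporting only the best of many noise classifiers looks better than chance:
print(min(errors))                 # well below 0.5 → optimization bias
print(sum(errors) / len(errors))   # ≈ 0.5 → the mean stays unbiased
```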

  6. A critique of recent models for human error rate assessment

    International Nuclear Information System (INIS)

    Apostolakis, G.E.

    1988-01-01

    This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent. (author)
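
    The SLIM-MAUD derivation reviewed above combines weighted ratings into a success likelihood index (SLI) and maps it to a human error probability (HEP) through a log-linear calibration fixed by two anchor tasks of known error probability. A sketch with invented weights, ratings, and calibration constants:

```python
def sli(weights, ratings):
    """Success likelihood index: weighted sum of normalized PSF ratings."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * r for w, r in zip(weights, ratings))

def hep_from_sli(s, a, b):
    """SLIM calibration: log10(HEP) = a*SLI + b, with a and b fixed by
    two anchor tasks of known error probability."""
    return 10 ** (a * s + b)

# Illustrative values (not from the paper): three PSFs, anchors give a=-3, b=-1
s = sli([0.5, 0.3, 0.2], [0.8, 0.6, 0.9])   # 0.76
print(hep_from_sli(s, a=-3.0, b=-1.0))       # ≈ 5.2e-4
```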

  7. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    Science.gov (United States)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection has the best performance but needs the returning-to-initial-states (RTIS) process. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10⁻⁵) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  8. Individual Differences and Rating Errors in First Impressions of Psychopathy

    Directory of Open Access Journals (Sweden)

    Christopher T. A. Gillen

    2016-10-01

    The current study is the first to investigate whether individual differences in personality are related to improved first-impression accuracy when appraising psychopathy in female offenders from thin slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and as rater neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.

  9. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Background: Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST-matched sequences. Results: We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence-similarity-based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion: While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, so designers of these systems should consider avoiding ISS annotations where possible, and users should view such predictions sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high-quality source of information.
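
    The estimation idea described above can be loosely sketched: inject errors at known rates, regress the measured annotation precision against the injected rate, and extrapolate the fitted line back to perfect precision; the offset estimates the pre-existing error rate. All numbers below are hypothetical, chosen only to mirror the ~29% headline figure:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Hypothetical data: annotation precision measured after errors were
# artificially added at known rates.
added_rate = [0.00, 0.05, 0.10, 0.20]
precision = [0.71, 0.66, 0.61, 0.51]
m, c = fit_line(added_rate, precision)

# Extrapolate to the (negative) added rate at which precision would reach 1.0;
# its magnitude estimates the pre-existing curated error rate.
baseline_error = -(1.0 - c) / m
print(round(baseline_error, 2))  # 0.29 → ~29% estimated error rate
```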

  10. The assessment of cognitive errors using an observer-rated method.

    Science.gov (United States)

    Drapeau, Martin

    2014-01-01

    Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.

  11. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between the complexity of concepts and their error rates. A measure of lateral complexity, defined as the number of exhibited role types, is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the error rates exhibited by more laterally complex concepts and those of simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
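
    The significance test behind a finding like this can be sketched as a standard two-proportion z-test comparing the error rates of the complex and simple concept groups. The counts below are hypothetical, not the study's data:

```python
import math

def two_proportion_z(err1, n1, err2, n2):
    """Two-sided z-test for a difference between two error proportions."""
    p1, p2 = err1 / n1, err2 / n2
    p = (err1 + err2) / (n1 + n2)                  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value
    return z, p_value

# Hypothetical counts: 40 errors in 100 complex concepts vs 20 in 100 simple ones
z, p = two_proportion_z(40, 100, 20, 100)
print(round(z, 2), p < 0.05)  # z ≈ 3.09, p ≈ 0.002 → significant difference
```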

  12. Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning

    Science.gov (United States)

    Meshkat, Leila; Bryant, Larry

    2014-01-01

    Command File Errors (CFEs) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, composed of the hardware, software and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.

  13. Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.

    Science.gov (United States)

    Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task to physiologically monitor and control the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of the biomass formation, are smart alternatives. For the reconciliation procedure, mass and energy balances are used together with accuracy estimations of measured conversion rates, which were so far arbitrarily chosen and static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: For industrially relevant conditions, hereby the error of the resulting estimated biomass formation rate and specific substrate consumption rate could be decreased by 43 and 64 %, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the required raw signal accuracy to obtain predefined accuracies of soft-sensor estimations. Thereby, appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
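
    The propagation step described above is first-order Gaussian error propagation from raw measurements to a conversion rate. A minimal sketch for a rate formed as flow times concentration difference; the measurement values and accuracies are illustrative, not from the paper:

```python
import math

def propagate_rate_uncertainty(flow, s_flow, dconc, s_dconc):
    """First-order (Gaussian) error propagation for a conversion rate
    r = flow * dconc computed from two independent measurements."""
    r = flow * dconc
    s_r = math.sqrt((dconc * s_flow) ** 2 + (flow * s_dconc) ** 2)
    return r, s_r

# Hypothetical off-gas numbers: flow 10 L/min ± 0.2, CO2 difference 0.5% ± 0.02
r, s = propagate_rate_uncertainty(10.0, 0.2, 0.5, 0.02)
print(r, round(s, 3))  # 5.0 ± 0.224 → a ~4.5% relative error in the rate
```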

  14. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
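A minimal sketch of how a probability model for CMC scores yields an error-rate estimate: if each correlation cell of a known non-match passes all similarity and congruency tests independently with some small probability, the false-positive rate of a declared match is a binomial tail probability. The cell count, pass probability, and CMC threshold below are made-up illustrative values, not the paper's fitted models.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 36 cells compared; each cell of a known non-match
# passes all CMC criteria with probability 0.02; a match is declared at
# six or more congruent matching cells.
false_positive_rate = binom_tail(36, 0.02, 6)
```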

  15. Multi-bits error detection and fast recovery in RISC cores

    International Nuclear Information System (INIS)

    Wang Jing; Yang Xing; Zhang Weigong; Shen Jiao; Qiu Keni; Zhao Yuanfu

    2015-01-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are increasingly frequent due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot balance critical-path delay, area, and power penalties well. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to provide effective soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP detects up to 100% of particle-induced soft errors and recovers from nearly 95% of them; the remaining 5% enter a specific trap. (paper)
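The detection half of a DMR scheme like the one described can be sketched in a few lines: run the same instruction on two pipeline copies and compare the resulting stage registers; a mismatch flags an SEU/MBU so recovery can roll back to the last known-good state. This is a toy software model of the idea, not the SRDP hardware.

```python
def dmr_step(execute, state_a, state_b, instr):
    """One dual-modular-redundancy step: execute the instruction on both
    pipeline copies and compare outputs. A mismatch signals a soft error,
    triggering re-execution from a known-good state."""
    out_a = execute(state_a, instr)
    out_b = execute(state_b, instr)
    error_detected = out_a != out_b
    return out_a, error_detected

# Toy 'pipeline': an accumulator register updated by an add instruction
execute = lambda acc, x: acc + x

# Fault-free step: both copies agree, no error flagged
result, err = dmr_step(execute, 10, 10, 5)

# Inject a single-event upset (bit flip) into one copy's state register:
# the comparison catches the mismatch
_, err_upset = dmr_step(execute, 10, 10 ^ 0b100, 5)
```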

  16. Multi-bits error detection and fast recovery in RISC cores

    Science.gov (United States)

    Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu

    2015-11-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are increasingly frequent due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot balance critical-path delay, area, and power penalties well. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to provide effective soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP detects up to 100% of particle-induced soft errors and recovers from nearly 95% of them; the remaining 5% enter a specific trap.

  17. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error-burst and good-data-gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  18. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20% to 65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and to apply them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed.

  19. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
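The headline numbers above are simple ratios; a sketch of the bookkeeping follows, with made-up per-site counts, since the abstract reports only aggregates and ranges.

```python
# Per-site specimen counts and Gram-stain-vs-culture discrepancies.
# These numbers are illustrative placeholders, not the study's data.
sites = {
    "A": {"specimens": 1864, "discrepant": 93, "reader_errors": 21},
    "B": {"specimens": 976,  "discrepant": 49, "reader_errors": 12},
}

def gram_stain_rates(site):
    """Discrepancy rate (vs. culture) and the share of discrepancies
    attributable to reader error, as computed in the study."""
    discrepancy_rate = site["discrepant"] / site["specimens"]
    reader_error_share = site["reader_errors"] / site["discrepant"]
    return discrepancy_rate, reader_error_share

rate_a, share_a = gram_stain_rates(sites["A"])
```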

  20. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    Science.gov (United States)

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  1. Soft Tissue Strain Rates in Side-Blast Incidents

    Science.gov (United States)

    2014-11-02

    An increase of strain rate is known to cause the stiffening of soft connective tissues (Haut and Haut 1997 [49]; Panjabi et al. 1998 [50]; Crisco et al.).

  2. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
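For a gate on a d-dimensional system whose noise channel has average fidelity F̄, the average error rate discussed above is simply r = 1 − F̄, and a standard identity (general channel theory, not specific to this paper) relates F̄ to the entanglement fidelity F_e:

```latex
r = 1 - \bar{F}, \qquad
\bar{F} = \frac{d\,F_e + 1}{d + 1}
\;\Longleftrightarrow\;
F_e = \frac{(d+1)\,\bar{F} - 1}{d}.
```

For a single qubit (d = 2) with a reported F̄ = 0.999, this gives r = 10⁻³; the paper's point is that the worst-case error rate relevant to fault-tolerance thresholds can be substantially larger than this average figure.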

  3. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic rectangular parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic-ray LET (linear energy transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  4. Error rate performance of narrowband multilevel CPFSK signals

    Science.gov (United States)

    Ekanayake, N.; Fonseka, K. J. P.

    1987-04-01

    The paper presents a relatively simple method for analyzing the effect of IF filtering on the performance of multilevel FM signals. Using this method, the error rate performance of narrowband FM signals is analyzed for three different detection techniques, namely limiter-discriminator detection, differential detection, and coherent detection followed by differential decoding. The symbol error probabilities are computed for a Gaussian IF filter and a second-order Butterworth IF filter. It is shown that coherent detection with differential decoding yields better performance than limiter-discriminator detection and differential detection, whereas the two noncoherent detectors yield approximately identical performance.

  5. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    Science.gov (United States)

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12 for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
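A 2-bit soft-decision front-end like the one described effectively acts as a four-level quantizer: three thresholds split each received sample into strong/weak 0s and 1s, which an LDPC decoder can weight as coarse reliability (LLR-like) values. A sketch with an arbitrary threshold value, not the paper's circuit parameters:

```python
def soft_decision_2bit(sample, vth):
    """2-bit flash quantizer: three thresholds (-vth, 0, +vth) map a
    received sample to one of four soft levels. Outer levels are 'strong'
    decisions, inner levels 'weak' ones; a soft-decision LDPC decoder
    weights them accordingly."""
    if sample < -vth:
        return 0          # strong 0
    elif sample < 0:
        return 1          # weak 0
    elif sample < vth:
        return 2          # weak 1
    else:
        return 3          # strong 1

# Four samples straddling the thresholds, with vth = 0.3 (arbitrary)
levels = [soft_decision_2bit(s, 0.3) for s in (-0.8, -0.1, 0.2, 0.9)]
```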

  6. Comparing sports vision among three groups of soft tennis adolescent athletes: Normal vision, refractive errors with and without correction

    Directory of Open Access Journals (Sweden)

    Shih-Tsun Chang

    2015-01-01

    Background: The effect of correcting static vision on sports vision is still not clear. Aim: To examine whether sports vision measures (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differed among soft tennis adolescent athletes with normal vision (Group A), with refractive error corrected with eyeglasses (Group B), and with uncorrected refractive error (Group C). Setting and Design: A cross-sectional study was conducted. Soft tennis athletes aged 10–13 who had played soft tennis for 2–5 years, were free of ocular disease, and had received no visual training in the past 3 months were recruited. Materials and Methods: DP was measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester; a smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. Statistical Analysis: The chi-square test and the Kruskal–Wallis test were used to compare the data among the three study groups. Results: A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV also displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, or MV among the three study groups. Conclusions: Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether the error was corrected with eyeglasses. DVA, EM, and MV were similar among the three study groups.

  7. State sales tax rates for soft drinks and snacks sold through grocery stores and vending machines, 2007.

    Science.gov (United States)

    Chriqui, Jamie F; Eidson, Shelby S; Bates, Hannalori; Kowalczyk, Shelly; Chaloupka, Frank J

    2008-07-01

    Junk food consumption is associated with rising obesity rates in the United States. While a "junk food" specific tax is a potential public health intervention, a majority of states already impose sales taxes on certain junk food and soft drinks. This study reviews the state sales tax variance for soft drinks and selected snack products sold through grocery stores and vending machines as of January 2007. Sales taxes vary by state, intended retail location (grocery store vs. vending machine), and product. Vended snacks and soft drinks are taxed at a higher rate than grocery items and other food products, generally, indicative of a "disfavored" tax status attributed to vended items. Soft drinks, candy, and gum are taxed at higher rates than are other items examined. Similar tax schemes in other countries and the potential implications of these findings relative to the relationship between price and consumption are discussed.

  8. Double symbol error rates for differential detection of narrow-band FM

    Science.gov (United States)

    Simon, M. K.

    1985-01-01

    This paper evaluates the double symbol error rate (average probability of two consecutive symbol errors) in differentially detected narrow-band FM. Numerical results are presented for the special case of MSK with a Gaussian IF receive filter. It is shown that, not unlike similar results previously obtained for the single error probability of such systems, large inaccuracies in predicted performance can occur when intersymbol interference is ignored.

  9. He flow rate measurements on the engineering model for the Astro-H Soft X-ray Spectrometer dewar

    Science.gov (United States)

    Mitsuishi, I.; Ezoe, Y.; Ishikawa, K.; Ohashi, T.; Fujimoto, R.; Mitsuda, K.; Tsunematsu, S.; Yoshida, S.; Kanao, K.; Murakami, M.; DiPirro, M.; Shirron, P.

    2014-11-01

    The sixth Japanese X-ray astronomy satellite, Astro-H, will be launched in 2015. The Soft X-ray Spectrometer onboard Astro-H is a 6 × 6 X-ray microcalorimeter array that must provide high energy resolution over a mission lifetime of 3 years, which consequently requires that the vapor flow rate out of the helium tank be very small. The dewar uses knife-edge devices to retain the liquid helium under zero gravity and to safely vent the small amount of helium vapor. We measured helium mass flow rates from the helium tank in the engineering model dewar. We tilted the dewar at an angle of 75° so that one side of the porous plug located at the top of the helium tank is in contact with the liquid helium, and the porous plug separates the liquid and vapor helium by the thermomechanical effect. Helium mass flow rates were measured at helium tank temperatures of 1.3, 1.5 and 1.9 K. We confirmed that the resulting mass flow rates agree with component test results within the systematic error, or are lower, and meet all the requirements. The film flow suppression also worked normally. We therefore conclude that the SXS helium vent system performs satisfactorily when integrated into the dewar.

  10. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Fisher proposed the linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop diagnostic logic for discriminating between normal and abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research effort was inferior to the decision-tree logic developed by a medical doctor. After this experience, we discriminated many data sets and found four problems with discriminant analysis. A revised optimal LDF by integer programming (Revised IP-OLDF), based on the minimum number of misclassifications (minimum NM) criterion, resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) for the error rates and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) for error rates and discriminant coefficients.
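The k-fold cross-validation confidence interval can be sketched as a normal approximation across fold-wise error rates. This is a generic sketch of the idea, not the paper's exact procedure; the fold error rates below are arbitrary.

```python
import statistics

def cv_error_ci(fold_error_rates, z=1.96):
    """Approximate 95% CI for the error rate from k-fold cross-validation:
    mean +/- z * sd / sqrt(k), a normal approximation across folds."""
    k = len(fold_error_rates)
    mean = statistics.fmean(fold_error_rates)
    half = z * statistics.stdev(fold_error_rates) / k ** 0.5
    return mean - half, mean + half

# Illustrative fold-wise error rates from a 5-fold cross-validation
lo, hi = cv_error_ci([0.12, 0.15, 0.11, 0.14, 0.13])
```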

  11. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  12. A rate-jump method for characterization of soft tissues using nanoindentation techniques

    KAUST Repository

    Tang, Bin

    2012-01-01

    The biomechanical properties of soft tissues play an important role in their normal physiological and physical function, and may possibly relate to certain diseases. The advent of nanomechanical testing techniques, such as atomic force microscopy (AFM), nano-indentation and optical tweezers, enables the nano/micro-mechanical properties of soft tissues to be investigated, but in spite of the fact that biological tissues are highly viscoelastic, traditional elastic contact theory has been routinely used to analyze experimental data. In this article, a novel rate-jump protocol for treating viscoelasticity in nanomechanical data analysis is described. © 2012 The Royal Society of Chemistry.

  13. The pitfalls of ultrasonography in the evaluation of soft tissue masses

    International Nuclear Information System (INIS)

    Kwok, Henry CK.; Pinto, Clinton H.; Doyle, Anthony J.

    2012-01-01

    Ultrasonography is associated with a high error rate in the evaluation of soft tissue masses. The purposes of this study were to examine the nature of the diagnostic errors and to identify areas in which reporting could be improved. Patients who had soft tissue tumours and received ultrasonography during a 10-year period (1999–2009) were identified from a local tumour registry. The sonographic and pathological diagnoses were categorised as either ‘benign’ or ‘non-benign’. The accuracy of ultrasonography was assessed by correlating the sonographic with the pathological diagnostic categories. Recommendations from radiologists, where offered, were assessed for their appropriateness in the context of the pathological diagnosis. One hundred seventy-five patients received ultrasonography, of whom 60 had ‘non-benign’ lesions and 115 had ‘benign’ lesions. Ultrasonography correctly diagnosed 35 and incorrectly diagnosed seven of the 60 ‘non-benign’ cases, and did not suggest a diagnosis in 18 cases. Most of the diagnostic errors (four out of seven) involved misdiagnosing soft tissue tumours as haematomas. Recommendations for further management were offered by the radiologists in 144 cases, of which 52 had ‘non-benign’ pathology. There were eight ‘non-benign’ cases in which no recommendation was offered and the sonographic diagnosis was either incorrect or unavailable. Ultrasonography lacks accuracy in the evaluation of soft tissue masses. Ongoing education is required to improve awareness of the limitations of its use, and these limitations should be highlighted to referrers, especially those who do not have specific training in this area.

  14. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency.

    Science.gov (United States)

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate to comprehension in second and third grade students with identified reading risk. The study included 920 second graders and 974 third graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and provide grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed.
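The grade-specific cut points reported above (28% for second grade, 14% for third grade) amount to a simple threshold rule, which can be sketched as follows; the function name and counts are illustrative, not the study's instrument.

```python
# Fall-assessment error-rate cut points reported in the study:
# an error rate at or above the cut point flags likely below-average
# passage comprehension for a below-benchmark student in that grade.
CUT_POINTS = {2: 0.28, 3: 0.14}

def flags_comprehension_risk(grade, errors, words_read):
    """True if the oral-reading error rate meets the grade's cut point."""
    error_rate = errors / words_read
    return error_rate >= CUT_POINTS[grade]

# 9 errors in 50 words read = 18% error rate
flag = flags_comprehension_risk(3, 9, 50)
```

The same 18% error rate that flags a third grader would not flag a second grader, whose cut point is 28%.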

  15. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
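As a baseline for the bounds discussed above, the symbol error rate of uncoded M-ary PPM on a memoryless Poisson channel can be estimated by brute-force simulation; this is the kind of time-consuming simulation the derived equations replace. The parameters `ns` and `nb` below are assumed mean signal and background photon counts per slot, chosen only for illustration.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for Poisson sampling; fine for small per-slot means."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def ppm_symbol_error_rate(M, ns, nb, trials, rng):
    """Monte Carlo SER for uncoded M-ary PPM on a Poisson channel.
    Slot 0 carries the pulse (mean count ns + nb); the other M - 1 slots
    see background only (mean nb). The ML receiver picks the slot with
    the largest count; ties resolve in favour of slot 0 (slightly
    optimistic)."""
    errors = 0
    for _ in range(trials):
        counts = [poisson(ns + nb, rng)] + [poisson(nb, rng) for _ in range(M - 1)]
        if max(range(M), key=counts.__getitem__) != 0:
            errors += 1
    return errors / trials

rng = random.Random(1)
ser = ppm_symbol_error_rate(16, 5.0, 0.1, 2000, rng)
```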

  16. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting

    International Nuclear Information System (INIS)

    Strahan, Rodney H.; Schneider-Kolsky, Michal E.

    2010-01-01

    Purpose: Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified, and productivity for VR-generated versus transcriptionist-generated reports in MRI. Methods: Fifty MRI reports generated by VR and 50 finalised MRI reports generated by the transcriptionist were sampled retrospectively for each of two radiologists. The 200 reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average number of MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Results: Forty-two percent and 30% of the finalised VR reports for the two radiologists contained errors, compared with only 6% and 8% of the transcriptionist-generated reports. The average TAT for VR was 0 h; for the transcriptionist reports the TATs were 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 using the transcriptionist, representing a 55% increase in productivity. Conclusion: Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR.

  17. Calculating Error Percentage in Using Water Phantom Instead of Soft Tissue Concerning 103Pd Brachytherapy Source Distribution via Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    OL Ahmadi

    2015-12-01

    Full Text Available Introduction: 103Pd is a low energy source, which is used in brachytherapy. According to the standards of American Association of Physicists in Medicine, dosimetric parameters determination of brachytherapy sources before the clinical application was considered significantly important. Therfore, the present study aimed to compare the dosimetric parameters of the target source using the water phantom and soft tissue. Methods: According to the TG-43U1 protocol, the dosimetric parameters were compared around the 103Pd source in regard with water phantom with the density of 0.998 gr/cm3 and the soft tissue with the density of 1.04 gr/cm3 on the longitudinal and transverse axes using the MCNP4C code and the relative differences were compared between the both conditions. Results: The simulation results indicated that the dosimetric parameters depended on the radial dose function and the anisotropy function in the application of the water phantom instead of soft tissue up to a distance of 1.5 cm,  between which a good consistency was observed. With increasing the distance, the difference increased, so as within 6 cm from the source, this difference increased to 4%. Conclusions: The results of  the soft tissue phantom compared with those of the water phantom indicated 4% relative difference at a distance of 6 cm from the source. Therefore, the results of the water phantom with a maximum error of 4% can be used in practical applications instead of soft tissue. Moreover, the amount of differences obtained in each distance regarding using the soft tissue phantom could be corrected.

  18. Process error rates in general research applications to the Human ...

    African Journals Online (AJOL)

    Objective. To examine process error rates in applications for ethics clearance of health research. Methods. Minutes of 586 general research applications made to a human health research ethics committee (HREC) from April 2008 to March 2009 were examined. Rates of approval were calculated and reasons for requiring ...

  19. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    Science.gov (United States)

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  20. Error Recovery Properties and Soft Decoding of Quasi-Arithmetic Codes

    Directory of Open Access Journals (Sweden)

    Christine Guillemot

    2007-08-01

    Full Text Available This paper first introduces a new set of aggregated state models for soft-input decoding of quasi-arithmetic (QA) codes with a termination constraint. The decoding complexity with these models is linear in the sequence length. The aggregation parameter controls the tradeoff between decoding performance and complexity. It is shown that close-to-optimal decoding performance can be obtained with low values of the aggregation parameter, that is, with a complexity which is significantly reduced with respect to optimal QA bit/symbol models. The choice of the aggregation parameter depends on the synchronization recovery properties of the QA codes. This paper thus describes a method to estimate the probability mass function (PMF) of the gain/loss of symbols following a single bit error (i.e., of the difference between the number of encoded and decoded symbols). The entropy of the gain/loss turns out to be the average amount of information conveyed by a length constraint on both the optimal and aggregated state models. This quantity allows us to choose the value of the aggregation parameter that will lead to close-to-optimal decoding performance. It is shown that the optimum position for the length constraint is not the last time instant of the decoding process. This observation leads to the introduction of a new technique for robust decoding of QA codes with redundancy which turns out to outperform techniques based on the concept of forbidden symbol.

  1. The nearest neighbor and the bayes error rates.

    Science.gov (United States)

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities Ek,l+1 ≤ E*(λ) ≤ Ek,l ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions Ek,l and dE*(λ) are equal.
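The flavour of such results can be checked numerically. The sketch below uses the plain 1-NN rule and the classical Cover-Hart bound E* ≤ E1NN ≤ 2E*(1 − E*), not the (k, l) rule with rejects analysed in the paper; the two-Gaussian mixture is an arbitrary example of my choosing:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Two equiprobable 1-D Gaussian classes with means -1 and +1, unit variance.
bayes_error = phi(-1.0)  # optimal (Bayes) error rate E*, about 0.159

def one_nn_error(n_train=4000, n_test=2000):
    """Empirical 1-NN test error on fresh samples from the mixture."""
    y_tr = rng.integers(0, 2, n_train)
    x_tr = rng.normal(2.0 * y_tr - 1.0, 1.0)
    y_te = rng.integers(0, 2, n_test)
    x_te = rng.normal(2.0 * y_te - 1.0, 1.0)
    # 1-D nearest neighbour: label of the closest training point
    nearest = np.abs(x_te[:, None] - x_tr[None, :]).argmin(axis=1)
    return float((y_tr[nearest] != y_te).mean())

print(bayes_error, one_nn_error())
```

The empirical 1-NN error lands between E* and 2E*(1 − E*), the interval that bounds of the kind derived in the paper sharpen.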

  2. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.
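A minimal sketch of the central idea, correcting a network's forecast with a moving average of its own recent errors, is given below. This is an illustrative toy, not the authors' model: the data are synthetic, the RBF output weights are fit by least squares rather than a genetic algorithm, and the centers, width and window length are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_design(x, centers, width):
    """Gaussian radial basis expansion of a 1-D input."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width**2))

# Synthetic series with a slow component the x_t -> x_{t+1} map cannot
# fully capture, so the residuals carry exploitable structure.
n = 400
series = np.zeros(n)
for k in range(1, n):
    series[k] = 0.9 * series[k - 1] + 0.2 * np.sin(k / 8.0) + rng.normal(0, 0.05)

x, y = series[:-1], series[1:]              # one-step-ahead pairs
centers = np.linspace(x.min(), x.max(), 10)
H = rbf_design(x, centers, width=0.5)
w, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares

pred = H @ w
resid = y - pred                            # the "error part" of the network

# Moving average of past residuals, added to the next forecast.
window = 5
kernel = np.ones(window) / window
ma = np.convolve(resid, kernel)[: len(resid)]  # ma[k] averages resid[k-4..k]
enhanced = pred[1:] + ma[:-1]                  # uses only errors up to k-1

rmse_plain = float(np.sqrt(np.mean(resid[1:] ** 2)))
rmse_hybrid = float(np.sqrt(np.mean((y[1:] - enhanced) ** 2)))
print(rmse_plain, rmse_hybrid)
```

Because the residual series here is autocorrelated, the lagged moving-average correction reduces the one-step forecast error, which is the effect the hybrid architecture relies on.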

  3. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting.

    Science.gov (United States)

    Strahan, Rodney H; Schneider-Kolsky, Michal E

    2010-10-01

    Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Fifty MRI reports generated by VR and 50 finalized MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Forty-two percent and 30% of the finalized VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR. © 2010 The Authors. Journal of Medical Imaging and Radiation Oncology © 2010 The Royal Australian and New Zealand College of Radiologists.

  4. A framework to assess diagnosis error probabilities in the advanced MCR

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Kim, Jong Hyun [Chosun University, Gwangju (Korea, Republic of); Jang, Inseok; Park, Jinkyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The Institute of Nuclear Power Operations (INPO)'s operating experience database revealed that about 48% of the total events in the world's NPPs over two years (2010-2011) happened due to human errors. The purpose of human reliability analysis (HRA) methods is to evaluate the potential for, and mechanisms of, human errors that may affect plant safety. Accordingly, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. Many researchers have asserted that procedures, alarms, and displays are critical factors affecting operators' generic activities, especially diagnosis activities. However, none of these HRA methods was explicitly designed to deal with digital systems, and SCHEME (Soft Control Human error Evaluation MEthod) considers only the probability of soft control execution errors in the advanced MCR. The necessity of developing HRA methods for the various conditions of NPPs has thus been raised. In this research, a framework to estimate diagnosis error probabilities in the advanced MCR is suggested. The assessment framework consists of three steps. The first step is to investigate diagnosis errors and calculate their probabilities. The second step is to quantitatively estimate the weightings of PSFs in the advanced MCR. The third step is to suggest an updated TRC model to assess the nominal diagnosis error probabilities. Additionally, the proposed framework was applied using full-scope simulation: experiments conducted in a domestic full-scope simulator and in HAMMLAB were used as data sources. In total, eighteen tasks were analyzed and twenty-three crews participated.

  5. Soft errors in 10-nm-scale magnetic tunnel junctions exposed to high-energy heavy-ion radiation

    Science.gov (United States)

    Kobayashi, Daisuke; Hirose, Kazuyuki; Makino, Takahiro; Onoda, Shinobu; Ohshima, Takeshi; Ikeda, Shoji; Sato, Hideo; Inocencio Enobio, Eli Christopher; Endoh, Tetsuo; Ohno, Hideo

    2017-08-01

    The influences of various types of high-energy heavy-ion radiation on 10-nm-scale CoFeB-MgO magnetic tunnel junctions with a perpendicular easy axis have been investigated. In addition to possible latent damage, which has already been pointed out in previous studies, high-energy heavy-ion bombardments demonstrated that the magnetic tunnel junctions may exhibit clear flips between their high- and low-resistance states designed for a digital bit 1 or 0. It was also demonstrated that flipped magnetic tunnel junctions still may provide proper memory functions such as read, write, and hold capabilities. These two findings proved that high-energy heavy ions can produce recoverable bit flips in magnetic tunnel junctions, i.e., soft errors. Data analyses suggested that the resistance flips stem from magnetization reversals of the ferromagnetic layers and that each of them is caused by a single strike of heavy ions. It was concurrently found that an ion strike does not always result in a flip, suggesting a stochastic process behind the flip. Experimental data also showed that the flip phenomenon is dependent on the device and heavy-ion characteristics. Among them, the diameter of the device and the linear energy transfer of the heavy ions were revealed as the key parameters. From their dependences, the physical mechanism behind the flip was discussed. It is likely that a 10-nm-scale ferromagnetic disk loses its magnetization due to a local temperature increase induced by a single strike of heavy ions; this demagnetization is followed by a cooling period associated with a possible stochastic recovery process. On the basis of this hypothesis, a simple analytical model was developed, and it was found that the model accounts for the results reasonably well. This model also predicted that magnetic tunnel junctions provide sufficiently high soft-error reliability for use in space, highlighting their advantage over their counterpart conventional semiconductor memories.

  6. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    Science.gov (United States)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be much more easily evaluated than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  7. Detecting Soft Errors in Stencil based Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
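A toy version of the approach can make it concrete. In the sketch below (assumed details of my own: a 1-D three-point stencil stands in for the real kernels, ordinary least squares plays the role of the trained regression model, and the detection threshold is arbitrary), a linear model learned from clean runs predicts each updated point from its input neighborhood, and large residuals flag a corrupted output:

```python
import numpy as np

rng = np.random.default_rng(3)

def stencil_step(u):
    """Simple 1-D 3-point averaging stencil (toy stand-in for the target code)."""
    v = u.copy()
    v[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
    return v

# Training data: input neighborhoods -> value after one stencil step.
u = rng.normal(size=1000)
v = stencil_step(u)
X = np.stack([u[:-2], u[1:-1], u[2:]], axis=1)
coef, *_ = np.linalg.lstsq(X, v[1:-1], rcond=None)  # learns ~[0.25, 0.5, 0.25]

# Detection: compare model prediction with a (possibly corrupted) output.
u2 = rng.normal(size=1000)
v2 = stencil_step(u2)
v2[500] += 50.0                         # injected fault, e.g. a high-order bit flip
pred = np.stack([u2[:-2], u2[1:-1], u2[2:]], axis=1) @ coef
flags = np.abs(v2[1:-1] - pred) > 1.0   # threshold is an assumption
print(np.flatnonzero(flags) + 1)        # → [500]
```

Only the corrupted grid point exceeds the threshold; the same residual-based test generalizes to learned (rather than exact) models at the cost of a nonzero false-alarm rate.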

  8. Analytical expression for the bit error rate of cascaded all-optical regenerators

    DEFF Research Database (Denmark)

    Mørk, Jesper; Öhman, Filip; Bischoff, S.

    2003-01-01

    We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.
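The noise-vs-reshaping interplay can be illustrated with a crude Monte Carlo. This is my own toy model, not the paper's analytical expression: each span adds Gaussian amplifier noise, and the 2R regenerator is approximated by a normalised tanh transfer characteristic whose steepness sets the degree of reshaping:

```python
import numpy as np

rng = np.random.default_rng(4)

def cascade_ber(n_spans, gamma, sigma=0.35, n_bits=200_000):
    """BER after n_spans noisy spans, each followed by a 2R regenerator
    with reshaping strength gamma (gamma -> 0 degenerates to a linear,
    unregenerated link in which noise accumulates freely)."""
    bits = rng.integers(0, 2, n_bits)
    x = 2.0 * bits - 1.0                         # NRZ levels -1 / +1
    for _ in range(n_spans):
        x = x + rng.normal(0.0, sigma, n_bits)   # amplifier noise per span
        x = np.tanh(gamma * x) / np.tanh(gamma)  # nonlinear reshaping
    return float(((x > 0).astype(int) != bits).mean())

print(cascade_ber(10, gamma=5.0), cascade_ber(10, gamma=0.1))
```

With strong reshaping the per-span noise is squashed back toward the signal levels and errors grow roughly linearly with span count, while the quasi-linear link lets the noise variance accumulate, which is the trade-off the paper's analytical expression quantifies.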

  9. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    Science.gov (United States)

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  10. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  11. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    International Nuclear Information System (INIS)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-01-01

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between per-beam, planar IMRT QA Gamma passing rates and clinically relevant, anatomy-based dose errors.

  12. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here, case-specific probabilities of undetected errors are needed.

  13. Safe and effective error rate monitors for SS7 signaling links

    Science.gov (United States)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. A SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIM's are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIM's can be differentiated from SUERM by the fact that EIM's monitor errors over an interval while SUERM's count errored messages. EIM's offer several advantages over SUERM's, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
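The recursive-filter idea behind the EIM can be sketched as a first-order leaky integrator over per-interval error counts; the coefficient, gain and threshold below are illustrative placeholders, not the engineered SS7 values described in the paper:

```python
def error_interval_monitor(error_counts, a=0.9, gain=1.0, threshold=8.0):
    """Estimate the changeover transient (queue build-up) from per-interval
    error counts with a first-order recursive filter; return the index of
    the interval at which the link would be taken out of service, or None
    if the estimate never crosses the threshold."""
    estimate = 0.0
    for i, errors in enumerate(error_counts):
        estimate = a * estimate + gain * errors  # leaky accumulation
        if estimate > threshold:
            return i  # initiate changeover at this interval
    return None

# Sporadic errors decay away; a sustained burst trips the monitor.
print(error_interval_monitor([1, 0, 0, 0] * 10))   # → None
print(error_interval_monitor([0, 0, 5, 5, 5]))     # → 3
```

The decay factor makes the monitor tolerant of short-term phenomena (the "effective" criterion), while the threshold bounds the tolerated transient (the "safe" criterion).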

  14. Progressive and Error-Resilient Transmission Strategies for VLC Encoded Signals over Noisy Channels

    Directory of Open Access Journals (Sweden)

    Guillemot Christine

    2006-01-01

    Full Text Available This paper addresses the issue of robust and progressive transmission of signals (e.g., images, video) encoded with variable length codes (VLCs) over error-prone channels. This paper first describes bitstream construction methods offering good properties in terms of error resilience and progressivity. In contrast with related algorithms described in the literature, all proposed methods have a linear complexity as the sequence length increases. The applicability of soft-input soft-output (SISO) and turbo decoding principles to the resulting bitstream structures is investigated. In addition to error resilience, the amenability of the bitstream construction methods to progressive decoding is considered. The problem of code design for achieving good performance in terms of error resilience and progressive decoding with these transmission strategies is then addressed. The VLC code has to be such that the symbol energy is mainly concentrated on the first bits of the symbol representation (i.e., on the first transitions of the corresponding codetree). Simulation results reveal high performance in terms of symbol error rate (SER) and mean-square reconstruction error (MSE). These error-resilience and progressivity properties are obtained without any penalty in compression efficiency. Codes with such properties are of strong interest for the binarization of M-ary sources in state-of-the-art image and video coding systems making use of, for example, the EBCOT or CABAC algorithms. A prior statistical analysis of the signal allows the construction of the appropriate binarization code.

  15. Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm

    Directory of Open Access Journals (Sweden)

    F Hermens

    2014-08-01

    Full Text Available In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly examined performance by measuring offset discrimination thresholds as a measure of performance, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task, similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance, but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.

  16. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    Full Text Available This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.

  17. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Science.gov (United States)

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process. PMID:26977450

  18. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over a general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  19. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.
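The core statistic here, Pearson's r between per-beam passing rates and an anatomy dose metric, is straightforward to compute. The sketch below uses synthetic stand-in data (96 plans with invented values, matching only the sample size above) in which the two quantities are generated independently, so r comes out near zero, mimicking the weak correlations the study reports:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins for 96 error-induced plans: per-beam Gamma passing rates (%)
# and an anatomy-based dose-error metric (%), generated independently.
passing_rate = rng.uniform(88.0, 100.0, 96)
dose_error = rng.normal(2.0, 1.0, 96)

r = float(np.corrcoef(passing_rate, dose_error)[0, 1])
print(round(r, 3))
```

A small |r| like this is exactly the situation in which a high passing rate carries almost no information about the size of the clinically relevant dose error.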

  20. Aniseikonia quantification: error rate of rule of thumb estimation.

    Science.gov (United States)

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (or 13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  1. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  2. Dimensioning of multiservice links taking account of soft blocking

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk; Stepanov, S.N.; Kostrov, A.V.

    2006-01-01

    of a multiservice link taking into account the possibility of soft blocking. An approximate algorithm for estimation of main performance measures is constructed. The error of estimation is numerically studied for different types of soft blocking. The optimal procedure of dimensioning is suggested....

  3. The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded

    DEFF Research Database (Denmark)

    Hansen, Merete Kjær; Kulahci, Murat

    the type I error rate is greater than the nominal α of 0.05. Closed-form expressions based on scaled F-distributions using the Welch-Satterthwaite approximation are provided to show how the type I error rate is affected. With this study we hope to motivate researchers to be more precise regarding..., and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51 and for all combinations...
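    The inflation this record describes is easy to reproduce by simulation. The sketch below is not the paper's setup: the group sizes and variance components are invented, and a large-sample z-test stands in for the t-test, but it shows how testing at the cell level while ignoring the animal-level nesting inflates the type I error rate well above the nominal 0.05.

```python
import random
from statistics import NormalDist, mean, stdev

def naive_type1_rate(animals=3, cells=50, sd_animal=1.0, sd_cell=1.0,
                     alpha=0.05, sims=500, seed=3):
    """Empirical type I error rate of a two-group comparison where cells
    are nested within animals, there is no true group difference, and the
    z-test is (wrongly) applied to individual cells as if independent."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(sims):
        groups = []
        for _g in range(2):
            obs = []
            for _a in range(animals):
                shift = rng.gauss(0.0, sd_animal)  # animal random effect
                obs += [shift + rng.gauss(0.0, sd_cell) for _ in range(cells)]
            groups.append(obs)
        a, b = groups
        se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
        if abs((mean(a) - mean(b)) / se) > z_crit:
            rejections += 1
    return rejections / sims

print(naive_type1_rate())
```

    With only a handful of animals per group, the naive standard error badly understates the true variability of the group means, so the rejection rate climbs far above 0.05, consistent with the rates of up to 0.51 reported above.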

  4. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    Energy Technology Data Exchange (ETDEWEB)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-08-15

    The operation environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces based on computer technologies. MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are particularly important because operating actions in NPP Advanced MCRs are performed through them. Using soft controls such as a mouse and touch screens, operators select a specific screen, then choose the controller, and finally manipulate the given devices. Because the interfaces of soft controls differ from those of hardwired conventional controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework must be considered in the HRA for Advanced MCRs. In other words, new human error modes must be considered for interface management tasks such as navigation and icon (device) selection on monitors, and a new HRA framework that takes these newly generated error modes into account is needed. In this paper, a conceptual framework of an HRA method for evaluating soft control execution human error in Advanced MCRs is suggested by analyzing soft control tasks.

  5. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    International Nuclear Information System (INIS)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Won Dea

    2014-01-01

    The operation environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces based on computer technologies. MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are particularly important because operating actions in NPP Advanced MCRs are performed through them. Using soft controls such as a mouse and touch screens, operators select a specific screen, then choose the controller, and finally manipulate the given devices. Because the interfaces of soft controls differ from those of hardwired conventional controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework must be considered in the HRA for Advanced MCRs. In other words, new human error modes must be considered for interface management tasks such as navigation and icon (device) selection on monitors, and a new HRA framework that takes these newly generated error modes into account is needed. In this paper, a conceptual framework of an HRA method for evaluating soft control execution human error in Advanced MCRs is suggested by analyzing soft control tasks.

  6. Reproducibility of the pink esthetic score--rating soft tissue esthetics around single-implant restorations with regard to dental observer specialization.

    Science.gov (United States)

    Gehrke, Peter; Lobert, Markus; Dhom, Günter

    2008-01-01

    The pink esthetic score (PES) evaluates the esthetic outcome of soft tissue around implant-supported single crowns in the anterior zone by awarding seven points for the mesial and distal papilla, soft-tissue level, soft-tissue contour, soft-tissue color, soft-tissue texture, and alveolar process deficiency. The aim of this study was to measure the reproducibility of the PES and assess the influence exerted by the examiner's degree of dental specialization. Fifteen examiners (three general dentists, three oral maxillofacial surgeons, three orthodontists, three postgraduate students in implant dentistry, and three lay people) applied the PES to 30 implant-supported single restorations twice at an interval of 4 weeks. Using a 0-1-2 scoring system, 0 being the lowest, 2 being the highest value, the maximum achievable PES was 14. At the second assessment, the photographs were scored in reverse order. Differences between the two assessments were evaluated with the Spearman's rank correlation coefficient (R). The Wilcoxon signed-rank test was used for comparisons of differences between the ratings. A significance level of p < .05 was used. ... esthetic restorations showed the smallest deviations. Orthodontists were found to have assigned significantly poorer ratings than any other group. The assessments of postgraduate students and laypersons were the most favorable. The PES allows for a more objective appraisal of the esthetic short- and long-term results of various surgical and prosthetic implant procedures. It reproducibly evaluates the peri-implant soft tissue around single-implant restorations and results in good intra-examiner agreement. However, an effect of observer specialization on rating soft-tissue esthetics can be shown.

  7. Invariance of the bit error rate in the ancilla-assisted homodyne detection

    International Nuclear Information System (INIS)

    Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide

    2010-01-01

    We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.

  8. Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.

    Science.gov (United States)

    Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E

    2011-01-01

    Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. Increasing the error rate during

  9. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
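    The r-power concept above (the probability of rejecting at least r of m false nulls) can also be estimated by the Monte Carlo strategy the abstract compares against. The sketch below uses a one-sided z-test per endpoint with a Bonferroni adjustment as an illustrative single-step procedure; the effect size and sample size are invented, and this is not the paper's closed-form approach.

```python
import random
from statistics import NormalDist

def r_power_mc(m, r, effect, n, alpha=0.05, sims=4000, seed=1):
    """Monte Carlo estimate of the r-power: the probability of rejecting
    at least r of m false null hypotheses under a Bonferroni single-step
    procedure of one-sided one-sample z-tests (illustrative setup)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / m)
    hits = 0
    for _ in range(sims):
        # Under the alternative, each z-statistic is shifted by effect*sqrt(n).
        rejected = sum(rng.gauss(effect * n ** 0.5, 1.0) > z_crit
                       for _ in range(m))
        if rejected >= r:
            hits += 1
    return hits / sims

print(r_power_mc(m=4, r=2, effect=0.3, n=50))
```

    Sample size determination then amounts to increasing n until the estimated r-power reaches the desired level, which is what the closed-form formulas in the paper avoid having to simulate.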

  10. Symbol error rate performance evaluation of the LM37 multimegabit telemetry modulator-demodulator unit

    Science.gov (United States)

    Malek, H.

    1981-01-01

    The LM37 multimegabit telemetry modulator-demodulator unit was tested for evaluation of its symbol error rate (SER) performance. Using an automated test setup, the SER tests were carried out at various symbol rates and signal-to-noise ratios (SNR), ranging from +10 to -10 dB. With the aid of a specially designed error detector and a stabilized signal and noise summation unit, measurement of the SER at low SNR was possible. The results of the tests show that at symbol rates below 20 megasymbols per second (MS/s) and input SNR above -6 dB, the SER performance of the modem is within the specified 0.65 to 1.5 dB of the theoretical error curve. At symbol rates above 20 MS/s, the specification is met at SNRs down to -2 dB. The results of the SER tests are presented with the description of the test setup and the measurement procedure.
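    Comparing a measured SER against "the theoretical error curve," as this test program does, requires computing that reference curve. A minimal sketch for coherent BPSK (an assumption; the report does not state the modem's modulation) uses the Gaussian Q-function:

```python
import math
from statistics import NormalDist

def q_func(x):
    """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
    return 1.0 - NormalDist().cdf(x)

def bpsk_ser(snr_db):
    """Theoretical symbol error rate of coherent BPSK: Q(sqrt(2*Es/N0))."""
    snr = 10 ** (snr_db / 10)
    return q_func(math.sqrt(2 * snr))

for snr_db in (-6, -2, 0, 4, 8):
    print(f"{snr_db:+d} dB -> SER = {bpsk_ser(snr_db):.3e}")
```

    The "0.65 to 1.5 dB of the theoretical error curve" specification is then a horizontal offset: the measured SNR needed to reach a given SER may exceed the theoretical SNR by at most that margin.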

  11. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    Science.gov (United States)

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  12. Modelling hard and soft states of Cygnus X-1 with propagating mass accretion rate fluctuations

    Science.gov (United States)

    Rapisarda, S.; Ingram, A.; van der Klis, M.

    2017-12-01

    We present a timing analysis of three Rossi X-ray Timing Explorer observations of the black hole binary Cygnus X-1 with the propagating mass accretion rate fluctuations model PROPFLUC. The model simultaneously predicts power spectra, time lags and coherence of the variability as a function of energy. The observations cover the soft and hard states of the source, and the transition between the two. We find good agreement between model predictions and data in the hard and soft states. Our analysis suggests that in the soft state the fluctuations propagate in an optically thin hot flow extending up to large radii above and below a stable optically thick disc. In the hard state, our results are consistent with a truncated disc geometry, where the hot flow extends radially inside the inner radius of the disc. In the transition from soft to hard state, the characteristics of the rapid variability are too complex to be successfully described with PROPFLUC. The surface density profile of the hot flow predicted by our model and the lack of quasi-periodic oscillations in the soft and hard states suggest that the spin of the black hole is aligned with the inner accretion disc and therefore probably with the rotational axis of the binary system.

  13. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  14. Error rates and resource overheads of encoded three-qubit gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.
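    For intuition about why logical error rates and overheads trade off against code distance, a standard back-of-the-envelope scaling (a generic surface-code heuristic, not the pieceable-fault-tolerance estimate computed in the paper) is:

```python
def surface_code_logical_rate(p, d, p_th=0.01):
    """Rough logical error rate of a distance-d surface code:
    p_L ~ 0.1 * (p / p_th) ** ((d + 1) // 2), valid only for p < p_th.
    The prefactor 0.1 and threshold p_th = 1% are illustrative values."""
    return 0.1 * (p / p_th) ** ((d + 1) // 2)

# Below threshold, each increase in distance suppresses errors geometrically.
for d in (3, 5, 7):
    print(d, surface_code_logical_rate(1e-3, d))
```

    Comparisons like the one in this paper weigh such logical error rates against the qubit and gate counts each scheme (pieceable fault tolerance vs. magic states) requires.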

  15. Effects of soft control in the nuclear power plants emergency operation condition

    International Nuclear Information System (INIS)

    Al Harbi, Mohamed Ali Salem; Kim, Ar Ryum; Jang, Inseok; Seong, Poong Hyun; Shirouzu, Shigenori; Katayama, Sotetsu; Kang, Hyun Gook

    2013-01-01

    Highlights: ► We investigated the effect of a touch screen, known as the ESCM, which is a soft control, on emergency operation of nuclear plants. ► Experiments clearly show the occurrence of more human errors in ESCM task groups. ► Physiological measures (ECG, EEG, nose temperature) were analyzed. ► Higher stress levels were consistently observed in ESCM task groups. - Abstract: In addition to the evolution from buttons and switches to computer-based consoles, the operator may interact with the plant via soft controls. Soft controls are input interfaces connected with control and display systems that are mediated by software rather than by direct physical connections. However, the use of soft controls may introduce unfamiliar operational difficulties and new opportunities for human error. This study investigates the effect of the new interface on human errors in emergency operation. Based on the emergency operation procedure, the human error modes were identified using the systematic human error reduction and prediction approach. Experiments with 21 graduate students in main control room mockups in the nuclear engineering departments of universities in the UAE and Korea were conducted to observe operator behavior resulting from the use of the new input interface (Emergency safety feature-component control system Soft Control Module, ESCM). Physiological parameters such as electroencephalogram, electrocardiogram and skin temperature were measured to assess the stress level of the subjects. The experimental results showed more human errors during ESCM tasks than non-ESCM tasks. The analysis of the physiological measurements also demonstrated that subjects were under higher stress during the ESCM tasks than during non-ESCM tasks. It is notable that this study was performed with graduate students without consideration of their expertise levels. Different behaviors of the novice and the expert groups were also discussed.

  16. Performance limitations of imaging microscopes for soft x-ray applications

    International Nuclear Information System (INIS)

    Lewotsky, K.L.; Kotha, A.; Harvey, J.E.

    1993-01-01

    Recent advances in the fabrication of nanometer-scale multilayer structures have yielded high-reflectance mirrors operating at near-normal incidence for soft X-ray wavelengths. These developments have stimulated renewed interest in high-resolution soft X-ray microscopy. The design of a Schwarzschild imaging microscope for soft X-ray applications has been reported by Hoover and Shealy. Based upon a geometrical ray-trace analysis of the residual design errors, diffraction-limited performance at a wavelength of 100 angstrom was predicted over an object size (diameter) of 0.4 mm. In this paper the authors expand upon the previous analysis of the Schwarzschild X-ray microscope design by determining the total image degradation due to diffraction, geometrical aberrations, alignment errors, and realistic assumptions concerning optical fabrication errors. NASA's Optical Surface Analysis Code (OSAC) is used to model the image degradation effects of residual surface irregularities over the entire range of relevant spatial frequencies. This includes small angle scattering effects due to mid spatial frequency surface errors falling between the traditional figure and finish specifications. Performance predictions are presented parametrically to provide some insight into the optical fabrication and alignment tolerances necessary to meet a particular image quality requirement

  17. Bit Error Rate Minimizing Channel Shortening Equalizers for Single Carrier Cyclic Prefixed Systems

    National Research Council Canada - National Science Library

    Martin, Richard K; Vanbleu, Koen; Ysebaert, Geert

    2007-01-01

    .... Previous work on channel shortening has largely been in the context of digital subscriber lines, a wireline system that allows bit allocation, thus it has focused on maximizing the bit rate for a given bit error rate (BER)...

  18. Symbol and Bit Error Rates Analysis of Hybrid PIM-CDMA

    Directory of Open Access Journals (Sweden)

    Ghassemlooy Z

    2005-01-01

    Full Text Available A hybrid pulse interval modulation code-division multiple-access (hPIM-CDMA) scheme employing the strict optical orthogonal code (SOCC) with unity auto- and cross-correlation constraints for indoor optical wireless communications is proposed. In this paper, we analyse the symbol error rate (SER) and bit error rate (BER) of hPIM-CDMA. In the analysis, we consider multiple access interference (MAI), self-interference, and the hybrid nature of the hPIM-CDMA signal detection, which is based on the matched filter (MF). It is shown that the BER/SER performance can only be evaluated if the bit resolution conforms to the condition set by the number of consecutive false alarm pulses that might occur and be detected, so that one symbol being divided into two is unlikely to occur. Otherwise, the probability of SER and BER becomes extremely high and indeterminable. We show that for a large number of users, the BER improves when increasing the code weight. The results presented are compared with other modulation schemes.

  19. Soft biometrics in conjunction with optics based biohashing

    Science.gov (United States)

    Saini, Nirmala; Sinha, Aloka

    2011-02-01

    Biometric systems are gaining importance because of increased reliability for authentication and identification. A biometric recognition technique has been proposed earlier, in which biohashing code has been generated by using a joint transform correlator. The main drawback of the base biohashing method is the low performance of the technique when an "impostor" steals the pseudo-random numbers of the genuine and tries to authenticate as genuine. In the proposed technique, soft biometrics of the same person has been used to improve the discrimination between the genuine and the impostor populations. The soft biometrics are those characteristics that provide some information about the individual, but lack the distinctiveness and permanence to sufficiently differentiate between any two individuals. In the enrolment process, biohash code of the target face images has been integrated with the different soft biometrics of the same person. The obtained code has been stored for verification. In the verification process, biohash code of the face image to be verified is again diffused with the soft biometric of the person. The obtained code is matched with the stored code of the target. The receiving operating characteristic (ROC) curve and the equal error rate (EER) have been used to evaluate the performance of the technique. A detailed study has been carried out to find out the optimum values of the weighting factor for the diffusion process.
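    The equal error rate (EER) used to evaluate the technique above is the operating point where the false accept rate equals the false reject rate. A minimal sketch of estimating it from match scores (toy scores below are invented; higher score means a better match):

```python
def equal_error_rate(genuine, impostor):
    """Approximate EER: sweep thresholds over all observed scores and
    return the point where the false accept rate (impostor scores at or
    above threshold) is closest to the false reject rate (genuine scores
    below threshold)."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

print(equal_error_rate([0.9, 0.8, 0.85, 0.6], [0.2, 0.3, 0.4, 0.7]))
```

    Plotting FAR against FRR (or 1 - FRR) over the same threshold sweep yields the ROC curve referred to in the abstract; a lower EER indicates better separation of the genuine and impostor populations.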

  20. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS

    International Nuclear Information System (INIS)

    Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.

    1982-06-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors

  1. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
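    The resubstitution estimator studied above is simply the apparent error rate: train the discriminant on the data, then classify the same data. A toy sketch, reducing LDA with a common identity covariance and equal priors to a nearest-mean rule (an illustrative special case, not the paper's smoothed estimator):

```python
def lda_resubstitution_error(class0, class1):
    """Resubstitution (apparent) error rate of a linear discriminant,
    assuming a common identity covariance so the rule reduces to
    assigning each point to the nearer class mean."""
    def centroid(xs):
        return [sum(col) / len(xs) for col in zip(*xs)]
    m0, m1 = centroid(class0), centroid(class1)

    def classify(x):
        d0 = sum((a - b) ** 2 for a, b in zip(x, m0))
        d1 = sum((a - b) ** 2 for a, b in zip(x, m1))
        return 0 if d0 <= d1 else 1

    errors = sum(classify(x) != 0 for x in class0)
    errors += sum(classify(x) != 1 for x in class1)
    return errors / (len(class0) + len(class1))
```

    Because the same points are used for fitting and evaluation, this estimate is optimistically biased, which is precisely why smoothed variants and the bias corrections analyzed in the paper are of interest.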

  2. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  3. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation at a 95% confidence level. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
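    The basic procedure the abstract describes, estimating an error rate by simulation and attaching a 95% confidence interval that narrows with the number of runs, can be sketched as follows. This simplified version uses BPSK over AWGN only (no Nakagami-m fading or phase error, which the paper models) and a normal-approximation interval:

```python
import math
import random

def simulate_ber(snr_db, n_bits, seed=7):
    """Monte Carlo BER of coherent BPSK over AWGN, with a 95%
    normal-approximation confidence interval on the estimate."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * snr))  # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        received = bit + rng.gauss(0.0, sigma)
        if (received >= 0) != (bit > 0):
            errors += 1
    p = errors / n_bits
    half = 1.96 * math.sqrt(p * (1 - p) / n_bits)
    return p, (max(0.0, p - half), p + half)

ber, (lo, hi) = simulate_ber(snr_db=0, n_bits=20000)
print(f"BER = {ber:.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")
```

    The half-width shrinks as 1/sqrt(n_bits), which is the mechanism behind the abstract's observation that the confidence interval narrows as the number of simulation runs grows.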

  4. FPGA-based Bit-Error-Rate Tester for SEU-hardened Optical Links

    CERN Document Server

    Detraz, S; Moreira, P; Papadopoulos, S; Papakonstantinou, I; Seif El Nasr, S; Sigaud, C; Soos, C; Stejskal, P; Troska, J; Versmissen, H

    2009-01-01

    The next generation of optical links for future High-Energy Physics experiments will require components qualified for use in radiation-hard environments. To cope with radiation induced single-event upsets, the physical layer protocol will include Forward Error Correction (FEC). Bit-Error-Rate (BER) testing is a widely used method to characterize digital transmission systems. In order to measure the BER with and without the proposed FEC, simultaneously on several devices, a multi-channel BER tester has been developed. This paper describes the architecture of the tester, its implementation in a Xilinx Virtex-5 FPGA device and discusses the experimental results.

  5. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    Jiayu Zhang

    2018-05-01

    Full Text Available The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In practice, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate has an impact on inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.

  6. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
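A simplified sketch of the worst-case computation described above: for each interim outcome the second-stage sample size is chosen to maximize the conditional type 1 error of the naive fixed-sample z-test, and that maximum is integrated over the null distribution of the interim statistic. This collapses the two-arm comparison to a one-sample z-test, and the first-stage size and search grids are illustrative, so the resulting number only indicates the flavor of the paper's exact calculation.

```python
import numpy as np
from math import erfc, sqrt

alpha = 0.025                          # one-sided nominal level
z_alpha = 1.959964                     # upper-alpha standard normal quantile
n1 = 50                                # first-stage sample size (illustrative)

def upper_tail(x):                     # P(Z > x) for standard normal Z
    return 0.5 * erfc(x / sqrt(2.0))

def cond_error(z1, n2):
    # P(naive pooled z-statistic > z_alpha | interim statistic z1) under H0,
    # when a second stage of size n2 is appended and the final test ignores
    # the adaptation.
    n = n1 + n2
    return upper_tail((z_alpha * sqrt(n) - z1 * sqrt(n1)) / sqrt(n2))

# Worst case: for each interim outcome z1, pick the n2 maximizing the
# conditional error, then integrate over the H0 density of z1 (trapezoid rule).
z1_grid = np.linspace(-4.0, 4.0, 201)
n2_grid = np.linspace(1.0, 2000.0, 400)
worst = np.array([max(cond_error(z, m) for m in n2_grid) for z in z1_grid])
phi = np.exp(-z1_grid**2 / 2) / sqrt(2 * np.pi)
f = worst * phi
max_type1 = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z1_grid)))

print(f"nominal alpha = {alpha}, worst-case type 1 error ~ {max_type1:.4f}")
```

Even this stripped-down version shows an inflation of the nominal level by roughly a factor of two, in line with the qualitative message of the abstract.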

  7. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  8. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

    2012-01-01

    In this paper, we derived an efficient simulation method to evaluate the error rate of wireless communication system. Coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

  9. Low dose rate gamma ray induced loss and data error rate of multimode silica fibre links

    International Nuclear Information System (INIS)

    Breuze, G.; Fanet, H.; Serre, J.

    1993-01-01

    Fiber optics data transmission from numerous multiplexed sensors is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady state gamma ray exposure is studied as a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber

  10. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2014-04-01

    The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of linear detectors such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.

  11. Assessment of the rate and etiology of pharmacological errors by nurses of two major teaching hospitals in Shiraz

    Directory of Open Access Journals (Sweden)

    Fatemeh Vizeshfar

    2015-06-01

    Full Text Available Medication errors have serious consequences for patients, their families and caregivers. Reduction of these faults by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and medical wards. This cross-sectional analytic study was done on 101 registered nurses who had the duty of drug administration in medical pediatric and adult wards. Data were collected by a questionnaire including demographic information, self-reported faults, etiology of medication error, and researcher observations. The results showed that nurses' faults in pediatric wards were 51.6% and in adult wards were 47.4%. The most common faults in adult wards were later or sooner drug administration (48.6%), and administration of drugs without prescription and administering wrong drugs were the most common medication errors in pediatric wards (each one 49.2%). According to researchers' observations, the medication error rate of 57.9% was rated low in adult wards and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses didn't explain the reason and type of drug they were going to administer to patients. Independent t-test showed a significant change in fault observations in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have shown medication errors all over the world, especially in pediatric wards. However, by designing a suitable report system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.

  12. Hardware Implementation of A Non-RLL Soft-decoding Beacon-based Visible Light Communication Receiver

    OpenAIRE

    Nguyen, Duc-Phuc; Le, Dinh-Dung; Tran, Thi-Hong; Huynh, Huu-Thuan; Nakashima, Yasuhiko

    2018-01-01

    Visible light communication (VLC)-based beacon systems, which usually transmit identification (ID) information in small data frames, are widely applied in indoor localization applications. Flicker of the LED light should be avoided in any VLC system. Current flicker mitigation solutions based on run-length limited (RLL) codes suffer from reduced code rates, or are limited to hard-decoding forward error correction (FEC) decoders. Recently, soft-decoding techniques of RLL-...

  13. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    Science.gov (United States)

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  14. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
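The pass-rate metric used in the study above can be illustrated with a brute-force global gamma index (3%/3mm) on a toy 2D dose grid. The Gaussian "dose" field and the 1 mm shift standing in for an MLC offset are illustrative stand-ins, not the study's data or its treatment planning system.

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """Brute-force global gamma: dose_tol of max dose, dist_mm DTA."""
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    pts = np.stack([yy.ravel() * spacing_mm, xx.ravel() * spacing_mm], axis=1)
    ref_flat, evl_flat = ref.ravel(), evl.ravel()
    dmax = ref.max()
    gammas = np.empty(ref_flat.size)
    for i in range(ref_flat.size):
        dd = (evl_flat - ref_flat[i]) / (dose_tol * dmax)    # dose-difference term
        dr = np.linalg.norm(pts - pts[i], axis=1) / dist_mm  # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dr**2).min()
    return float((gammas <= 1.0).mean())

# Reference: a smooth Gaussian "dose" field; evaluated: the same field
# shifted by 1 mm in x, mimicking a small systematic MLC-bank offset.
n, spacing = 41, 1.0                              # 41x41 grid, 1 mm spacing
ax = (np.arange(n) - n // 2) * spacing
X, Y = np.meshgrid(ax, ax)
ref = np.exp(-(X**2 + Y**2) / (2 * 10.0**2))
shifted = np.exp(-(((X - 1.0) ** 2) + Y**2) / (2 * 10.0**2))

rate = gamma_pass_rate(ref, shifted, spacing)
print(f"gamma (3%/3mm) pass rate for a 1 mm shift: {rate:.3f}")
```

A smooth field shifted by 1 mm passes 3%/3mm easily, which echoes the study's finding that conventional criteria can be insensitive to modest MLC errors.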

  15. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    International Nuclear Information System (INIS)

    Sterling, D; Ehler, E

    2015-01-01

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing

  16. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    Science.gov (United States)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester-encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) requirements of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  17. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its accuracy has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single-lens reflex (DSLR) camera, four photographs of various sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of wound areas were automatically calculated. The relative error (RE) of this method with regard to wound sizes and types of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (8.1303±4.8236 vs 3.9431±2.9772). However, for wounds with diameter below 3 cm, the REs of the average values of four photographs were below 5%. In addition, there was no difference in the average value of wound area taken by the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter < 3 cm), our newly developed automated wound area calculation method can be applied to plenty of photographs, and their average values are a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
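The relative-error metric and the averaging effect described above can be sketched as follows; the measurements are made-up examples, not the study's data.

```python
# Relative error (RE, in percent) of a measured wound area against a
# reference value, as used in the abstract above.
def relative_error(measured_cm2, reference_cm2):
    return abs(measured_cm2 - reference_cm2) / reference_cm2 * 100.0

# Averaging several photographs of the same wound reduces the per-shot error.
shots = [3.05, 2.88, 3.11, 2.97]          # four photos of a ~3 cm^2 wound
reference = 3.00
per_shot = [relative_error(a, reference) for a in shots]
averaged = relative_error(sum(shots) / len(shots), reference)

print("per-shot RE:", [f"{e:.2f}%" for e in per_shot])
print(f"RE of the average of 4 shots: {averaged:.2f}%")
```

Because the per-shot errors partially cancel, the RE of the averaged area is smaller than any single shot's, which is the rationale for the study's four-photograph protocol.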

  18. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    Science.gov (United States)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  19. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    Science.gov (United States)

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical rate and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT.
Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions.

  20. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon the wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  1. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
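One concrete instance of the two performance measures discussed above, computed from the same fading distribution: Monte Carlo estimates of the ergodic capacity and the average BER for BPSK over Rayleigh fading, checked against the closed-form average BER. The average SNR is illustrative, and this is only a numerical companion to the abstract, not its proposed capacity-BER relations.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)

avg_snr_db = 10.0
avg_snr = 10 ** (avg_snr_db / 10)
g = rng.exponential(avg_snr, size=500_000)   # Rayleigh fading -> exponential SNR

capacity = float(np.mean(np.log2(1.0 + g)))                 # ergodic capacity, bit/s/Hz
avg_ber = float(np.mean([0.5 * erfc(sqrt(x)) for x in g]))  # E[Q(sqrt(2g))] for BPSK

# Closed-form average BER for BPSK over Rayleigh fading:
# Pb = 0.5 * (1 - sqrt(avg_snr / (1 + avg_snr)))
pb_exact = 0.5 * (1 - sqrt(avg_snr / (1 + avg_snr)))

print(f"ergodic capacity ~ {capacity:.3f} bit/s/Hz")
print(f"average BER: simulated {avg_ber:.5f} vs exact {pb_exact:.5f}")
```

Both quantities are expectations over the same SNR distribution, which is precisely why relations of the kind the abstract proposes can tie them together without full channel statistics.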

  2. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst-case" scenarios, i.e., sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan; Alouini, Mohamed-Slim; Aissa, Sonia

    2012-01-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  4. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2012-05-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  5. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference

    KAUST Repository

    Soury, Hamza

    2017-07-31

    This paper develops a mathematical paradigm to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). Particularly, we study the dominant intra-cell interferer problem that appears between HD users scheduled on the same FD channel. The distribution of the dominant interference is first characterized via its distribution function, which is derived in closed form. Assuming Nakagami-m fading, the probability of error for different modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function of the signal-to-interference ratio when compared to an idealized HD interference- and noise-free BS operation.

  6. High dose rate brachytherapy for the treatment of soft tissue sarcoma of the extremity

    International Nuclear Information System (INIS)

    Speight, J.L.; Streeter, O.E.; Chawla, S.; Menendez, L.E.

    1996-01-01

    Purpose: We examined the role of preoperative neoadjuvant chemoradiation and adjuvant high-dose rate brachytherapy in the management of prognostically unfavorable soft tissue sarcomas of the extremities. Our goal was to examine the effect of high dose rate interstitial brachytherapy (HDR IBT) in reducing the risk of local recurrence following limb-sparing resection, as well as shortening treatment duration. Materials and methods: Eleven patients, ranging in age from 31 to 73 years old, with soft tissue sarcoma of the extremity were treated at USC/Norris Comprehensive Cancer Center during 1994 and 1995. All patients had biopsy-proven soft tissue sarcoma, and all were suitable candidates for limb-sparing surgery. All lesions were greater than 5 cm in size and were primarily high grade. Tumor histologies included malignant fibrous histiocytoma (45%), liposarcoma (18%) and leiomyosarcoma, synovial cell sarcoma and spindle cell sarcoma (36%). Sites of tumor origin were the lower extremity (55%), upper extremity (18%) and buttock (9%); 1 patient (9%) had lesions in both the upper and lower extremity. Patients received HDR IBT following combined chemotherapy and external beam irradiation (EBRT) and en bloc resection of the sarcoma. Neoadjuvant chemotherapy consisted of three to four cycles of either Ifosfamide/Mesna with or without Adriamycin, or Mesna, Adriamycin, Ifosfamide and Dacarbazine. One patient received Cis-platin in addition to Ifos/Adr. A minimum of two cycles of chemotherapy were administered prior to EBRT. Additional cycles of chemotherapy were completed concurrently with EBRT but prior to HDR IBT. Preoperative EBRT doses ranging from 40 to 59.4 Gy were given in daily fractions of 180 to 200 cGy. Following en bloc resection, HDR IBT was administered using the Omnitron™ 2000 remote afterloading system. Doses ranging from 13 to 30 Gy were delivered to the surgical tumor bed at depths of 0.5 mm to 0.75 mm from the radioactive source. 
Results: median follow-up was

  7. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
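The closed-form-versus-simulation validation described above can be illustrated, in a much simpler single-hop setting, by checking a Monte Carlo BER estimate for BPSK over AWGN against the textbook expression Pb = 0.5·erfc(√(Eb/N0)). This sketch is illustrative only; it does not reproduce the paper's relay analysis:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def bpsk_ber_sim(snr_db, n_bits=1_000_000):
    """Monte Carlo BER of BPSK over AWGN at the given Eb/N0 (dB)."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                         # 0 -> +1, 1 -> -1
    noise = rng.normal(0, math.sqrt(1 / (2 * snr)), n_bits)
    detected = (symbols + noise) < 0               # hard decision -> bit 1
    return np.mean(detected != bits)

def bpsk_ber_theory(snr_db):
    """Closed form: Pb = Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0))."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(snr))

for snr_db in (0, 4, 8):
    print(snr_db, "dB:", bpsk_ber_sim(snr_db), "vs", bpsk_ber_theory(snr_db))
```

At moderate SNR the two estimates agree to within Monte Carlo noise, which is exactly the kind of agreement the paper reports for its far more involved end-to-end expression.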

  9. MANAGEMENT SOFT-FACTORS IN INDUSTRIES

    Directory of Open Access Journals (Sweden)

    L. V. Fatkin

    2012-01-01

    Existing management theories and concepts give no proper attention to the systematization and analysis of non-material management factors, the so-called «soft-factors». In industries, management soft-factors may be treated in a broader way. An example of such a broader treatment is given for the system of state regulation of foreign trade activities in industries, along with the specification, determination and rating of organizational and administrative management soft-factors.

  10. Radiosensitivity of soft tissue sarcomas

    International Nuclear Information System (INIS)

    Hirano, Toru; Iwasaki, Katsuro; Suzuki, Ryohei; Monzen, Yoshio; Hombo, Zenichiro

    1989-01-01

    The correlation between the effectiveness of radiation therapy and the histology of soft tissue sarcomas was investigated. Of 31 cases with a soft tissue sarcoma of an extremity treated by conservative surgery and postoperative radiation of 3,000-6,000 cGy, local recurrence occurred in 12: 5 of 7 synovial sarcomas, 4 of 9 MFHs, 1 of 8 liposarcomas, none of 4 rhabdomyosarcomas and 2 of 3 others. For histological subtyping, the 31 soft tissue sarcomas were divided into spindle cell, pleomorphic cell, myxoid and round cell types, with recurrence rates of 75%, 33.3%, 16.7% and 0%, respectively. From this remarkable difference in recurrence rate, it was suggested that round cell and myxoid soft tissue sarcomas show high radiosensitivity compared with the low sensitivity of the spindle cell type. Clarifying the degree of radiosensitivity is helpful in deciding on limb-salvage management in soft tissue sarcomas of an extremity. (author)

  11. Random access to mobile networks with advanced error correction

    Science.gov (United States)

    Dippold, Michael

    1990-01-01

    A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit in the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft decisions, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.

  12. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results at low computational cost compared with other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, and chaos synchronization, are assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which demonstrate the accuracy of our approach.

  13. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients...... cabinet, automated dispensing and barcode medication administration; (2) non-patient specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3 week periods. The error rates were calculated by dividing the number of doses with one...

  14. Optimization of intelligent infusion pump technology to minimize vasopressor pump programming errors.

    Science.gov (United States)

    Vadiei, Nina; Shuman, Carrie A; Murthy, Manasa S; Daley, Mitchell J

    2017-08-01

    There is a lack of data evaluating the impact of hard limit implementation in intelligent infusion pump technology (IIPT). The purpose of this study was to determine whether incorporation of vasopressor upper hard limits (UHL) into IIPT increases the efficacy of alerts by preventing pump programming errors. Retrospective review from five hospitals within a single healthcare network between April 1, 2013 and May 31, 2014. A total of 65,680 vasopressor data entries were evaluated; 19,377 prior to hard limit implementation and 46,303 after hard limit implementation. The primary outcome was the percent of effective alerts. The secondary outcome was the proportional dose increase from the soft limit provided. A reduction in alert rate occurred after incorporation of hard limits into the IIPT drug library (pre-UHL 4.7% vs. post-UHL 4.0%), with a subsequent increase in the number of errors prevented, as represented by a higher effective alert rate (pre-UHL 23.0% vs. post-UHL 37.3%; p < 0.001). The proportional dose increase was significantly reduced (pre-UHL 188% ± 380% vs. post-UHL 95% ± 128%; p < 0.001). Incorporation of UHLs into IIPT in a multi-site health system with varying intensive care unit and emergency department acuity increases alert effectiveness, reduces dosing errors, and reduces the magnitude of dosing errors that reach the patient.

  15. Considering the role of time budgets on copy-error rates in material culture traditions: an experimental assessment.

    Science.gov (United States)

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.
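The "threshold" pattern reported above — no significant difference between the 20- and 15-minute conditions, a marked jump at 10 minutes — corresponds to a standard one-way comparison with pairwise follow-ups. A sketch on simulated data (group means, spreads and sample sizes are invented here purely for illustration, not taken from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated shape copy-error scores (arbitrary units) under a threshold
# model: the 20- and 15-minute groups share a mean, 10 minutes is worse.
e20 = rng.normal(1.00, 0.25, 30)
e15 = rng.normal(1.05, 0.25, 30)
e10 = rng.normal(1.60, 0.30, 30)

f_stat, p_overall = stats.f_oneway(e20, e15, e10)   # any difference at all?
p_20_15 = stats.ttest_ind(e20, e15).pvalue          # adjacent conditions
p_15_10 = stats.ttest_ind(e15, e10).pvalue
print(f"overall p={p_overall:.2g}, 20v15 p={p_20_15:.2g}, 15v10 p={p_15_10:.2g}")
```

Under this toy model the overall test and the 15-versus-10 comparison come out significant while the 20-versus-15 difference is small, mirroring the threshold interpretation.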

  16. Hydraulic Soft Yaw System Load Reduction and Prototype Results

    DEFF Research Database (Denmark)

    Stubkier, Søren; Pedersen, Henrik C.; Markussen, Kristian

    2013-01-01

    Introducing a hydraulic soft yaw concept for wind turbines leads to significant load reductions in the wind turbine structure. The soft yaw system operates like the shock absorbers on a car, absorbing the loads from turbulent wind conditions instead of leading them into the stiff wind… turbine structure. Results presented show fatigue reductions of up to 40% and ultimate load reductions of up to 19%. The ultimate load reduction increases even more when the overload protection system in the hydraulic soft yaw system is introduced, and results show how the exact extreme load cut-off… operates. Further, it is analyzed how the soft yaw system influences the power production of the turbine. It is shown that the influence is minimal, but at larger yaw errors the effect is positive. Due to the implemented functions in the hydraulic soft yaw system, such as even load distribution on the pinions…

  17. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determining an upper bound on the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the results on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests on non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated data sets and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
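The Beta-Binomial machinery behind such credibility intervals is easy to sketch: a Beta(a0, b0) prior combined with k errors in n test cases gives a Beta(a0 + k, b0 + n − k) posterior. Below, the uniform Beta(1, 1) prior is compared with a hypothetical informative prior standing in for an ME-derived one (the pseudo-counts a0 = 4, b0 = 36 are invented for illustration, not taken from the paper):

```python
from scipy.stats import beta

def error_rate_ci(n_errors, n_test, a0=1.0, b0=1.0, level=0.95):
    """Bayesian credible interval for the true error rate:
    Beta(a0, b0) prior + Binomial likelihood -> Beta posterior."""
    a = a0 + n_errors
    b = b0 + (n_test - n_errors)
    lo = beta.ppf((1 - level) / 2, a, b)
    hi = beta.ppf(1 - (1 - level) / 2, a, b)
    return lo, hi

# Uniform prior (no prior knowledge) on a small test set:
print(error_rate_ci(5, 50))
# Hypothetical informative prior (prior mean 0.1, strength 40 pseudo-counts):
print(error_rate_ci(5, 50, a0=4.0, b0=36.0))
```

With only 50 test examples the uniform-prior interval is wide; the informative prior concentrates the posterior and tightens the interval, which is the effect the ME-derived priors are shown to deliver.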

  18. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    Science.gov (United States)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8×8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8×8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used in operating the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken on a standard BER tester for verification. We found that the results of the two methods were of the same order and agreed to within 50%. The integrated interconnect was investigated in an optoelectronic processing architecture as a digital halftoning image processor. Error-diffusion networks implemented by the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
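Estimating BER from eye-diagram statistics under a Gaussian noise model is a standard technique: from the mean one/zero levels and their standard deviations one forms the Q-factor and maps it to a BER. A minimal sketch of that general mapping (not the authors' exact implementation; the numbers below are made up):

```python
import math

def ber_from_eye(mu1, mu0, sigma1, sigma0):
    """Gaussian-noise BER estimate from eye-diagram statistics:
    Q = (mu1 - mu0) / (sigma1 + sigma0),  BER = 0.5 * erfc(Q / sqrt(2))."""
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return 0.5 * math.erfc(q / math.sqrt(2))

# Example eye statistics chosen so that Q is about 6,
# the classic rule-of-thumb level for BER near 1e-9:
print(ber_from_eye(mu1=1.0, mu0=0.0, sigma1=0.08, sigma0=0.0866))
```

This is why a clean, wide-open eye on the oscilloscope translates directly into an "instantaneous" BER figure without waiting for enough bits to count errors directly.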

  19. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the regenerating nonlinearity is investigated. It is shown that an increase in nonlinearity can compensate for an increase in noise figure or decrease in signal power. Furthermore, the influence of the improvement in signal extinction ratio along the cascade and the importance of choosing the proper threshold…

  20. High strain rate characterization of soft materials: past, present and possible futures

    Science.gov (United States)

    Siviour, Clive

    2015-06-01

    The high strain rate properties of low impedance materials have long been of interest to the community: the very first paper by Kolsky on his eponymous bars included data from man-made polymers and natural rubber. However, it has also long been recognized that characterizing soft or low impedance specimens under dynamic loading presents a number of challenges, mainly owing to the low sound speed in, and low stresses supported by, these materials. Over the past 20 years, significant progress has been made in high rate testing techniques, including better experimental design, more sensitive data acquisition and better understanding of specimen behavior. Further, a new generation of techniques, in which materials are characterized using travelling waves, rather than in a state of static equilibrium, promise to turn those properties that were previously a drawback into an advantage. This paper will give an overview of the history of high rate characterization, the current state of the art after an exciting couple of decades and some of the techniques currently being developed that have the potential to offer increased quality data in the future.

  1. Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo

    2009-01-01

    We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER…

  2. Comparison of responses of thermoluminescent dosemeters irradiated by soft x-rays at very low and very high dose rate levels

    International Nuclear Information System (INIS)

    Pietrikova-Farnikova, M.; Krasa, J.; Juha, L.

    1994-01-01

    Recent great progress in the construction and application of bright sources of soft X-rays has given a strong impetus to the development of methods for their dosimetric diagnostics. Soft X-ray sources are primarily represented by synchrotron radiation sources and by sources based on laser-produced plasma, including X-ray lasers. Their characteristics spread over a very wide range of photon energies, peak and average powers, and densities. Our preliminary experiments indicate that thermoluminescent dosemeters can serve as a suitable tool for determining these characteristics. The problem lies in the fact that routine use of thermoluminescent dosemeters for the dosimetry of soft X-rays requires their spectral calibration, which can be carried out with low peak power sources (synchrotron radiation and radionuclide sources). In contrast, many important sources, especially those based on laser-produced plasmas, exhibit a very high peak power, i.e. the dosemeters are irradiated at an extremely high dose rate. In comparative experiments carried out with laser-produced plasmas and radionuclides using TLD 200 (CaF2:Dy) and GR 200A (LiF:Mg,Cu,P), it was satisfactorily proven that total thermoluminescent signals are independent of the dose rate. The dependence of glow-curve shapes on dose, dose rate and photon energy was also determined.

  3. COMPARISON OF THE BIT ERROR RATE OF REED-SOLOMON AND BOSE-CHAUDHURI-HOCQUENGHEM CODES USING 32-FSK MODULATION

    Directory of Open Access Journals (Sweden)

    Eva Yovita Dwi Utami

    2016-11-01

    Reed-Solomon (RS) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes are error-correcting codes of the cyclic block code family. Error-correcting codes are needed in communication systems to reduce errors in the transmitted information. This paper presents the BER performance of communication systems using the RS code, the BCH code, and no coding, with 32-FSK modulation over Additive White Gaussian Noise (AWGN), Rayleigh and Rician channels. Error-reduction capability is measured by the resulting Bit Error Rate (BER). The results show that, as SNR increases, the RS code lowers the BER more steeply than the BCH-coded system, whereas the BCH code has the advantage at low SNR, giving a better BER than the RS-coded system.
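Implementing RS or BCH codecs is beyond a short sketch, but the basic claim — that a block code trades redundancy for a lower BER on a noisy channel — can be demonstrated with the much smaller Hamming(7,4) code, used here purely as a stand-in for the codes compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hamming(7,4) with parity bits at positions 1, 2, 4 (1-indexed); the
# syndrome of a received word is the 1-based index of a single bit flip.
def encode(d):                      # d: (n_blocks, 4) data bits
    c = np.zeros((d.shape[0], 7), dtype=int)
    c[:, [2, 4, 5, 6]] = d          # data go to positions 3, 5, 6, 7
    c[:, 0] = d[:, 0] ^ d[:, 1] ^ d[:, 3]   # p1 covers positions 3, 5, 7
    c[:, 1] = d[:, 0] ^ d[:, 2] ^ d[:, 3]   # p2 covers positions 3, 6, 7
    c[:, 3] = d[:, 1] ^ d[:, 2] ^ d[:, 3]   # p4 covers positions 5, 6, 7
    return c

def decode(r):                      # r: (n_blocks, 7) received bits
    syn = ((r[:, 0] ^ r[:, 2] ^ r[:, 4] ^ r[:, 6])
           + 2 * (r[:, 1] ^ r[:, 2] ^ r[:, 5] ^ r[:, 6])
           + 4 * (r[:, 3] ^ r[:, 4] ^ r[:, 5] ^ r[:, 6]))
    r = r.copy()
    rows = np.flatnonzero(syn)
    r[rows, syn[rows] - 1] ^= 1     # correct the flagged single-bit error
    return r[:, [2, 4, 5, 6]]

p = 0.02                            # raw channel bit-flip probability
data = rng.integers(0, 2, (200_000, 4))
received = encode(data) ^ (rng.random((200_000, 7)) < p)
coded_ber = np.mean(decode(received) != data)
print(f"uncoded BER = {p}, coded BER = {coded_ber:.5f}")
```

At a 2% raw flip rate the single-error-correcting code already cuts the data BER well below the uncoded channel; RS and BCH codes extend the same idea to multi-error correction over larger blocks, which is what produces the steeper BER curves in the paper.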

  4. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    Science.gov (United States)

    2014-04-01

    Integral Role in Soft Tissue Mechanics, K. Troyer, D. Estep, and C. Puttlitz, Acta Biomaterialia 8 (2012), 234-244 • A posteriori analysis of multirate...2013, submitted • A posteriori error estimation for the Lax-Wendroff finite difference scheme, J. B. Collins, D. Estep, and S. Tavener, Journal of...oped over nearly six decades of activity, and the major developments form a highly interconnected web. We do not attempt to review the history of

  5. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop, in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and Rayleigh fading channels. © 2011 IEEE.

  6. Errors of car wheels rotation rate measurement using roller follower on test benches

    Science.gov (United States)

    Potapov, A. S.; Svirbutovich, O. A.; Krivtsov, S. N.

    2018-03-01

    The article deals with errors in measuring wheel rotation rate on roller test benches, which depend on vehicle speed. Monitoring of vehicle performance under operating conditions is performed on roller test benches, which are not flawless: they have drawbacks affecting the accuracy of vehicle performance monitoring. An increase in the basic velocity of the vehicle requires an increase in the accuracy of wheel rotation rate monitoring, which determines how accurately the operating mode of a tested vehicle's wheel can be identified. Ensuring measurement accuracy for the rotation velocity of the rollers themselves is not an issue; the problem arises when measuring the rotation velocity of a car wheel. The higher the rotation velocity of the wheel, the lower the accuracy of measurement. At present, wheel rotation frequency monitoring on roller test benches is carried out by follow-up systems whose sensors are rollers following wheel rotation. The rollers of the system are not kinematically linked to the supporting rollers of the test bench; the roller follower is forced against the wheels of the tested vehicle by a spring-lever mechanism. Experience with this test bench equipment has shown that measurement accuracy is satisfactory at the low speeds of vehicles diagnosed on roller test benches. At higher diagnostic speeds, rotation velocity measurement errors occur in both braking and pulling modes because the roller spins about the tire tread. The paper shows oscillograms of changes in wheel rotation velocity and of the measurement system's signals when testing a vehicle on roller test benches at specified speeds.

  7. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...
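A minimal data-generating model of the kind the abstract targets is a birth-death Markov jump process. An exact (Gillespie) simulation of such a process, with noisy observations layered on top, is easy to sketch; the parameter values below are arbitrary illustrations, not from the paper:

```python
import random

random.seed(4)

def gillespie_birth_death(birth, death, x0, t_end):
    """Exact (Gillespie) simulation of a birth-death Markov jump process:
    births at constant rate `birth`, deaths at rate `death * x`."""
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        total = birth + death * x
        if total == 0.0:
            break                              # absorbed: no more jumps
        t += random.expovariate(total)         # time to the next jump
        if t > t_end:
            break
        x += 1 if random.random() < birth / total else -1
        path.append((t, x))
    return path

def observe(path, times, sigma):
    """Noisy observations y_k = x(t_k) + eps_k, eps_k ~ N(0, sigma^2);
    these feed the state-space model in which the rates are inferred."""
    out = []
    for tk in times:
        xk = next(x for t, x in reversed(path) if t <= tk)
        out.append(xk + random.gauss(0.0, sigma))
    return out

path = gillespie_birth_death(birth=2.0, death=0.1, x0=5, t_end=50.0)
obs = observe(path, times=range(0, 51, 5), sigma=1.0)
```

The inference problem the paper addresses is the reverse direction: given only `obs` (and not the full jump path), sample the posterior over `birth` and `death` with MCMC or particle filters, without resorting to a diffusion approximation.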

  8. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    International Nuclear Information System (INIS)

    Wirthlin, M J; Harding, A; Takai, H

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine whether the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory.
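Working backwards from the figures quoted above gives the implied per-device numbers. Note that the bit counts below are inferred here from the quoted rates, not stated in the record:

```python
# Per-bit upset rates estimated in the record:
cram_rate = 1.1e-10     # configuration RAM, upsets/bit/s
bram_rate = 9.06e-11    # block RAM, upsets/bit/s

# Per-device rates quoted for the 7K325:
cram_dev = 6.85e-3      # upsets/device/s (configuration memory)
bram_dev = 1.49e-3      # upsets/device/s (block memory)

# Implied bit counts (inferred by division, not taken from the record):
print(f"config bits ~ {cram_dev / cram_rate:.2e}")
print(f"bram bits   ~ {bram_dev / bram_rate:.2e}")

# Mean time between upsets for the whole device:
mtbu = 1.0 / (cram_dev + bram_dev)
print(f"device MTBU ~ {mtbu:.0f} s (~{mtbu / 60:.1f} min)")
```

An upset roughly every couple of minutes in the quoted environment is why configuration scrubbing and other mitigation would be needed before deploying such a part in the detector.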

  9. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on this effect. We produced point sources of 18.5 to 185 MBq of 99mTcO4− in 2 mL syringes, and acquired 30 frames of dynamic images at 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source with 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in average count rates for the 18.5 to 92.5 MBq sources in the analysis of 10 to 60 seconds/frame at 10-second intervals in the first experiment (p>0.05), but average count rates were significantly low for sources over 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression on the count rates of the 5 gamma cameras acquired over 90 minutes in the second analysis, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. We found no abnormal fluctuation in the χ² test on count rates (p>0.02), and Levene's F-test showed homogeneity of variance among the gamma cameras (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Finally, calculating the T1/2 error for gradient changes from -0.25% to +0.25% showed that the longer T1/2 is, or the higher the gradient, the larger the error. Estimating this for the fourth camera, which has the highest gradient, no T1/2 error was seen within 60 minutes. In conclusion, scintillation gamma cameras in the medical field require rigorous quality management of radiation measurement. In particular, we found a
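The T1/2 effect at issue is simply radioactive decay during the acquisition: for 99mTc (T1/2 ≈ 6.02 h) the count rate falls measurably over a 90-minute acquisition but only negligibly within a single 60-second frame. A back-of-the-envelope sketch (not the paper's QC procedure):

```python
import math

T_HALF_MIN = 6.02 * 60.0            # 99mTc half-life, in minutes

def decay_factor(t_min):
    """Fraction of the initial activity remaining after t_min minutes."""
    return math.exp(-math.log(2.0) * t_min / T_HALF_MIN)

print(f"after one 60 s frame: {1 - decay_factor(1.0):.4%} lost")
print(f"after 90 min:         {1 - decay_factor(90.0):.2%} lost")
```

Because the decline over tens of minutes is systematic, any extra slope in the measured count-rate trend beyond this decay is attributable to the camera, which is what makes the gradient a candidate quality-control indicator.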

  10. Figure tolerance of a Wolter type I mirror for a soft-x-ray microscope

    International Nuclear Information System (INIS)

    Chon, Kwon Su; Namba, Yoshiharu; Yoon, Kwon-Ha

    2007-01-01

    The demand for an x-ray microscope has received much attention because of the desire to study living cells at a high resolution and in a hydrated environment. A Wolter type I mirror used for soft-x-ray microscope optics has many advantages. From the mirror fabrication point of view, it is necessary to perform tolerance analysis, particularly with respect to figure errors that considerably degrade the image quality. The figure tolerance of a Wolter type I mirror for a biological application in terms of the image quality and the state-of-the-art fabrication technology is discussed. The figure errors rapidly destroyed the image quality, and the required slope error depended on the detector used in the soft-x-ray microscope

  11. Residents' Ratings of Their Clinical Supervision and Their Self-Reported Medical Errors: Analysis of Data From 2009.

    Science.gov (United States)

    Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid

    2018-04-01

    Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than data from a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.

  12. Bipolar soft connected, bipolar soft disconnected and bipolar soft compact spaces

    Directory of Open Access Journals (Sweden)

    Muhammad Shabir

    2017-06-01

    Bipolar soft topological spaces are mathematical structures for interpreting data frameworks. Bipolar soft theory captures the core features of data granules. Bipolarity is important for distinguishing between positive information, which is guaranteed to be possible, and negative information, which is forbidden or surely false. Connectedness and compactness are among the most important fundamental topological properties. These properties highlight the main features of topological spaces and distinguish one topology from another. With this in mind, we explore the bipolar soft connectedness, bipolar soft disconnectedness and bipolar soft compactness properties of bipolar soft topological spaces. Moreover, we introduce the notions of bipolar soft disjoint sets, bipolar soft separation and the bipolar soft hereditary property, and study bipolar soft connected and disconnected spaces. Having given a detailed picture of bipolar soft connected and disconnected spaces, we investigate bipolar soft compact spaces and derive some results related to this concept.

  13. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    International Nuclear Information System (INIS)

    Chau, H.F.

    2002-01-01

A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme that makes use of an adaptive privacy amplification procedure with two-way classical communication is reported. It is then proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 - 0.1√5 ≈ 27.6%, thereby making it the most error-resistant scheme known to date.
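The tolerable error rate quoted in this abstract is a closed-form number; a quick stdlib-only sketch (no assumptions beyond the formula quoted above) confirms the 27.6% figure:

```python
import math

# Chau (2002): the scheme tolerates a quantum channel bit error rate of up
# to 0.5 - 0.1*sqrt(5), i.e. (5 - sqrt(5))/10, quoted as ~27.6% above.
threshold = 0.5 - 0.1 * math.sqrt(5)
print(f"{threshold:.4%}")  # 27.6393%
```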

  14. A novel multitemporal insar model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai

    2014-01-01

be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long

  15. [The effectiveness of error reporting promoting strategy on nurse's attitude, patient safety culture, intention to report and reporting rate].

    Science.gov (United States)

    Kim, Myoungsoo

    2010-04-01

The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p&lt;.001). No significant differences were found for organizational culture and intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving behavior of reporting. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.

  16. Soft systems methodology: other voices

    OpenAIRE

    Holwell, Sue

    2000-01-01

    This issue of Systemic Practice and Action Research, celebrating the work of Peter Checkland, in the particular nature and development of soft systems methodology (SSM), would not have happened unless the work was seen by others as being important. No significant contribution to thinking happens without a secondary literature developing. Not surprisingly, many commentaries have accompanied the ongoing development of SSM. Some of these are insightful, some full of errors, and some include both...

  17. Estimating patient-specific soft-tissue properties in a TKA knee.

    Science.gov (United States)

    Ewing, Joseph A; Kaufman, Michelle K; Hutter, Erin E; Granger, Jeffrey F; Beal, Matthew D; Piazza, Stephen J; Siston, Robert A

    2016-03-01

    Surgical technique is one factor that has been identified as critical to success of total knee arthroplasty. Researchers have shown that computer simulations can aid in determining how decisions in the operating room generally affect post-operative outcomes. However, to use simulations to make clinically relevant predictions about knee forces and motions for a specific total knee patient, patient-specific models are needed. This study introduces a methodology for estimating knee soft-tissue properties of an individual total knee patient. A custom surgical navigation system and stability device were used to measure the force-displacement relationship of the knee. Soft-tissue properties were estimated using a parameter optimization that matched simulated tibiofemoral kinematics with experimental tibiofemoral kinematics. Simulations using optimized ligament properties had an average root mean square error of 3.5° across all tests while simulations using generic ligament properties taken from literature had an average root mean square error of 8.4°. Specimens showed large variability among ligament properties regardless of similarities in prosthetic component alignment and measured knee laxity. These results demonstrate the importance of soft-tissue properties in determining knee stability, and suggest that to make clinically relevant predictions of post-operative knee motions and forces using computer simulations, patient-specific soft-tissue properties are needed. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  18. [Diagnostic and organizational error in head injuries].

    Science.gov (United States)

Zaba, Czesław; Zaba, Zbigniew; Swiderski, Paweł; Lorkiewicz-Muszyńska, Dorota

    2009-01-01

    The study aimed at presenting a case of a diagnostic and organizational error involving lack of detection of foreign body presence in the soft tissues of the head. Head radiograms in two projections clearly demonstrated foreign bodies that resembled in shape flattened bullets, which could not have been missed upon evaluation of the X-rays. On the other hand, description of the radiograms entered by the attending physicians to the patient's medical record indicated an absence of traumatic injuries or foreign bodies. In the opinion of the authors, the case in question involved a diagnostic error: the doctors failed to detect the presence of foreign bodies in the head. The organizational error involved the failure of radiogram evaluation performed by a radiologist.

  19. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    Science.gov (United States)

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  20. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^(-(dn-1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
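For context, the Pauli-model baseline that this abstract compares against is easy to state: if each of the d qubits of a repetition code flips independently with probability p = sin²(ε/2), a logical error requires a majority of flips. A minimal sketch, assuming a single round of perfect error correction (the coherent-channel analysis itself is the paper's contribution and is not reproduced here):

```python
from math import comb, sin

def pauli_logical_error(eps, d):
    """Logical flip probability of a distance-d repetition code under the
    Pauli approximation: each qubit flips independently with p = sin^2(eps/2),
    and a logical error requires a majority, (d+1)//2 or more, of flips."""
    p = sin(eps / 2) ** 2
    t = (d + 1) // 2
    return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(t, d + 1))

# Larger distance suppresses the (Pauli-approximated) logical error rate.
for d in (3, 5, 7):
    print(d, pauli_logical_error(eps=0.1, d=d))
```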

  1. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar

    2012-12-29

In this paper, we present an optimal resource allocation (ORA) scheme for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived under three sets of constraints on the system: both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with the direct link only and with uniform power allocation (UPA) in terms of minimizing the SER for all three cases. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).

  2. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  3. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery

    International Nuclear Information System (INIS)

    Rottmann, Joerg; Berbeco, Ross; Keall, Paul

    2013-01-01

Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.

  4. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery

    Energy Technology Data Exchange (ETDEWEB)

Rottmann, Joerg; Berbeco, Ross [Brigham and Women's Hospital, Dana Farber-Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States)]; Keall, Paul [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney NSW 2006 (Australia)]

    2013-09-15

Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.

  5. Error rate on the director's task is influenced by the need to take another's perspective but not the type of perspective.

    Science.gov (United States)

    Legg, Edward W; Olivier, Laure; Samuel, Steven; Lurz, Robert; Clayton, Nicola S

    2017-08-01

    Adults are prone to responding erroneously to another's instructions based on what they themselves see and not what the other person sees. Previous studies have indicated that in instruction-following tasks participants make more errors when required to infer another's perspective than when following a rule. These inference-induced errors may occur because the inference process itself is error-prone or because they are a side effect of the inference process. Crucially, if the inference process is error-prone, then higher error rates should be found when the perspective to be inferred is more complex. Here, we found that participants were no more error-prone when they had to judge how an item appeared (Level 2 perspective-taking) than when they had to judge whether an item could or could not be seen (Level 1 perspective-taking). However, participants were more error-prone in the perspective-taking variants of the task than in a version that only required them to follow a rule. These results suggest that having to represent another's perspective induces errors when following their instructions but that error rates are not directly linked to errors in inferring another's perspective.

  6. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single-input single-output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for a generalized fading distribution, such as the extended generalized-k distribution. Later, simpler expressions of these error rates are deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
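As a numerical cross-check of such closed-form error rates, the SER of a simple modulation under additive symmetric α-stable noise can be estimated by Monte Carlo. A sketch using SciPy's `levy_stable`; the choice of BPSK, the scale, and the sample count are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

def bpsk_ser_alpha_stable(alpha, scale, n=20000):
    """Monte Carlo symbol error rate of BPSK (+1/-1 symbols) under additive
    symmetric alpha-stable noise (beta = 0). By symmetry it suffices to
    transmit +1 and count receptions below the zero threshold."""
    noise = levy_stable.rvs(alpha, 0.0, scale=scale, size=n, random_state=rng)
    return float(np.mean(1.0 + noise < 0.0))

# Heavier tails (smaller alpha) raise the error floor at a fixed scale.
print(bpsk_ser_alpha_stable(alpha=1.5, scale=0.3))
print(bpsk_ser_alpha_stable(alpha=2.0, scale=0.3))  # alpha = 2 is Gaussian
```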

  7. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
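The abstract does not spell out the test statistic, but a standard sphericity test built on the determinant of the sample correlation matrix is Bartlett's. A sketch of the Monte Carlo type one error estimate under a spherical normal population; Bartlett's chi-squared approximation is an assumption here, not necessarily the exact procedure Reddon et al. evaluated:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def bartlett_sphericity_p(x):
    """p-value of Bartlett's test of sphericity, based on the determinant of
    the sample correlation matrix R:
    -((n-1) - (2p+5)/6) * ln(det R)  ~  chi2 with p(p-1)/2 df under H0."""
    n, p = x.shape
    r = np.corrcoef(x, rowvar=False)
    stat = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(r))
    return chi2.sf(stat, p * (p - 1) // 2)

# Monte Carlo type I error rate under a spherical (identity-covariance)
# multivariate normal population, at a nominal alpha of 0.05.
reps, n, p = 2000, 50, 4
rejections = sum(
    bartlett_sphericity_p(rng.standard_normal((n, p))) < 0.05 for _ in range(reps)
)
print(rejections / reps)  # should sit near the nominal 0.05
```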

  8. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    Science.gov (United States)

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in the mortality rate of patients and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical errors reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data collection was done through the Goldstone 2001 revised questionnaire, and SPSS 11.5 software was used for data analysis. Descriptive statistics (means, standard deviations, and relative frequency distributions) were calculated and the results presented as tables and charts; the chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average of medical errors was related to employees with three to four years of work experience, while the lowest was related to those with one to two years of work experience. The highest average of medical errors occurred during the evening shift, the lowest during the night shift. Three main causes of medical errors were considered: illegible physician prescription orders, similarity of names of different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoints of nurses and midwives are illegible physician's orders, drug name similarity with other drugs, nurse fatigue and damaged label or packaging of the drug, respectively. Head nurse feedback, peer

  9. Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.

    Science.gov (United States)

    Felt, Wyatt; Chin, Khai Yi; Remy, C David

    2017-09-01

This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.

  10. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  11. Development of a novel neutron detector for imaging and analysis

    International Nuclear Information System (INIS)

    Darambara, D.G.; Beach, A.C.; Spyrou, N.M.

    1993-01-01

A hardware system employing dynamic random access memory (dRAM) has been designed to make possible the detection of neutrons. One recognised difficulty with dynamic memory devices is the alpha-particle problem: alpha-particle 'contamination' present within the dRAM encapsulating material may interact sufficiently as to corrupt stored data. These corruptions, known as 'soft errors', may be induced in dRAMs by the interaction of charged particles with the chip itself, and this is used as the basis for system function. A preliminary feasibility study has been carried out to use dynamic RAMs as alpha-particle detectors. The initial system tests provide information upon detection efficiency, the soft error reading rate, the energy dependence of the soft error rate, and the soft error/operating bias relationship. These findings highlight the usefulness of such a device in neutron dosimetry, imaging and analysis, by using a neutron converter with a high cross section for the (n, α) capture reaction. (author) 20 refs.; 8 figs

  12. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
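The exponential decline is easy to see in the simplest case: a one-sided z-test, where the type II error is β(n) = Φ(z₁₋α − δ√n) and the Gaussian tail drives β down roughly like exp(−nδ²/2). A stdlib-only sketch; the effect size δ = 0.3 and α = 0.05 are illustrative choices:

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
z_alpha = nd.inv_cdf(0.95)  # critical value of a one-sided z-test, alpha = 0.05
delta = 0.3                 # true effect size in standard-deviation units

# Type II error of the one-sided z-test: beta(n) = Phi(z_alpha - delta*sqrt(n)).
# The Gaussian tail makes beta(n) fall off roughly like exp(-n * delta**2 / 2).
for n in (25, 50, 100, 200):
    beta = nd.cdf(z_alpha - delta * sqrt(n))
    print(n, f"{beta:.3e}")
```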

  13. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

Full Text Available The minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) in relation to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing an RF link with on-off keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR for a number of users (receivers).

  14. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection...

  15. Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)

    Science.gov (United States)

    Maulia, Eva; Miftahuddin; Sofyan, Hizir

    2018-05-01

A country has some important parameters for achieving economic welfare, such as tax revenues and inflation. One of the largest revenues of the state budget in Indonesia comes from the tax sector, and the rate of inflation occurring in a country can be used as one measure of the economic problems that the country is facing. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the relationship between, and to forecast, tax revenue and the inflation rate. VECM (Vector Error Correction Model) was chosen as the method used in this research because the data are multivariate time series. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate from it. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is the VECM with 3rd optimal lag, VECM(3). Of the seven models formed, the significant model is the income tax revenue model. The predicted tax revenue and inflation rate in Kota Banda Aceh for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have a minimum error value compared to the other models.
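The error-correction mechanism at the heart of a VECM can be sketched on simulated data. The series below are a synthetic stand-in, not the Banda Aceh data, and the cointegrating vector is known only because we simulate it; a full VECM(3) fit with lag selection would typically use a library such as statsmodels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cointegrated pair as a stand-in for the two series:
# x is a random walk and y tracks 2*x up to stationary noise.
T = 400
x = np.cumsum(rng.standard_normal(T))
y = 2.0 * x + rng.standard_normal(T)

# Single-equation error-correction regression (the building block of a VECM):
#   dy_t = a + gamma * ect_{t-1} + b * dx_t + e_t,
# where ect = y - 2x is the equilibrium error (the coefficient 2 is known
# here only because we simulated the data; a VECM estimates it).
dy, dx = np.diff(y), np.diff(x)
ect = (y - 2.0 * x)[:-1]
X = np.column_stack([np.ones_like(ect), ect, dx])
coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
print(coef)  # coef[1] (gamma) is negative: deviations get corrected
```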

  16. Iterative Soft Decision Interference Cancellation for DS-CDMA Employing the Distribution of Interference

    Directory of Open Access Journals (Sweden)

Gerstacker, Wolfgang H.

    2010-01-01

Full Text Available A well-known receiver strategy for direct-sequence code-division multiple-access (DS-CDMA) transmission is iterative soft decision interference cancellation. For calculation of the soft estimates used for cancellation, the distribution of residual interference is commonly assumed to be Gaussian. In this paper, we analyze matched filter-based iterative soft decision interference cancellation (MF ISDIC) when utilizing an approximation of the actual probability density function (pdf) of the residual interference. In addition, a hybrid scheme is proposed, which reduces computational complexity by considering the strongest residual interferers according to their pdf while the Gaussian assumption is applied to the weak residual interferers. It turns out that the bit error ratio decreases noticeably even when only a small number of residual interferers is regarded according to their pdf. For the considered DS-CDMA transmission the bit error ratio decreases by 80% at high signal-to-noise ratios when modeling all residual interferers but the strongest three as Gaussian distributed.

  17. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  18. Soft x-ray virtual diagnostics for tokamak simulations

    Science.gov (United States)

    Kim, J. S.; Zhao, L.; Bogatu, I. N.; In, Y.; Turnbull, A.; Osborne, T.; Maraschek, M.; Comer, K.

    2009-11-01

The numerical toolset, FAR-TECH Virtual Diagnostic Utility, for generating virtual experimental data based on theoretical models and comparing them with experimental data, has been developed for soft x-ray diagnostics on DIII-D. The virtual (or synthetic) soft x-ray signals for a sample DIII-D discharge are compared with the experimental data. The plasma density and temperature radial profiles needed in the soft x-ray signal modeling are obtained from experimental data, i.e., from Thomson scattering and electron cyclotron emission. The virtual soft x-ray diagnostics for the equilibria are in good agreement with the experimental data. The virtual diagnostics based on an ideal linear instability also agree reasonably well with the experimental data. The agreement is good enough to justify the methodology presented here for utilizing virtual diagnostics for routine comparison with experimental data. It also motivates further detailed simulations with improved physical models, such as nonideal magnetohydrodynamic contributions (resistivity, viscosity, nonaxisymmetric error fields, etc.) and other nonlinear effects, which can be tested by virtual diagnostics with various stability models.

  19. Soft x-ray virtual diagnostics for tokamak simulations

    International Nuclear Information System (INIS)

    Kim, J. S.; Zhao, L.; Bogatu, I. N.; In, Y.; Turnbull, A.; Osborne, T.; Maraschek, M.; Comer, K.

    2009-01-01

The numerical toolset, FAR-TECH Virtual Diagnostic Utility, for generating virtual experimental data based on theoretical models and comparing them with experimental data, has been developed for soft x-ray diagnostics on DIII-D. The virtual (or synthetic) soft x-ray signals for a sample DIII-D discharge are compared with the experimental data. The plasma density and temperature radial profiles needed in the soft x-ray signal modeling are obtained from experimental data, i.e., from Thomson scattering and electron cyclotron emission. The virtual soft x-ray diagnostics for the equilibria are in good agreement with the experimental data. The virtual diagnostics based on an ideal linear instability also agree reasonably well with the experimental data. The agreement is good enough to justify the methodology presented here for utilizing virtual diagnostics for routine comparison with experimental data. It also motivates further detailed simulations with improved physical models, such as nonideal magnetohydrodynamic contributions (resistivity, viscosity, nonaxisymmetric error fields, etc.) and other nonlinear effects, which can be tested by virtual diagnostics with various stability models.

  20. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design along with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
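The singleton-ignoring idea translates directly into a variant of Watterson's estimator: since the expected number of true singletons is θ, dropping them costs exactly one unit of the harmonic normalizer. A minimal sketch; the site-frequency-spectrum input and the n = 10 numbers are hypothetical:

```python
def watterson_theta(num_sites_by_count, n):
    """Watterson's estimator theta_W = S / a_n with a_n = sum_{i=1}^{n-1} 1/i,
    and the singleton-free variant (S - singletons) / (a_n - 1), which
    discards random sequencing errors (distributed as singletons) at the
    cost of one unit of a_n.

    num_sites_by_count[k] = number of polymorphic sites whose derived base
    appears in exactly k of the n sampled sequences (unfolded spectrum)."""
    a_n = sum(1.0 / i for i in range(1, n))
    s = sum(num_sites_by_count.values())
    singletons = num_sites_by_count.get(1, 0)
    return s / a_n, (s - singletons) / (a_n - 1.0)

# Hypothetical spectrum for n = 10 sequences: 12 singletons (some of which
# may be sequencing errors), plus sites at higher counts.
full, no_singletons = watterson_theta({1: 12, 2: 8, 3: 3, 4: 2}, n=10)
print(round(full, 3), round(no_singletons, 3))
```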

  1. A HIGH REPETITION RATE VUV-SOFT X-RAY FEL CONCEPT

    International Nuclear Information System (INIS)

    Corlett, J.; Byrd, J.; Fawley, W.M.; Gullans, M.; Li, D.; Lidia, S.M.; Padmore, H.; Penn, G.; Pogorelov, I.; Qiang, J.; Robin, D.; Sannibale, F.; Staples, J.W.; Steier, C.; Venturini, M.; Virostek, S.; Wan, W.; Wells, R.; Wilcox, R.; Wurtele, J.; Zholents, A.

    2007-01-01

    We report on design studies for a seeded FEL light source that is responsive to the scientific needs of the future. The FEL process increases radiation flux by several orders of magnitude above existing incoherent sources, and offers the additional enhancements attainable by optical manipulations of the electron beam: control of the temporal duration and bandwidth of the coherent output, reduced gain length in the FEL, utilization of harmonics to attain shorter wavelengths, and precise synchronization of the x-ray pulse with seed laser systems. We describe an FEL facility concept based on a high repetition rate RF photocathode gun that would allow simultaneous operation of multiple independent FELs, each producing high average brightness, tunable over the VUV-soft x-ray range, and each with individual performance characteristics determined by the configuration of the FEL. SASE, enhanced-SASE (ESASE), seeded, harmonic generation, and other configurations making use of optical manipulations of the electron beam may be employed, providing a wide range of photon beam properties to meet varied user demands.

  2. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome-Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rate was adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.

  3. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function

  4. Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software

    Science.gov (United States)

    Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg

    2017-09-01

    100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to the utmost. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra-high data rates of 100 Gbit/s and beyond. Furthermore, we present an ultra-low-power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.

  5. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods.

  6. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717
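
    A minimal sketch of how an error baseline can be measured when the template sequence is known exactly (as with the in vitro transcribed RNA above): every non-reference base observed at a site is counted as an error introduced by sample preparation or sequencing. The pileup counts below are invented for illustration; this is not the study's pipeline.

    ```python
    def error_rate(pileups, reference):
        """pileups: one dict of base->count per site; reference: template string.
        Returns the fraction of observed bases that disagree with the template."""
        errors = total = 0
        for counts, ref_base in zip(pileups, reference):
            site_total = sum(counts.values())
            errors += site_total - counts.get(ref_base, 0)
            total += site_total
        return errors / total

    # Hypothetical per-site base counts for a 3-nt template "ACT".
    pileups = [{"A": 998, "G": 2}, {"C": 1000}, {"T": 995, "A": 3, "C": 2}]
    print(error_rate(pileups, "ACT"))   # 7 errors among 3000 observed bases
    ```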

  7. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
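
    A sketch of the rank-based inverse normal transformation mentioned above (one of the transformations applied to the gamma trait). The offset c = 0.5 is an assumption on our part; published analyses also use other variants such as c = 3/8 (Blom).

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def rank_inverse_normal(x: np.ndarray, c: float = 0.5) -> np.ndarray:
        """Map values to standard-normal quantiles: Phi^{-1}((rank - c) / (n - 2c + 1))."""
        r = rankdata(x)                    # average ranks for ties
        n = len(x)
        return norm.ppf((r - c) / (n - 2.0 * c + 1.0))

    rng = np.random.default_rng(0)
    skewed = rng.gamma(shape=1.0, scale=2.0, size=1000)  # right-skewed "trait"
    z = rank_inverse_normal(skewed)
    print(z.mean(), z.std())   # approximately 0 and 1 after transformation
    ```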

  8. Scintillator power meter applied on Z-pinch plasma soft X-ray yield measurement

    International Nuclear Information System (INIS)

    Zhang Siqun; Huang Xianbin; Li Jing; Dan Jiakun; Li Jun; Yang Libing; Cui Mingqi; Zhao Yidong

    2010-01-01

    This paper presents the configuration and measurement parameters of a scintillator power meter applied to Z-pinch plasma soft X-ray yield measurement on the Yang accelerator. It also describes the calibration experiment at BSRF and analyzes, from the calibration results, the power meter's defects, the possible errors, and a feasible method for correcting them. The measured results are revised according to the spectrum acquired from a Dante spectrometer. The discrepancy between the two instruments is thereby reduced from over 30% to under 15%. Finally, the yield measurement of the puff Z-pinch X-ray radiation is reported as well: hundreds of joules, at multi-gigawatt levels, of soft X-ray radiation were produced by the puff Z-pinch on the Yang accelerator. (authors)

  9. A Novel Nonlinear Parameter Estimation Method of Soft Tissues

    Directory of Open Access Journals (Sweden)

    Qianqian Tong

    2017-12-01

    Full Text Available The elastic parameters of soft tissues are important for medical diagnosis and virtual surgery simulation. In this study, we propose a novel nonlinear parameter estimation method for soft tissues. Firstly, an in-house data acquisition platform was used to obtain external forces and their corresponding deformation values. To provide highly precise data for estimating nonlinear parameters, the measured forces were corrected using the constructed weighted combination forecasting model based on a support vector machine (WCFM_SVM. Secondly, a tetrahedral finite element parameter estimation model was established to describe the physical characteristics of soft tissues, using the substitution parameters of Young’s modulus and Poisson’s ratio to avoid solving complicated nonlinear problems. To improve the robustness of our model and avoid poor local minima, the initial parameters solved by a linear finite element model were introduced into the parameter estimation model. Finally, a self-adapting Levenberg–Marquardt (LM algorithm was presented, which is capable of adaptively adjusting iterative parameters to solve the established parameter estimation model. The maximum absolute error of our WCFM_SVM model was less than 0.03 Newton, resulting in more accurate forces in comparison with other correction models tested. The maximum absolute error between the calculated and measured nodal displacements was less than 1.5 mm, demonstrating that our nonlinear parameters are precise.
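
    An illustrative sketch of the Levenberg-Marquardt fitting step described above, with a toy force-displacement model f(x) = a·x + b·x³ standing in for the paper's tetrahedral finite element model (far too heavy for a snippet). The data, model, and parameter values are invented; only the workflow (seed from a rough linear estimate, then refine with LM) mirrors the abstract.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, disp, force):
        a, b = params
        return a * disp + b * disp**3 - force

    rng = np.random.default_rng(42)
    disp = np.linspace(0.0, 1.0, 50)                       # measured deformations
    force = 2.0 * disp + 5.0 * disp**3 + rng.normal(0, 0.01, disp.size)

    # Seed the nonlinear fit with a rough linear estimate, mirroring the paper's
    # use of a linear FE solution as the initial guess; method="lm" selects the
    # Levenberg-Marquardt algorithm.
    init = [float(np.polyfit(disp, force, 1)[0]), 0.0]
    fit = least_squares(residuals, x0=init, args=(disp, force), method="lm")
    print(fit.x)   # recovered (a, b), close to (2.0, 5.0)
    ```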

  10. Performance analysis for the bit-error rate of SAC-OCDMA systems

    Science.gov (United States)

    Feng, Gang; Cheng, Wenqing; Chen, Fujun

    2015-09-01

    Under the low-power assumption, Gaussian statistics obtained by invoking the central limit theorem are feasible for predicting the upper bound in the spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) system. However, this approach severely underestimates the bit-error rate (BER) performance of the system under the high-power assumption. Fortunately, the exact negative binomial (NB) model is a perfect replacement for the Gaussian model in the prediction and evaluation. Based on NB statistics, a more accurate closed-form expression is analyzed and derived for the SAC-OCDMA system. The experiment shows that the obtained expression provides a more precise prediction of the BER performance under both the low- and high-power assumptions.

  11. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    Science.gov (United States)

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with the more common forms of CRS and booster seat misuse, which could be applied with observed error frequency (for example, in car seat clinics or during prototype user testing) to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation
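
    The risk priority number (RPN) above composes expert injury-risk severity (ESS) with observed installation error frequency. The multiplicative, FMEA-style combination below and all the numbers in it are illustrative assumptions on our part, not the study's actual ratings.

    ```python
    def risk_priority(ess: float, error_frequency: float) -> float:
        """RPN = error severity score x observed frequency of that misuse mode."""
        return ess * error_frequency

    # Hypothetical misuse modes with (ESS rating, observed frequency) pairs.
    misuse_modes = {
        "loose harness": (8.0, 0.35),
        "wrong belt path": (7.0, 0.20),
        "unlocked base": (9.0, 0.05),
    }
    # Rank misuse modes by composite risk, highest first.
    ranked = sorted(misuse_modes.items(),
                    key=lambda kv: risk_priority(*kv[1]), reverse=True)
    for name, (ess, freq) in ranked:
        print(f"{name}: RPN = {risk_priority(ess, freq):.2f}")
    ```

    Note how a severe but rare error ("unlocked base") can rank below a milder but frequent one, which is exactly the prioritization the RPN is meant to expose.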

  12. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. It is, however, proven in (Stat Med 23:1023-1038, 2004) that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
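
    A hedged sketch of the interim quantity discussed above: conditional power for a one-sided normal test under the "current trend" assumption. The formula follows standard group-sequential theory (Brownian-motion representation B(t) = Z(t)·√t) and is an illustration, not the paper's exact criterion.

    ```python
    from scipy.stats import norm

    def conditional_power(z_interim: float, info_frac: float,
                          alpha: float = 0.025) -> float:
        """P(reject at the final analysis | interim Z), with the drift estimated
        from the interim data:
        CP = Phi((Z_t / sqrt(t) - z_{1-alpha}) / sqrt(1 - t))."""
        z_crit = norm.ppf(1.0 - alpha)
        t = info_frac
        return float(norm.cdf((z_interim / t**0.5 - z_crit) / (1.0 - t) ** 0.5))

    # The "promising" boundary discussed above: CP equals 50% exactly when
    # z_interim = z_{1-alpha} * sqrt(t), i.e. the current trend just reaches
    # the final critical value.
    print(conditional_power(z_interim=1.5, info_frac=0.5))
    ```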

  13. Soft x-ray source by laser produced Xe plasma

    International Nuclear Information System (INIS)

    Amano, Sho; Masuda, Kazuya; Miyamoto, Shuji; Mochizuki, Takayasu

    2010-01-01

    A laser plasma soft X-ray source in the wavelength range of 5-17 nm was developed, consisting of a rotating drum system supplying a cryogenic Xe target and a high repetition rate pulsed Nd:YAG slab laser. We found a maximum conversion efficiency of 30% and demonstrated soft X-ray generation at a high repetition rate of 320 pps and a high average power of 20 W. A soft X-ray cylindrical mirror was developed and successfully focused the soft X-rays to an energy intensity of 1.3 mJ/cm². We also succeeded in mitigating plasma debris with Ar gas. This will allow a long mirror lifetime and a focused power intensity of 400 mW/cm² at 320 pps. The high power soft X-rays are useful for various applications. (author)

  14. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza; Alouini, Mohamed-Slim

    2015-01-01

    The Laplacian noise has received much attention in recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions of the conditional and the average probability of error are obtained in terms of the Fox’s H function. Simplifications for some special cases of fading are presented and the resulting formulas end up being often expressed in terms of well-known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.

  15. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Souri, Hamza

    2015-06-01

    The Laplacian noise has received much attention in recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions of the conditional and the average probability of error are obtained in terms of the Fox’s H function. Simplifications for some special cases of fading are presented and the resulting formulas end up being often expressed in terms of well-known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.
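
    A Monte Carlo cross-check of the setup above: M-PSK symbol error rate under additive Laplacian noise with minimum-distance detection. The fading is omitted (a pure additive-noise channel), so this is a simplified sanity sketch rather than the paper's EGK-averaged closed form; note also that minimum distance is not the ML detector for Laplacian noise, which is one reason the paper treats both detectors.

    ```python
    import numpy as np

    def mpsk_ser_laplacian(M: int = 4, snr_db: float = 12.0,
                           n_sym: int = 200_000, seed: int = 1) -> float:
        rng = np.random.default_rng(seed)
        sym_idx = rng.integers(0, M, n_sym)
        tx = np.exp(2j * np.pi * sym_idx / M)      # unit-energy PSK symbols
        n0 = 10.0 ** (-snr_db / 10.0)              # total noise power
        b = np.sqrt(n0 / 4.0)                      # Laplace(b) has variance 2*b^2 per part
        noise = rng.laplace(0.0, b, n_sym) + 1j * rng.laplace(0.0, b, n_sym)
        # Minimum-distance detection for PSK reduces to the nearest constellation phase.
        det = np.round(np.angle(tx + noise) * M / (2.0 * np.pi)) % M
        return float(np.mean(det != sym_idx))

    print(mpsk_ser_laplacian(M=4, snr_db=12.0))
    ```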

  16. Soft shoulders ahead: spurious signatures of soft and partial selective sweeps result from linked hard sweeps.

    Science.gov (United States)

    Schrider, Daniel R; Mendes, Fábio K; Hahn, Matthew W; Kern, Andrew D

    2015-05-01

    Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation, in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps.

  17. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.

    Science.gov (United States)

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-09-01

    To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated, with the mean tracking error quantified in the phantom study for markerless estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
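
    A toy illustration of the markerless localization step: find a soft-tissue template patch in a new 2D frame by normalized cross-correlation, a classic building block for this kind of tracking. Real BEV images are far noisier and the paper's algorithm is more sophisticated; the data here are synthetic and the brute-force search is for clarity, not speed.

    ```python
    import numpy as np

    def track(frame: np.ndarray, template: np.ndarray):
        """Return the (row, col) of the best normalized cross-correlation match."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-12)
        best, best_pos = -np.inf, (0, 0)
        for y in range(frame.shape[0] - th + 1):
            for x in range(frame.shape[1] - tw + 1):
                patch = frame[y:y + th, x:x + tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-12)
                score = (p * t).mean()          # correlation of normalized patches
                if score > best:
                    best, best_pos = score, (y, x)
        return best_pos

    rng = np.random.default_rng(0)
    frame = rng.normal(0.0, 1.0, (40, 40))      # synthetic "BEV frame"
    template = frame[12:20, 25:33].copy()       # "tumor" patch cut from the frame
    print(track(frame, template))               # recovers the patch position (12, 25)
    ```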

  18. Local Control Rates of Metastatic Renal Cell Carcinoma (RCC) to Thoracic, Abdominal, and Soft Tissue Lesions Using Stereotactic Body Radiotherapy (SBRT)

    International Nuclear Information System (INIS)

    Altoos, Basel; Amini, Arya; Yacoub, Muthanna; Bourlon, Maria T.; Kessler, Elizabeth E.; Flaig, Thomas W.; Fisher, Christine M.; Kavanagh, Brian D.; Lam, Elaine T.; Karam, Sana D.

    2015-01-01

    We report the radiographic response rate of SBRT compared to conventional fractionated radiotherapy (CF-EBRT) for thoracic, abdominal, skin and soft tissue RCC lesions treated at our institution. Fifty-three lesions were included in the study (36 SBRT, 17 CF-EBRT), treated from 2004 to 2014. We included patients who had thoracic, skin & soft tissue (SST), and abdominal metastases of histologically confirmed RCC. The most common SBRT fractionation was 50 Gy in 5 fractions. The median follow-up was 16 months (range 3–97 months). Median BED was 216.67 (range 66.67–460.0) for SBRT and 60 (range 46.67–100.83) for CF-EBRT. Median radiographic local control rates at 12, 24, and 36 months were 100, 93.41, and 93.41 % for lesions treated with SBRT versus 62.02, 35.27 and 35.27 % for those treated with CF-EBRT (p < 0.001). Predictive factors for radiographic local control under univariate analysis included BED ≥ 100 Gy (HR, 0.048; 95 % CI, 0.006–0.382; p = 0.005), dose per fraction ≥ 9 Gy (HR, 0.631; 95 % CI, 0.429–0.931; p = 0.021), and gender (HR, 0.254; 95 % CI, 0.066–0.978; p = 0.048). Under multivariate analysis, there were no significant predictors of local control. Toxicity rates were low and equivalent in both groups, with no grade 4 or 5 side effects reported. SBRT is safe and effective for the treatment of RCC metastases to thoracic, abdominal and integumentary soft tissues. Radiographic response rates were greater and more durable with SBRT compared to CF-EBRT. Further prospective trials are needed to evaluate the efficacy and safety of SBRT for RCC metastases.
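
    The BED values quoted above are consistent with the standard linear-quadratic formula BED = D(1 + d/(α/β)), using α/β = 3 Gy (an assumption on our part, though it reproduces the reported median of 216.67 for the most common SBRT scheme of 50 Gy in 5 fractions).

    ```python
    def bed(total_dose_gy: float, n_fractions: int, alpha_beta_gy: float) -> float:
        """Biologically effective dose: BED = D * (1 + d / (alpha/beta))."""
        d = total_dose_gy / n_fractions        # dose per fraction
        return total_dose_gy * (1.0 + d / alpha_beta_gy)

    # 50 Gy in 5 fractions (10 Gy/fraction), alpha/beta = 3 Gy:
    print(round(bed(50, 5, 3.0), 2))   # 216.67, matching the reported median BED
    ```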

  19. Soft-Decision-Data Reshuffle to Mitigate Pulsed Radio Frequency Interference Impact on Low-Density-Parity-Check Code Performance

    Science.gov (United States)

    Ni, Jianjun David

    2011-01-01

    This presentation briefly discusses a research effort on techniques for mitigating pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle that might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS); it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor in the LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.

  20. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

    In this letter, a new detector is proposed for amplify-and-forward (AF) relaying systems communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses the Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
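
    A sketch of the kernel-density idea above: a generalized Gaussian kernel K(u) = β/(2αΓ(1/β))·exp(−|u/α|^β), which is Gaussian for β = 2 and approaches a uniform kernel as β grows. The window width and data below are illustrative, not the letter's optimal values, and the density being estimated is a stand-in for the received-signal statistics.

    ```python
    import numpy as np
    from scipy.special import gamma as gamma_fn

    def gg_kernel(u: np.ndarray, alpha: float = 1.0, beta: float = 2.0) -> np.ndarray:
        """Generalized Gaussian kernel, normalized to integrate to one."""
        norm_const = beta / (2.0 * alpha * gamma_fn(1.0 / beta))
        return norm_const * np.exp(-np.abs(u / alpha) ** beta)

    def kde(x_eval: np.ndarray, samples: np.ndarray, h: float,
            beta: float = 2.0) -> np.ndarray:
        """Kernel density estimate with window width h."""
        u = (x_eval[:, None] - samples[None, :]) / h
        return gg_kernel(u, beta=beta).mean(axis=1) / h

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, 2000)          # stand-in for received samples
    print(kde(np.linspace(-3, 3, 7), data, h=0.3, beta=2.0))
    ```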

  1. Spectrometry with high count rate for the study of the soft X-rays. Application for the plasma of WEGA

    International Nuclear Information System (INIS)

    Brouquet, P.

    1979-04-01

    The plasma of the WEGA torus, whose electron temperature varies between 0.5 and 1 keV, emits electromagnetic radiation extending to wavelengths of the order of 1 Å. Different improvements performed on a semiconductor spectrometer have permitted the study of this emission in the soft X-ray region (1 keV - 30 keV) at a count rate of 3 × 10⁵ counts/s with an energy resolution of 350 eV. For each plasma shot, this diagnostic gives 4 measurements of the plasma electron temperature and of the effective charge, Zeff, with a time resolution of 5 ms. The values of the electron temperature and of the effective charge derived from the study of soft X-rays are in agreement with those given by other diagnostic methods. (fr)

  2. Efficient Error Detection in Soft Data Fusion for Cooperative Spectrum Sensing

    KAUST Repository

    Saqib Bhatti, Dost Muhammad

    2018-03-18

    The primary objective of cooperative spectrum sensing (CSS) is to determine whether a particular spectrum is occupied by a licensed user, so that unlicensed users, called secondary users (SUs), can utilize the spectrum when it is free. For CSS, all SUs report their sensing information through a reporting channel to the central base station, called the fusion center (FC). During transmission, some of the SUs are subjected to fading and shadowing, which degrades the overall performance of CSS. We propose an algorithm that applies an error detection technique to the sensing measurements of all SUs. Each SU is required to re-transmit its sensing data to the FC if an error is detected in it. Our proposed algorithm combines the sensing measurements of a limited number of SUs. Using the proposed algorithm, we achieve improved probability of detection (PD) and throughput. The simulation results compare the proposed algorithm with the conventional scheme.

  3. In vitro measurement of CT density and estimation of stenosis related to coronary soft plaque at 100 kV and 120 kV on ECG-triggered scan

    Energy Technology Data Exchange (ETDEWEB)

    Horiguchi, Jun, E-mail: horiguch@hiroshima-u.ac.jp [Department of Clinical Radiology, Hiroshima University Hospital, 1-2-3, Kasumi-cho, Minami-ku, Hiroshima 734-8551 (Japan); Fujioka, Chikako, E-mail: fujioka@hiroshima-u.ac.jp [Department of Clinical Radiology, Hiroshima University Hospital, 1-2-3, Kasumi-cho, Minami-ku, Hiroshima 734-8551 (Japan); Kiguchi, Masao, E-mail: kiguchi@hiroshima-u.ac.jp [Department of Clinical Radiology, Hiroshima University Hospital, 1-2-3, Kasumi-cho, Minami-ku, Hiroshima 734-8551 (Japan); Yamamoto, Hideya, E-mail: hideyayama@hiroshima-u.ac.jp [Department of Cardiovascular Medicine, Hiroshima University Graduate School of Biomedical Sciences and Hiroshima University Hospital, 1-2-3, Kasumi-cho, Minami-ku, Hiroshima 734-8551 (Japan); Shen, Yun, E-mail: Yuna.Shen@ge.com [CT Lab of Great China, GE Healthcare, L12 and L15, Office Tower, Langham Place, 8 Argyle Street, Mongkok Kowloon (Hong Kong); Kihara, Yasuki, E-mail: ykihara@hiroshima-u.ac.jp [Department of Cardiovascular Medicine, Hiroshima University Graduate School of Biomedical Sciences and Hiroshima University Hospital, 1-2-3, Kasumi-cho, Minami-ku, Hiroshima 734-8551 (Japan)

    2011-02-15

Purpose: The purpose of the study was to compare 100 kV and 120 kV prospective electrocardiograph (ECG)-triggered axial coronary 64-detector CT angiography (64-MDCTA) in soft plaque diagnosis. Materials and methods: Coronary artery models (n = 5) with artificial soft plaques (-32 HU to 53 HU at 120 kV) with three stenosis levels (25%, 50% and 75%) on a cardiac phantom (mimicking a slim patient's environment) were scanned at heart rates of 55, 60 and 65 beats per minute (bpm). Four kinds of intracoronary enhancement (205 HU, 241 HU, 280 HU and 314 HU) were simulated. The soft plaque density and the measurement error of stenosis (in percentage), evaluated by two independent observers, were compared between 100 kV and 120 kV. The radiation dose was estimated. Results: Interobserver correlation of the measurements was excellent (density: r = 0.95; stenosis: r = 0.97). Neither the density of soft plaque nor the measurement error of stenosis was different between 100 kV and 120 kV (p = 0.22 and 0.08, respectively). The estimated radiation doses were 2.0 mSv and 3.3 mSv (in 14 cm coverage) on 100 kV and 120 kV prospective ECG-triggered axial scans, respectively. Conclusion: The 100 kV prospective ECG-triggered coronary MDCTA has comparable performance to 120 kV coronary CTA in terms of soft plaque densitometry and measurement of stenosis, with a reduced effective dose of 2 mSv.

  4. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
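
    The regression step described above can be sketched on synthetic data. The variable names, rates, and Poisson error model below are invented stand-ins for illustration, not the mission's actual data or fitted model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the kind of data described: per-period counts of
    # files radiated, a workload score, and observed command file errors.
    # All rates here are invented for illustration, not mission figures.
    n = 200
    files = rng.integers(20, 200, size=n).astype(float)
    workload = rng.uniform(0.0, 1.0, size=n)
    true_rate = 0.01 + 0.03 * workload            # errors per file radiated
    errors = rng.poisson(true_rate * files)       # observed error counts

    # Multiple regression of the empirical error rate on workload, one of the
    # curve-fitting approaches the abstract mentions.
    rate = errors / files
    X = np.column_stack([np.ones(n), workload])
    (intercept, slope), *_ = np.linalg.lstsq(X, rate, rcond=None)
    print(f"fitted rate = {intercept:.4f} + {slope:.4f} * workload")
    ```

    A fitted line of this form is then compared against the observed rate to flag periods whose error rates exceed the theoretically expected rate.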

  5. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza

    2015-01-07

This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with numerical examples obtained by computer-based simulations [1].
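
    The Monte Carlo validation step can be sketched for the no-fading case: i.i.d. Laplacian noise on each quadrature component and a minimum-distance (nearest-neighbour) detector. This is an illustrative sketch, not the paper's closed-form analysis, and the constellation size and SNR values are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def mpsk_ser(M, snr_db, n_sym=200_000):
        """Monte Carlo symbol error rate of M-PSK with a minimum-distance
        detector in additive Laplacian noise (no fading)."""
        const = np.exp(2j * np.pi * np.arange(M) / M)   # unit-energy M-PSK
        sym = rng.integers(M, size=n_sym)
        tx = const[sym]
        # Laplacian noise on each quadrature: Laplace(0, b) has variance
        # 2*b^2 per dimension, so b = sqrt(N0/4) gives total noise power N0.
        n0 = 10 ** (-snr_db / 10)
        b = np.sqrt(n0 / 4)
        noise = rng.laplace(0, b, n_sym) + 1j * rng.laplace(0, b, n_sym)
        rx = tx + noise
        det = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)
        return np.mean(det != sym)

    for snr in (5, 10, 15):
        print(f"8-PSK, {snr} dB: SER = {mpsk_ser(8, snr):.4f}")
    ```

    Note that for Laplacian noise the Euclidean minimum-distance detector is the detector assumed by the paper, not the maximum-likelihood detector.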

  6. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2015-01-01

This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with numerical examples obtained by computer-based simulations [1].

  7. Triggering soft bombs at the LHC

    Science.gov (United States)

    Knapen, Simon; Griso, Simone Pagan; Papucci, Michele; Robinson, Dean J.

    2017-08-01

Very high multiplicity, spherically-symmetric distributions of soft particles, with p_T ~ a few × 100 MeV, may be a signature of strongly-coupled hidden valleys that exhibit long, efficient showering windows. With traditional triggers, such `soft bomb' events closely resemble pile-up and are therefore only recorded with minimum bias triggers at a very low efficiency. We demonstrate a proof-of-concept for a high-level triggering strategy that efficiently separates soft bombs from pile-up by searching for a `belt of fire': a high density band of hits on the innermost layer of the tracker. Seeding our proposed high-level trigger with existing jet, missing transverse energy or lepton hardware-level triggers, we show that net trigger efficiencies of order 10% are possible for bombs of mass several × 100 GeV. We also consider the special case that soft bombs are the result of an exotic decay of the 125 GeV Higgs. The fiducial rate for `Higgs bombs' triggered in this manner is marginally higher than the rate achievable by triggering directly on a hard muon from associated Higgs production.

  8. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

Background: In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods: In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results: We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions: The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
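
    The selection bias quantified here is easy to reproduce in miniature: with uninformative (permuted) labels every classifier has a true error of 50%, yet reporting the minimum over many variants looks far better. A sketch with invented sample sizes, using random predictions in place of real classifiers:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # 124 "classifier variants" (as in the study) evaluated on uninformative
    # labels; every variant's true error is 0.5, but the best-of-124 apparent
    # error is systematically lower. Sample size is an invented example.
    n_samples, n_classifiers, n_repeats = 60, 124, 500
    min_errors = []
    for _ in range(n_repeats):
        labels = rng.integers(2, size=n_samples)
        preds = rng.integers(2, size=(n_classifiers, n_samples))
        errors = np.mean(preds != labels, axis=1)   # apparent error per variant
        min_errors.append(errors.min())             # "present only the best"

    print(f"true error: 0.50, mean reported best-of-{n_classifiers}: "
          f"{np.mean(min_errors):.3f}")
    ```

    The reported minimum lands far below 50% purely through selection, the same mechanism behind the 31% and 41% median minimal error rates on permuted data in the study.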

  9. Determination of corrosion rate of reinforcement with a modulated guard ring electrode; analysis of errors due to lateral current distribution

    International Nuclear Information System (INIS)

    Wojtas, H.

    2004-01-01

The main source of errors in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of the corrosion rate may lead to serious errors when test conditions change. When high corrosion activity of the rebar and/or local corrosion occurs, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.

  10. Comparison of Bit Error Rate of Line Codes in NG-PON2

    Directory of Open Access Journals (Sweden)

    Tomas Horvath

    2016-05-01

This article focuses on the simulation and comparison of the line codes NRZ (Non Return to Zero), RZ (Return to Zero) and Miller's code for use in NG-PON2 (Next-Generation Passive Optical Network Stage 2). Our article provides solutions with Q-factor, BER (Bit Error Rate), and bandwidth comparison. Line codes are the most important part of communication over optical fibre; their main role is digital signal representation. NG-PON2 networks use optical fibres for communication, which is why OptSim v5.2 is used for the simulation.
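
    For context, the Q-factor such simulations report maps to BER through the standard Gaussian-noise approximation used in optical link budgets; this formula is general practice, not specific to OptSim or this article:

    ```python
    import math

    def ber_from_q(q):
        """Gaussian-noise approximation for optical receivers:
        BER = 0.5 * erfc(Q / sqrt(2))."""
        return 0.5 * math.erfc(q / math.sqrt(2))

    # The often-quoted rule of thumb: Q = 6 corresponds to BER ~ 1e-9.
    for q in (3, 6, 7):
        print(f"Q = {q}: BER ≈ {ber_from_q(q):.2e}")
    ```

    This is why Q-factor and BER curves for the different line codes carry essentially the same information under Gaussian noise.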

  11. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
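
    The qualitative dependence of circuit error rate on timing margin can be illustrated with a toy model that assumes the set-up/hold-time fluctuation is Gaussian; the margin and jitter values below are invented for illustration, not the paper's measured parameters:

    ```python
    import math

    def gate_error_prob(margin_ps, sigma_ps):
        """P(a Gaussian timing fluctuation exceeds the margin) for one gate:
        0.5 * erfc(margin / (sigma * sqrt(2))). Gaussianity is a modelling
        assumption here."""
        return 0.5 * math.erfc(margin_ps / (sigma_ps * math.sqrt(2)))

    def circuit_error_rate(n_gates, margin_ps, sigma_ps):
        """Error rate of a circuit with n independent timing-critical gates,
        e.g. a 1-million-bit shift register."""
        p = gate_error_prob(margin_ps, sigma_ps)
        return 1.0 - (1.0 - p) ** n_gates

    for margin in (4, 6, 8):    # picoseconds, illustrative values
        print(f"margin {margin} ps: circuit error rate "
              f"{circuit_error_rate(1_000_000, margin, sigma_ps=1.0):.3e}")
    ```

    The steep (roughly exponential) drop of the error rate with margin is why a modest widening of the timing margin dominates the stability of a million-gate circuit.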

  12. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  13. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
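
    The scaling mechanism can be caricatured with a delta-rule learner that divides prediction errors by a running spread estimate, so that the effective step size adapts to reward variability; the parameters below are illustrative, not fitted values from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def learn_mean(rewards, alpha=0.3):
        """Delta-rule learner whose prediction errors are rescaled by a
        running estimate of the reward spread (a crude sketch of the
        adaptive mechanism described in the abstract)."""
        v = float(rewards[0])      # initial prediction: first reward
        spread = 1.0
        for r in rewards:
            pe = r - v                             # reward prediction error
            spread += 0.1 * (abs(pe) - spread)     # running spread estimate
            v += alpha * (pe / spread)             # scaled-PE update
        return v

    # Distributions with different standard deviations, as in the task.
    results = {}
    for true_sd in (5.0, 15.0):
        rewards = rng.normal(50.0, true_sd, size=300)
        results[true_sd] = learn_mean(rewards)
        print(f"sd={true_sd:>4}: estimate {results[true_sd]:.1f} "
              f"(true mean 50.0)")
    ```

    Because updates are made in units of the estimated spread, the learner's accuracy in locating the mean stays similar when the standard deviation of rewards is tripled, mirroring the "adapters" in the study.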

  14. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    Science.gov (United States)

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.

  15. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
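
    The inflation mechanism is easy to reproduce: simulate two identical groups of skewed sum scores, remove within-group outliers at |Z| > 2, and count rejections. The sample size and the binomial score distribution below are invented stand-ins for the "short and difficult test" setting, and significance is judged against the normal critical value 1.96 as an approximation:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def welch_t(x, y):
        """Welch t statistic for two independent samples."""
        return (x.mean() - y.mean()) / np.sqrt(
            x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))

    def remove_outliers(x, z_thresh=2.0):
        """Drop observations whose within-sample Z value exceeds the threshold."""
        z = (x - x.mean()) / x.std(ddof=1)
        return x[np.abs(z) <= z_thresh]

    n, n_sims = 25, 5000
    reject_plain = reject_trimmed = 0
    for _ in range(n_sims):
        # Skewed "sum scores" from a short, difficult test; both groups are
        # identical, so every rejection is a Type I error.
        a = rng.binomial(15, 0.1, size=n).astype(float)
        b = rng.binomial(15, 0.1, size=n).astype(float)
        reject_plain += abs(welch_t(a, b)) > 1.96
        reject_trimmed += abs(welch_t(remove_outliers(a),
                                      remove_outliers(b))) > 1.96

    rate_plain = reject_plain / n_sims
    rate_trimmed = reject_trimmed / n_sims
    print(f"Type I error: no removal {rate_plain:.3f}, "
          f"Z > 2 removal {rate_trimmed:.3f}")
    ```

    Because the removal threshold is estimated from the data and the skewed distribution loses only one tail, the trimmed t test rejects true null hypotheses more often than the nominal 5%.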

  16. A dissolution-diffusion sliding model for soft rock grains with hydro-mechanical effect

    Directory of Open Access Journals (Sweden)

    Z. Liu

    2018-06-01

The deformation and failure of soft rock affected by the hydro-mechanical (HM) effect are among the main concerns in geotechnical engineering, and are basically attributed to the sliding of soft rock grains. This study develops a dissolution-diffusion sliding model for the typical red bed soft rock of South China. Based on hydration film, mineral dissolution and diffusion theory, and geochemical thermodynamics, a dissolution-diffusion sliding model with the HM effect was established to account for the sliding rate. Combined with digital image processing technology, the relationship between the grain size of soft rock and the amplitude of the sliding surface was presented. An equation for the strain rate of soft rocks under steady state was also derived. The reliability of the dissolution-diffusion sliding model was verified by triaxial creep tests on soft rock with the HM coupling effect and by the relationship between the inverted average disjoining pressure and the average thickness of the hydration film. The results showed that the sliding rate of soft rock grains is affected significantly by the waviness of the sliding surface, the shear stress, and the average thickness of the hydration film. The average grain size is essential for controlling the steady-state creep rate of soft rock. This study provides a new idea for investigating the deformation and failure of soft rock with the HM effect. Keywords: Soft rock, Hydro-mechanical (HM) effect, Mineral dissolution-diffusion, Grain sliding model

  17. Soft x-ray tomography on the Alcator C tokamak

    International Nuclear Information System (INIS)

    Camacho, J.F.

    1985-06-01

A soft x-ray tomography experiment has been performed on the Alcator C tokamak. An 80-chord array of miniature PIN photodiode detectors was used to obtain tomographic reconstructions of the poloidal cross-section of the soft x-ray emissivity function. The detectors are located around the periphery of the plasma at one toroidal location (top and bottom ports) and are capable of yielding useful information over a wide range of plasma operating parameters and conditions. The reconstruction algorithm employed makes no assumption whatsoever about plasma rotation, position, or symmetry. Its performance was tested, and it was found to work well and to be fairly insensitive to estimated levels of random and systematic errors in the data.

  18. Perception-Based Tactile Soft Keyboard for the Touchscreen of Tablets

    Directory of Open Access Journals (Sweden)

    Kwangtaek Kim

    2018-01-01

Most mobile devices equipped with touchscreens provide an on-screen soft keyboard as an input method. However, many users experience discomfort due to the lack of physical feedback, which causes slow and error-prone typing compared to a physical keyboard. To solve this problem, a platform-independent haptic soft keyboard suitable for tablet-sized touchscreens was proposed and developed, and was verified on both Android and Windows. In addition, a psychophysical experiment was conducted to find an optimal strength of key click feedback on touchscreens, and the perceptual result was applied to produce uniform tactile forces on touchscreens. The developed haptic soft keyboard can be integrated with existing tablets with minimal effort. The evaluation results confirm platform independence, fast tactile key click feedback, and uniform tactile force distribution on the touchscreen using only two piezoelectric actuators. The proposed system was developed on a commercial tablet (Mu Pad) that has dual platforms (Android and Windows).

  19. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza

    2014-06-01

This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with numerical examples obtained by computer-based simulations. © 2014 IEEE.

  20. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2014-01-01

This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with numerical examples obtained by computer-based simulations. © 2014 IEEE.

  1. Impact of catheter reconstruction error on dose distribution in high dose rate intracavitary brachytherapy and evaluation of OAR doses

    International Nuclear Information System (INIS)

    Thaper, Deepak; Shukla, Arvind; Rathore, Narendra; Oinam, Arun S.

    2016-01-01

In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this study is to evaluate the impact of catheter reconstruction error on dose distribution in CT-based intracavitary brachytherapy planning, and to evaluate its effect on organs at risk (OARs) such as the bladder, rectum and sigmoid, and on the target volume, the high-risk clinical target volume (HR-CTV).
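
    One reason reconstruction errors matter so much in HDR-B is the steep inverse-square dose falloff near the source. A point-source sketch (geometry factor only, ignoring anisotropy and radial dose functions; the distances and 3 mm shift are illustrative, not the study's values):

    ```python
    def relative_dose_error(r_mm, shift_mm):
        """Relative dose change at distance r from a point source when the
        reconstructed source position is displaced by shift_mm toward the
        point of interest (inverse-square approximation only)."""
        return (r_mm / (r_mm - shift_mm)) ** 2 - 1.0

    for r in (10.0, 20.0, 40.0):
        print(f"{r:.0f} mm from source, 3 mm reconstruction error: "
              f"{100 * relative_dose_error(r, 3.0):+.1f}% dose change")
    ```

    The same millimetre-scale catheter error therefore produces a much larger dose error at OAR points close to the source than at distant ones.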

  2. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  3. Frequency and Severity of Parenteral Nutrition Medication Errors at a Large Children's Hospital After Implementation of Electronic Ordering and Compounding.

    Science.gov (United States)

    MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery

    2016-04-01

The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential of causing harm. Three organizations--American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), American Society of Health-System Pharmacists, and National Advisory Group--have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to compare total compliance with ordering, transcription, compounding, administration, and error rate at a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft and hard stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature on error and harm rates and cost reductions to determine if our process showed lower error rates compared with national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors/84,503 PN prescriptions, or 0.27%, compared with national data showing that 74 of 4730 (1.6%) prescriptions over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process

  4. Corrosion of aluminium in soft drinks.

    Science.gov (United States)

    Seruga, M; Hasenay, D

    1996-04-01

The corrosion of aluminium (Al) in several brands of soft drinks (cola- and citrate-based drinks) has been studied using an electrochemical method, namely potentiodynamic polarization. The results show that the corrosion of Al in soft drinks is a very slow, time-dependent and complex process, strongly influenced by passivation, complexation and adsorption processes. The corrosion of Al in these drinks occurs principally due to the presence of acids: citric acid in citrate-based drinks and orthophosphoric acid in cola-based drinks. The corrosion rate of Al rose with an increase in the acidity of the soft drinks, i.e. with an increase in the content of total acids. The corrosion rates are much higher in the cola-based drinks than in the citrate-based drinks, because: (1) orthophosphoric acid is more corrosive to Al than citric acid, and (2) a quite different passive oxide layer (with different properties) is formed on Al, depending on whether the drink is cola or citrate based. The method of potentiodynamic polarization was shown to be very suitable for the study of the corrosion of Al in soft drinks, especially if combined with a non-electrochemical method, e.g. graphite furnace atomic absorption spectrometry (GFAAS).
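
    Potentiodynamic polarization data are commonly converted to a corrosion rate via the Stern-Geary relation and an ASTM G102-style penetration-rate conversion. The Tafel slopes and polarization resistance below are generic, hypothetical values for illustration, not measurements from this study:

    ```python
    def corrosion_current_density(rp_ohm_cm2, beta_a=0.12, beta_c=0.12):
        """Stern-Geary relation: i_corr = B / Rp, with
        B = beta_a * beta_c / (2.303 * (beta_a + beta_c)).
        Tafel slopes are in V/decade; generic illustrative values."""
        b = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))
        return b / rp_ohm_cm2                      # A/cm^2

    # Hypothetical polarization resistance from a potentiodynamic sweep:
    i_corr = corrosion_current_density(rp_ohm_cm2=50_000.0)
    i_corr_ua = i_corr * 1e6                       # µA/cm^2

    # Penetration rate for Al (equivalent weight 8.99 g, density 2.70 g/cm^3):
    rate_mm_per_year = 3.27e-3 * i_corr_ua * 8.99 / 2.70
    print(f"i_corr ≈ {i_corr_ua:.2f} µA/cm², "
          f"≈ {rate_mm_per_year:.4f} mm/year of Al")
    ```

    A higher total acid content lowers Rp, which raises i_corr and hence the corrosion rate, consistent with the trend reported in the abstract.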

  5. Soft Ultrathin Electronics Innervated Adaptive Fully Soft Robots.

    Science.gov (United States)

    Wang, Chengjun; Sim, Kyoseung; Chen, Jin; Kim, Hojin; Rao, Zhoulyu; Li, Yuhang; Chen, Weiqiu; Song, Jizhou; Verduzco, Rafael; Yu, Cunjiang

    2018-03-01

Soft robots outperform conventional hard robots with significantly enhanced safety, adaptability, and complex motions. The development of fully soft robots, especially robots made entirely from smart soft materials to mimic soft animals, is still nascent. In addition, to date, existing soft robots cannot sense and adapt their motion or response to the surrounding environment the way animals do. Here, fully soft robots innervated with compliant ultrathin sensing and actuating electronics, which can sense the environment and perform adaptive soft-bodied crawling mimicking an inchworm, are reported. The soft robots are constructed with actuators of open-mesh shaped ultrathin deformable heaters, sensors of single-crystal Si optoelectronic photodetectors, and a thermally responsive artificial muscle of carbon-black-doped liquid-crystal elastomer (LCE-CB) nanocomposite. The results demonstrate that adaptive crawling locomotion can be realized through the conjugation of sensing and actuation, where the sensors sense the environment and the actuators respond correspondingly to control the locomotion autonomously by regulating the deformation of the LCE-CB bimorphs and thereby the locomotion of the robots. The strategy of innervating soft sensing and actuating electronics with artificial muscles paves the way for the development of smart autonomous soft robots. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  7. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  8. Non preemptive soft real time scheduler: High deadline meeting rate on overload

    Science.gov (United States)

    Khalib, Zahereel Ishwar Abdul; Ahmad, R. Badlishah; El-Shaikh, Mohamed

    2015-05-01

    While preemptive scheduling has gained more attention among researchers, recent work on non-preemptive scheduling has shown promising results for soft real-time job scheduling. In this paper we present a non-preemptive scheduling algorithm for soft real-time applications that produces better performance during overload while maintaining excellent performance during normal load. The approach taken by this algorithm has shown more promising results than other algorithms, including its immediate predecessor. We present the analysis made prior to the inception of the algorithm, as well as simulation results comparing our algorithm, named gutEDF, with EDF and gEDF. We are convinced that grouping jobs using purely dynamic parameters produces better performance.

  9. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    Science.gov (United States)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influence of round-grating dividing error, rolling-wheel eccentricity, and surface shape errors, the paper provides an amendment method based on the rolling wheel to obtain a composite error model that includes all of the above influence factors, and then corrects the non-circular angle measurement error of the rolling wheel. We performed software simulation for verification and carried out experiments; the results indicate that the composite error amendment method can improve the accuracy of rolling-wheel-based diameter measurement. It has wide application prospects for measurement accuracies better than 5 μm/m.

  10. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    Science.gov (United States)

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of

  11. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    Full Text Available Abstract. Background: As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods: This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results: The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion: The low sensitivity of parameter
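    The perturbation experiment described in this record (flipping randomly chosen fields in a "gold standard" dataset and re-estimating summary measures) can be sketched in a few lines of pure Python. The synthetic records, the 2% baseline mortality, and the 5% error rate below are purely illustrative, not Butajira data:

```python
import random

random.seed(42)

# Hypothetical surveillance records: (age_group, died) pairs.
records = [(random.choice(["0-14", "15-49", "50+"]), random.random() < 0.02)
           for _ in range(10_000)]

def mortality_rate(data):
    """Crude death rate: deaths per record."""
    return sum(died for _, died in data) / len(data)

def corrupt(data, error_rate):
    """Randomly flip the death indicator in a fraction of records."""
    out = []
    for group, died in data:
        if random.random() < error_rate:
            died = not died
        out.append((group, died))
    return out

baseline = mortality_rate(records)
noisy = mortality_rate(corrupt(records, error_rate=0.05))
print(f"gold standard: {baseline:.4f}, with 5% random errors: {noisy:.4f}")
```

Comparing the two printed rates (and, in the study itself, full Poisson regression models) shows how much a given random error level distorts the estimates.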

  12. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    Science.gov (United States)

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.
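    The headline figure in this record follows directly from the two observed rates. A quick check (the function name is ours; the paper's exact 64.7% comes from unrounded counts, so the rounded rates reproduce it only approximately):

```python
def relative_reduction(rate_before, rate_after):
    """Percent reduction of an error rate relative to its baseline."""
    return (rate_before - rate_after) / rate_before * 100

# Selection/preparation error rates reported pre and post ADC implementation.
before, after = 1.96, 0.69  # percent
print(f"relative reduction: {relative_reduction(before, after):.1f}%")
```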

  13. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-10-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.
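    The kernel underlying such BPSK BER derivations is the standard AWGN closed form; a minimal sketch (the paper's actual result averages this kernel over the per-hop SNR PDFs, which we do not reproduce here):

```python
import math

def bpsk_ber_awgn(ebno_db):
    """Closed-form BPSK bit-error probability over an AWGN channel:
    Pb = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)  # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebno))

for db in (0, 5, 10):
    print(f"Eb/N0 = {db:2d} dB -> BER = {bpsk_ber_awgn(db):.3e}")
```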

  14. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  15. Effect of antiseptic irrigation on infection rates of traumatic soft tissue wounds: a longitudinal cohort study.

    Science.gov (United States)

    Roth, B; Neuenschwander, R; Brill, F; Wurmitzer, F; Wegner, C; Assadian, O; Kramer, A

    2017-03-02

    Acute traumatic wounds are contaminated with bacteria and therefore an infection risk. Antiseptic wound irrigation before surgical intervention is routinely performed for contaminated wounds. However, a broad variety of different irrigation solutions are in use. The aim of this retrospective, non-randomised, controlled longitudinal cohort study was to assess the preventive effect of four different irrigation solutions before surgical treatment, on wound infection in traumatic soft tissue wounds. Over a period of three decades, the prophylactic application of wound irrigation was studied in patients with contaminated traumatic wounds requiring surgical treatment, with or without primary wound closure. The main outcome measure was development of wound infection. From 1974-1983, either 0.04 % polihexanide (PHMB), 1 % povidone-iodine (PVP-I), 4 % hydrogen peroxide, or undiluted Ringer's solution were concurrently in use. From 1984-1996, only 0.04 % PHMB or 1 % PVP-I were applied. From 1997, 0.04 % PHMB was used until the end of the study period in 2005. The combined rate for superficial and deep wound infection was 1.7 % in the 0.04 % PHMB group (n=3264), 4.8 % in the 1 % PVP-I group (n=2552), 5.9 % in the Ringer's group (n=645), and 11.7 % in the 4 % hydrogen peroxide group (n=643). Compared with all other treatment arms, PHMB showed the highest efficacy in preventing infection in traumatic soft tissue wounds (p<0.001). However, compared with PVP-I, the difference was only significant for superficial infections. The large patient numbers in this study demonstrated a robust superiority of 0.04 % PHMB to prevent infection in traumatic soft tissue wounds. These retrospective results may further provide important information as the basis for power calculations for the urgently needed prospective clinical trials in the evolving field of wound antisepsis.

  16. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    International Nuclear Information System (INIS)

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition

  17. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    Science.gov (United States)

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by the density of connections, the proportion of reciprocal relationships (reciprocity), the number of colleagues to whom each person provided advice (in-degree), and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher density and reciprocity and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81 per admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks. 
Strategies to improve the advice-giving networks between senior
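    The three structural measures used in this record (density, reciprocity, in-degree) are straightforward to compute on a directed edge set. A minimal pure-Python sketch over an invented toy ward network (the staff names and ties are illustrative, not the study's data):

```python
# Directed advice-seeking ties (seeker -> adviser); illustrative only.
edges = {("nurse1", "pharm"), ("doc_jr", "pharm"), ("doc_jr", "doc_sr"),
         ("nurse2", "pharm"), ("pharm", "doc_jr"), ("nurse1", "nurse2")}
nodes = {n for e in edges for n in e}

# Density: observed directed ties / possible ties n*(n-1).
n = len(nodes)
density = len(edges) / (n * (n - 1))

# Reciprocity: fraction of ties whose reverse tie also exists.
reciprocity = sum((b, a) in edges for a, b in edges) / len(edges)

# In-degree: number of colleagues who come to each person for advice.
in_degree = {v: sum(1 for _, b in edges if b == v) for v in nodes}

print(f"density = {density:.2f}, reciprocity = {reciprocity:.2f}")
print("top adviser:", max(in_degree, key=in_degree.get))
```

In this toy network, as in the study, the pharmacist emerges as the main hub of advice provision.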

  18. Two-temperature accretion disks with electron-positron pairs - Effects of Comptonized external soft photons

    Science.gov (United States)

    Kusunose, Masaaki; Takahara, Fumio

    1990-01-01

    The effects of soft photons from external sources on two-temperature accretion disks in electron-positron pair equilibrium are examined by solving the energy-balance equation for a given radial distribution of the input rate of soft photons, taking into account their bremsstrahlung and Comptonization. Critical rate behavior is investigated as a function of the ratio of the energy flux of incident soft photons to the energy-generation rate. As in a previous study, the existence of a critical accretion rate is established.

  19. Soft Robotics Week

    CERN Document Server

    Rossiter, Jonathan; Iida, Fumiya; Cianchetti, Matteo; Margheri, Laura

    2017-01-01

    This book offers a comprehensive, timely snapshot of current research, technologies and applications of soft robotics. The different chapters, written by international experts across multiple fields of soft robotics, cover innovative systems and technologies for soft robot legged locomotion, soft robot manipulation, underwater soft robotics, biomimetic soft robotic platforms, plant-inspired soft robots, flying soft robots, soft robotics in surgery, as well as methods for their modeling and control. Based on the results of the second edition of the Soft Robotics Week, held on April 25 – 30, 2016, in Livorno, Italy, the book reports on the major research lines and novel technologies presented and discussed during the event.

  20. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

    Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis includes a crosstalk calculation of the mutual field variance for different numbers of users. It is shown that the crosstalk noise depends strongly on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results for the power fluctuation at the received channel due to the data modulation of the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  1. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. 
There was some relationship to

  2. From Soft Sculpture to Soft Robotics: Retracing a Physical Aesthetics of Bio-Morphic Softness

    DEFF Research Database (Denmark)

    Jørgensen, Jonas

    2017-01-01

    Soft robotics has in the past decade emerged as a growing subfield of technical robotics research, distinguishable by its bio-inspired design strategies, interest in morphological computation, and interdisciplinary combination of insights from engineering, computer science, biology and material science. Recently, soft robotics technology has also started to make its way into art, design, and architecture. This paper attempts to think an aesthetics of softness and the life-like through an artistic tradition deeply imbricated with an interrogation of softness and its physical substrates, namely the soft sculpture that started proliferating in the late 1960s. Critical descriptions of these works, interestingly, frequently emphasize their similarities with living organisms and bodies as a central tenet of their aesthetics. The paper seeks to articulate aspects of a contiguity between softness...

  3. The soft notion of China's 'soft power'

    OpenAIRE

    Breslin, Shaun

    2011-01-01

    · Although debates over Chinese soft power have increased in recent years, there is no shared definition of what 'soft power' actually means. The definition seems to change depending on what the observer wants to argue.
    · External analyses of soft power often include a focus on economic relations and other material (hard) sources of power and influence.
    · Many Chinese analyses of soft power focus on the promotion of a preferred (positive) understanding of China's inter...

  4. Correct mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme on ping-pong protocol

    OpenAIRE

    Zhang, Zhanjun

    2004-01-01

    Comment: The incorrect mutual information, quantum bit-error rate and secure transmission efficiency in Wojcik's eavesdropping scheme [PRL 90 (03) 157901] on the ping-pong protocol have been pointed out and corrected

  5. Assessing the suitability of soft computing approaches for forest fires prediction

    Directory of Open Access Journals (Sweden)

    Samaher Al_Janabi

    2018-07-01

    Full Text Available Forest fires are one of the main environmental hazards and have many negative effects on different aspects of life. Therefore, early prediction, fast detection and rapid action are the key elements for controlling such phenomena and saving lives. In this work, 517 different entries recorded at different times for Montesinho Natural Park (MNP) in Portugal were used to determine the best predictor of forest fires. Principal component analysis (PCA) was applied to find the critical patterns, and the particle swarm optimization (PSO) technique was used to segment the fire regions (clusters). In the next stage, five soft computing (SC) techniques based on neural networks were used in parallel to identify the technique that would give the most accurate and optimal results in predicting forest fires, namely: cascade correlation network (CCN), multilayer perceptron neural network (MPNN), polynomial neural network (PNN), radial basis function (RBF) and support vector machine (SVM). In the final stage, the predictors and their performance were evaluated based on five quality measures: root mean squared error (RMSE), mean squared error (MSE), relative absolute error (RAE), mean absolute error (MAE) and information gain (IG). The results indicate that the SVM technique was more effective and efficient than the RBF, MPNN, PNN and CCN predictors, providing more precise predictions with smaller estimation error. The obtained results confirm that SVM improves prediction accuracy and is suitable for forest fire prediction compared to the other methods. Keywords: Forest fires, Soft computing, Prediction, Principal component analysis, Particle swarm optimization, Cascade correlation network, Multilayer perceptron neural network, Polynomial neural networks, Radial basis function, Support vector machine
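    The quality measures named in this record are simple to compute by hand; a pure-Python sketch with invented toy values (IG is omitted, since it requires a discretisation choice; nothing below is MNP data):

```python
import math

def error_metrics(actual, predicted):
    """RMSE, MSE, MAE and RAE as in the record's evaluation stage.
    RAE normalises total absolute error by that of a mean-only predictor."""
    n = len(actual)
    mean_a = sum(actual) / n
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    rae = (sum(abs(a - p) for a, p in zip(actual, predicted))
           / sum(abs(a - mean_a) for a in actual))
    return {"RMSE": math.sqrt(mse), "MSE": mse, "MAE": mae, "RAE": rae}

# Illustrative burned-area values, not the Montesinho dataset.
actual = [0.0, 1.2, 4.5, 10.0]
predicted = [0.3, 1.0, 5.0, 8.5]
print(error_metrics(actual, predicted))
```

A predictor with RAE below 1 beats the trivial mean-only baseline, which is why RAE complements the scale-dependent RMSE/MAE.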

  6. SOFT AND SOFTER HANDOVER PERFORMANCE OF CDMA

    Directory of Open Access Journals (Sweden)

    Lina wati

    2010-12-01

    Full Text Available One of the telecommunication providers in Indonesia applies CDMA2000 1x technology. The technology has many advantages, such as the larger channel capacity of a BTS (Base Transceiver Station). On the other hand, the capacity depends on user density. Therefore, to guarantee voice connection when a user or mobile station (MS) is moving from one cell to another, a handover technique is needed. However, the technique can fail for many reasons, so the impact of call attempts on softer and soft handover performance is investigated. The paper examines soft and softer handover performance of CDMA at the BTS, BSC and sector levels in both sub-urban and rural areas in Denpasar, Bali (area code 0361). The analyses included regression and simple linear correlation. The results showed that the number of call attempts was the dominant factor in the failure of the soft and softer handover techniques. Generally, the average success rates of both handover types in both rural and sub-urban areas were about 99%, above the KPI (Key Performance Indicator) reference of 98.50%. However, in the rural area, other factors such as blocked call attempts and erroneously dialled numbers also caused softer handover failures.

  7. Finding the right coverage : The impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates

    NARCIS (Netherlands)

    Fountain, Emily D.; Pauli, Jonathan N.; Reid, Brendan N.; Palsboll, Per J.; Peery, M. Zachariah

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.

  8. Perioperative fractionated high-dose rate brachytherapy for malignant bone and soft tissue tumors

    International Nuclear Information System (INIS)

    Koizumi, Masahiko; Inoue, Takehiro; Yamazaki, Hideya; Teshima, Teruki; Tanaka, Eiichi; Yoshida, Ken; Imai, Atsushi; Shiomi, Hiroya; Kagawa, Kazufumi; Araki, Nobuto; Kuratsu, Shigeyuki; Uchida, Atsumasa; Inoue, Toshihiko

    1999-01-01

    Purpose: To investigate the viability of perioperative fractionated HDR brachytherapy for malignant bone and soft tissue tumors, analyzing the influence of surgical margin. Methods and Materials: From July 1992 through May 1996, 16 lesions of 14 patients with malignant bone and soft tissue tumors (3 liposarcomas, 3 MFHs, 2 malignant schwannomas, 2 chordomas, 1 osteosarcoma, 1 leiomyosarcoma, 1 epithelioid sarcoma, and 1 synovial sarcoma) were treated at the Osaka University Hospital. The patients' ages ranged from 14 to 72 years (median: 39 years). Treatment sites were the pelvis in 6 lesions, the upper limbs in 5, the neck in 4, and a lower limb in 1. The resection margins were classified as intracapsular in 5 lesions, marginal in 5, and wide in 6. Postoperative fractionated HDR brachytherapy was started on the 4th-13th day after surgery (median: 6th day). The total dose was 40-50 Gy/7-10 fr/4-7 days (bid) at 5 or 10 mm from the source. Follow-up periods were between 19 and 46 months (median: 30 months). Results: Local control rates were 75% at 1 year and 48% at 2 years, and ultimate local control was achieved in 8 (50%) of 16 lesions. Of the 8 uncontrolled lesions, 5 (63%) had intracapsular (macroscopically positive) resection margins, and all 8 controlled lesions (100%) had marginal (microscopically positive) or wide (negative) margins. Of the total, 3 patients died of both tumor and metastasis, 3 of metastasis alone, 1 of tumor alone, and 7 showed no evidence of disease. Peripheral nerve palsy was seen in one case after this procedure, but no infection or delayed wound healing caused by tubing or irradiation has occurred. Conclusion: Perioperative fractionated HDR brachytherapy is safe, well tolerated, and applicable to marginal or wide surgical margin cases

  9. Large poroelastic deformation of a soft material

    Science.gov (United States)

    MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.

    2014-11-01

    Flow through a porous material will drive mechanical deformation when the fluid pressure becomes comparable to the stiffness of the solid skeleton. This has applications ranging from hydraulic fracture for recovery of shale gas, where fluid is injected at high pressure, to the mechanics of biological cells and tissues, where the solid skeleton is very soft. The traditional linear theory of poroelasticity captures this fluid-solid coupling by combining Darcy's law with linear elasticity. However, linear elasticity is only volume-conservative to first order in the strain, which can become problematic when damage, plasticity, or extreme softness lead to large deformations. Here, we compare the predictions of linear poroelasticity with those of a large-deformation framework in the context of two model problems. We show that errors in volume conservation are compounded and amplified by coupling with the fluid flow, and can become important even when the deformation is small. We also illustrate these results with a laboratory experiment.
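    The linear theory referred to above couples Darcy flow to linear elasticity. In standard Biot notation (a sketch of the textbook governing equations, not the authors' specific large-deformation formulation), the coupled system reads:

```latex
% Effective stress with Lam\'e parameters \mu, \lambda,
% Biot coefficient \alpha and pore pressure p:
\sigma_{ij} = 2\mu\,\epsilon_{ij} + \lambda\,\epsilon_{kk}\,\delta_{ij} - \alpha\, p\,\delta_{ij}
% Darcy's law for the fluid flux q (permeability k, fluid viscosity \mu_f):
q_i = -\frac{k}{\mu_f}\,\frac{\partial p}{\partial x_i}
% Fluid mass conservation with Biot modulus M:
\frac{\partial}{\partial t}\!\left(\alpha\,\epsilon_{kk} + \frac{p}{M}\right) + \frac{\partial q_i}{\partial x_i} = 0
```

The volume-conservation issue discussed in the abstract enters through $\epsilon_{kk}$, which approximates the true volume change only to first order in the strain.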

  10. A fast novel soft-start circuit for peak current-mode DC-DC buck converters

    International Nuclear Information System (INIS)

    Li Jie; Yang Miao; Sun Weifeng; Lu Xiaoxia; Xu Shen; Lu Shengli

    2013-01-01

    A fully integrated soft-start circuit for DC-DC buck converters is presented. The proposed high-speed soft-start circuit consists of two sections: an overshoot suppression circuit and an inrush current suppression circuit. The overshoot suppression circuit controls the input of the error amplifier so that the output voltage limit increases in steps, without using an external capacitor. A variable clock signal is adopted in the inrush current suppression circuit to increase the duty cycle of the system and suppress the inrush current. The DC-DC converter with the proposed soft-start circuit has been fabricated in a standard 0.13 μm CMOS process. Experimental results show that the proposed high-speed soft-start circuit achieves a start-up time of less than 50 μs. The inductor current and the output voltage increase smoothly over the whole load range. (semiconductor integrated circuits)

  11. Soft ideal topological space and mixed fuzzy soft ideal topological space

    Directory of Open Access Journals (Sweden)

    Manash Borah

    2019-01-01

    In this paper we introduce fuzzy soft ideal and mixed fuzzy soft ideal topological spaces and some properties of these spaces. Also we introduce the fuzzy soft $I$-open set, fuzzy soft $\alpha$-$I$-open set, fuzzy soft pre-$I$-open set, fuzzy soft semi-$I$-open set and fuzzy soft $\beta$-$I$-open set and discuss some of their properties.

  12. Preoperative Radiotherapy and Wide Resection for Soft Tissue Sarcomas: Achieving a Low Rate of Major Wound Complications with the Use of Flaps. Results of a Single Surgical Team.

    Science.gov (United States)

    Chan, Lester Wai Mon; Imanishi, Jungo; Grinsell, Damien Glen; Choong, Peter

    2017-01-01

    Surgery in combination with radiotherapy (RT) has become the standard of care for most soft tissue sarcomas. The choice between pre- and postoperative RT is controversial. Preoperative RT is associated with a 32-35% rate of major wound complications (MWC) and a 16-25% rate of reoperation. The role of vascularized soft tissue "flaps" in reducing complications is unclear. We report the outcomes of patients treated with preoperative RT, resection, and flap reconstruction. 122 treatment episodes involving 117 patients were retrospectively reviewed. All patients were treated with 50.4 Gy of external beam radiation. Surgery was performed at 4-8 weeks after completion of RT by the same combination of orthopedic oncology and plastic reconstructive surgeon. Defects were reconstructed with 64 free and 59 pedicled/local flaps. 30 (25%) patients experienced a MWC and 17 (14%) required further surgery. 20% of complications were exclusively related to the donor site. There was complete or partial loss of three flaps. There was no difference in the rate of MWC or reoperation for complications with respect to age, sex, tumor site, previous unplanned excision, tumor grade, depth, and type of flap. Tumor size ≥8 cm was associated with a higher rate of reoperation (11/44 vs 6/78; P = 0.008), but the difference in the rate of MWC was not significant (16/44 vs 14/78; P = 0.066). The use of soft tissue flaps is associated with a low rate of MWC and reoperation. Our results suggest that a high rate of flap usage may be required to observe a reduction in complication rates.

  13. Preoperative Radiotherapy and Wide Resection for Soft Tissue Sarcomas: Achieving a Low Rate of Major Wound Complications with the Use of Flaps. Results of a Single Surgical Team

    Directory of Open Access Journals (Sweden)

    Lester Wai Mon Chan

    2018-01-01

    Background: Surgery in combination with radiotherapy (RT) has become the standard of care for most soft tissue sarcomas. The choice between pre- and postoperative RT is controversial. Preoperative RT is associated with a 32–35% rate of major wound complications (MWC) and a 16–25% rate of reoperation. The role of vascularized soft tissue “flaps” in reducing complications is unclear. We report the outcomes of patients treated with preoperative RT, resection, and flap reconstruction. Patients and methods: 122 treatment episodes involving 117 patients were retrospectively reviewed. All patients were treated with 50.4 Gy of external beam radiation. Surgery was performed at 4–8 weeks after completion of RT by the same combination of orthopedic oncology and plastic reconstructive surgeon. Defects were reconstructed with 64 free and 59 pedicled/local flaps. Results: 30 (25%) patients experienced a MWC and 17 (14%) required further surgery. 20% of complications were exclusively related to the donor site. There was complete or partial loss of three flaps. There was no difference in the rate of MWC or reoperation for complications with respect to age, sex, tumor site, previous unplanned excision, tumor grade, depth, and type of flap. Tumor size ≥8 cm was associated with a higher rate of reoperation (11/44 vs 6/78; P = 0.008), but the difference in the rate of MWC was not significant (16/44 vs 14/78; P = 0.066). Conclusion: The use of soft tissue flaps is associated with a low rate of MWC and reoperation. Our results suggest that a high rate of flap usage may be required to observe a reduction in complication rates.

  14. SoftAR: visually manipulating haptic softness perception in spatial augmented reality.

    Science.gov (United States)

    Punpongsanon, Parinya; Iwai, Daisuke; Sato, Kosuke

    2015-11-01

    We present SoftAR, a novel spatial augmented reality (AR) technique based on a pseudo-haptics mechanism that visually manipulates the sense of softness perceived by a user pushing a soft physical object. Considering the limitations of projection-based approaches that change only the surface appearance of a physical object, we propose two projection visual effects, i.e., surface deformation effect (SDE) and body appearance effect (BAE), on the basis of the observations of humans pushing physical objects. The SDE visualizes a two-dimensional deformation of the object surface with a controlled softness parameter, and BAE changes the color of the pushing hand. Through psychophysical experiments, we confirm that the SDE can manipulate softness perception such that the participant perceives significantly greater softness than the actual softness. Furthermore, fBAE, in which BAE is applied only for the finger area, significantly enhances manipulation of the perception of softness. We create a computational model that estimates perceived softness when SDE+fBAE is applied. We construct a prototype SoftAR system in which two application frameworks are implemented. The softness adjustment allows a user to adjust the softness parameter of a physical object, and the softness transfer allows the user to replace the softness with that of another object.

  15. Influence of Cooling on the Glycolysis Rate and Development of PSE (Pale, Soft, Exudative) Meat

    Directory of Open Access Journals (Sweden)

    Mayka Reghiany Pedrão

    2015-04-01

    The aim of this work was to evaluate the rate of pH fall in chicken breast meat under commercial refrigeration processing conditions and the development of PSE (pale, soft, exudative) meat. Broiler breast samples from the Cobb breed, both genders, at 47 days of age (n = 100) were taken from refrigerated carcasses (RS) immersed in water and ice in a tank chilled at 0°C (±2). pH and temperature (T) values were recorded at several periods throughout refrigeration in comparison to samples left at room T as control (CS). The ultimate pH (pHu) value of 5.86 for RS carcasses was only reached at 11°C after 8.35 h post mortem (PM) while, for CS samples, the pHu value was 5.94 at 22°C after 4.08 h PM. Thus, under commercial refrigeration conditions, the glycolysis rate was retarded by over 4.0 h PM and the breast meat color was affected. At 24.02 h PM, PSE meat incidence was 30%, while for CS, meat remained dark and PSE meat was not detected. Results show that retardation of the glycolysis rate and PSE meat development was promoted by the refrigeration treatment when compared with samples stored at processing room temperature.

  16. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  17. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    The scheme is presented for calculation of errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown which describe absolute errors of growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimation of the obtained results has been done. The purposefulness of joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
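
As a concrete illustration of the growth characteristics named above, here is a minimal sketch that evaluates GR and RGR analytically from a fitted logistic curve. The parameter values are hypothetical and are not taken from the cited study:

```python
import math

def logistic(t, A, b, k):
    """Logistic growth curve W(t) = A / (1 + b*exp(-k*t))."""
    return A / (1.0 + b * math.exp(-k * t))

def growth_rate(t, A, b, k):
    """GR = dW/dt = k*W*(1 - W/A) for the logistic function."""
    W = logistic(t, A, b, k)
    return k * W * (1.0 - W / A)

def relative_growth_rate(t, A, b, k):
    """RGR = (1/W)*dW/dt = k*(1 - W/A)."""
    W = logistic(t, A, b, k)
    return k * (1.0 - W / A)

# Hypothetical parameters: asymptote A = 100 g, b = 50, k = 0.2 per day.
W = logistic(20.0, 100.0, 50.0, 0.2)
gr = growth_rate(20.0, 100.0, 50.0, 0.2)
rgr = relative_growth_rate(20.0, 100.0, 50.0, 0.2)
```

Absolute errors of these characteristics would then follow by propagating the parameter errors through the same closed-form derivatives.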

  18. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
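
The probability of undetectable error referred to above follows directly from the code's weight distribution. A minimal sketch, using the small (7,4) Hamming code as a stand-in, since the weight distributions of the shortened codes adopted in IEEE 802.3 are not reproduced here:

```python
def p_undetected(weights, n, p):
    """Probability of undetected error for a linear block code used purely
    for error detection on a binary symmetric channel with bit-error rate p:
        P_ud(p) = sum_{i>=1} A_i * p^i * (1-p)^(n-i)
    where A_i is the number of codewords of Hamming weight i (an undetected
    error occurs exactly when the error pattern is a nonzero codeword)."""
    return sum(a * p**i * (1.0 - p)**(n - i) for i, a in weights.items() if i > 0)

# Weight distribution of the (7,4) Hamming code: A_0=1, A_3=7, A_4=7, A_7=1.
hamming74 = {0: 1, 3: 7, 4: 7, 7: 1}
pud = p_undetected(hamming74, 7, 0.01)
```

At p = 1/2 the expression reduces to (2^k - 1)/2^n, the familiar worst-case bound for any (n, k) code.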

  19. Exploring Predictability of Instructor Ratings Using a Quantitative Tool for Evaluating Soft Skills among MBA Students

    Science.gov (United States)

    Brill, Robert T.; Gilfoil, David M.; Doll, Kristen

    2014-01-01

    Academic researchers have often touted the growing importance of "soft skills" for modern day business leaders, especially leadership and communication skills. Despite this growing interest and attention, relatively little work has been done to develop and validate tools to assess soft skills. Forty graduate students from nine MBA…

  20. Assessment of residual error for online cone-beam CT-guided treatment of prostate cancer patients

    International Nuclear Information System (INIS)

    Letourneau, Daniel; Martinez, Alvaro A.; Lockman, David; Yan Di; Vargas, Carlos; Ivaldi, Giovanni; Wong, John

    2005-01-01

    Purpose: Kilovoltage cone-beam CT (CBCT) implemented on board a medical accelerator is available for image-guidance applications in our clinic. The objective of this work was to assess the magnitude and stability of the residual setup error associated with CBCT online-guided prostate cancer patient setup. Residual error pertains to the uncertainty in image registration, the limited mechanical accuracy, and the intrafraction motion during imaging and treatment. Methods and Materials: The residual error for CBCT online-guided correction was first determined in a phantom study. After online correction, the phantom residual error was determined by comparing megavoltage portal images acquired every 90 deg. to the corresponding digitally reconstructed radiographs. In the clinical study, 8 prostate cancer patients were implanted with three radiopaque markers made of high-winding coils. After positioning the patient using the skin marks, a CBCT scan was acquired and the setup error determined by fusing the coils on the CBCT and planning CT scans. The patient setup was then corrected by moving the couch accordingly. A second CBCT scan was acquired immediately after the correction to evaluate the residual target setup error. Intrafraction motion was evaluated by tracking the coils and the bony landmarks on kilovoltage radiographs acquired every 30 s between the two CBCT scans. Corrections based on soft-tissue registration were evaluated offline by aligning the prostate contours defined on both planning CT and CBCT images. Results: For ideal rigid phantoms, CBCT image-guided treatment can usually achieve setup accuracy of 1 mm or better. For the patients, after CBCT correction, the target setup error was reduced in almost all cases and was generally within ±1.5 mm. The image guidance process took 23-35 min, dictated by the computer speed and network configuration. The contribution of the intrafraction motion to the residual setup error was small, with a standard deviation of

  1. Bit-error-rate performance analysis of self-heterodyne detected radio-over-fiber links using phase and intensity modulation

    DEFF Research Database (Denmark)

    Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso

    2010-01-01

    We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature biased intensity modulation (IM), in terms of bit-error-rate (BER) and optical signal-to-noise-ratio (OSNR). In both links, self

  2. Soft Neutrosophic Bi-LA-semigroup and Soft Neutrosophic N-LA-semigroup

    Directory of Open Access Journals (Sweden)

    Mumtaz Ali

    2014-09-01

    Soft set theory is a general mathematical tool for dealing with uncertain, fuzzy, not clearly defined objects. In this paper we introduce the soft neutrosophic bi-LA-semigroup, soft neutrosophic sub bi-LA-semigroup, and soft neutrosophic N-LA-semigroup with a discussion of some of their characteristics. We also introduce a new type of soft neutrosophic bi-LA-semigroup, the so-called soft strong neutrosophic bi-LA-semigroup, which is of pure neutrosophic character. This is also extended to the soft strong neutrosophic N-LA-semigroup. We also give some properties of this newly born soft structure related to the strong part of neutrosophic theory.

  3. In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function

    Science.gov (United States)

    Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir

    2018-03-01

    We present results of in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model of 1'.2 half-power diameter is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.

  4. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  5. Bilateral, symmetrical soft tissue calcifications in the face

    International Nuclear Information System (INIS)

    Vazquez, Josue; Rosenthal, Daniel I.

    2010-01-01

    A 50-year-old woman with jaw pain and a history of bisphosphonate use was shown on radiography to have ill-defined soft tissue calcifications overlying the maxilla, mandible, and zygomatic bones bilaterally. The bones were normal. CT revealed similar findings. Although a broad imaging differential diagnosis was initially considered, further questioning of the patient revealed a history of facial injections with a calcium hydroxylapatite product for cosmetic purposes. The appearance of this increasingly popular treatment should be recognized to avoid errors in interpretation. (orig.)

  6. Bilateral, symmetrical soft tissue calcifications in the face

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, Josue; Rosenthal, Daniel I. [Massachusetts General Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2010-04-15

    A 50-year-old woman with jaw pain and a history of bisphosphonate use was shown on radiography to have ill-defined soft tissue calcifications overlying the maxilla, mandible, and zygomatic bones bilaterally. The bones were normal. CT revealed similar findings. Although a broad imaging differential diagnosis was initially considered, further questioning of the patient revealed a history of facial injections with a calcium hydroxylapatite product for cosmetic purposes. The appearance of this increasingly popular treatment should be recognized to avoid errors in interpretation. (orig.)

  7. A fast iterative soft-thresholding algorithm for few-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng; Mou, Xuanqin; Zhang, Yanbo [Jiaotong Univ., Xi' an (China). Inst. of Image Processing and Pattern Recognition

    2011-07-01

    Iterative soft-thresholding algorithms with total variation regularization can produce high-quality reconstructions from few views and even in the presence of noise. However, these algorithms are known to converge quite slowly, with a theoretically proven global convergence rate of O(1/k), where k is the iteration number. In this paper, we present a fast iterative soft-thresholding algorithm for few-view fan beam CT reconstruction with a global convergence rate of O(1/k{sup 2}), which is significantly faster than the iterative soft-thresholding algorithm. Simulation results demonstrate the superior performance of the proposed algorithm in terms of convergence speed and reconstruction quality. (orig.)
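
The speed-up from O(1/k) to O(1/k{sup 2}) comes from adding a Nesterov-style momentum step to plain iterative soft-thresholding (this is the FISTA scheme of Beck and Teboulle). A toy dense-matrix sketch with a plain l1 penalty, for illustration only; the paper's few-view CT variant uses total-variation regularization and fan-beam projection operators instead:

```python
import math

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1, applied elementwise."""
    return [math.copysign(max(abs(x) - tau, 0.0), x) for x in v]

def fista(A, b, lam, n_iter=100):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (dense toy version)."""
    m, n = len(A), len(A[0])
    # Estimate the Lipschitz constant L = ||A^T A||_2 by power iteration.
    v, L = [1.0] * n, 1.0
    for _ in range(50):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
        L = math.sqrt(sum(x * x for x in w))
        v = [x / L for x in w]
    x, y, t = [0.0] * n, [0.0] * n, 1.0
    for _ in range(n_iter):
        # Gradient of the smooth term 0.5*||Ay - b||^2 at the extrapolated point y.
        Ay = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * (Ay[i] - b[i]) for i in range(m)) for j in range(n)]
        x_new = soft_threshold([y[j] - g[j] / L for j in range(n)], lam / L)
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
        y = [x_new[j] + ((t - 1.0) / t_new) * (x_new[j] - x[j]) for j in range(n)]
        x, t = x_new, t_new
    return x

# With A = I, the minimizer is simply soft_threshold(b, lam).
x = fista([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], lam=1.0)
```

Dropping the momentum step (setting y = x_new every iteration) recovers plain ISTA with its slower O(1/k) rate.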

  8. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important then litigating the person. To err is and will remain human and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  9. A model for the anisotropic response of fibrous soft tissues using six discrete fibre bundles

    KAUST Repository

    Flynn, Cormac

    2011-06-30

    The development of constitutive models of fibrous soft-tissues is a challenging problem. Many consider the tissue to be a collection of fibres with a continuous distribution function representing their orientations. A discrete fibre model is presented consisting of six weighted fibre-bundles. Each bundle is oriented such that it passes through opposing vertices of a regular icosahedron. A novel aspect is the use of simple analytical distribution functions to simulate undulated collagen fibres. This approach yields closed-form analytical expressions for the strain energy of the collagen fibre-bundle that avoids the sometimes costly numerical integration of some statistical distribution functions. The elastin fibres are characterized by a modified neo-Hookean type strain energy function which does not allow for fibre compression. The model accurately simulates biaxial stretching of rabbit-skin (error-of-fit 8.7), uniaxial stretching of pig-skin (error-of-fit 7.6), equibiaxial loading of aortic valve cusp (error-of-fit 0.8), and simple shear of rat septal myocardium (error-of-fit 8.9). It compares favourably with previous soft-tissue models and alternative methods of representing undulated collagen fibres. Predicted collagen fibre stiffnesses range from 8.0 MPa to 930 MPa. Elastin fibre stiffnesses range from 2.0 kPa to 154.4 kPa. © 2011 John Wiley & Sons, Ltd.
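
The six fibre-bundle orientations described above can be generated directly from the icosahedron's vertex coordinates. A sketch (one unit vector per antipodal vertex pair; this construction is standard geometry, not code from the cited paper):

```python
import math

PHI = (1.0 + math.sqrt(5.0)) / 2.0  # golden ratio

def icosahedron_axes():
    """Unit vectors along the six axes joining opposite vertices of a
    regular icosahedron (one representative per antipodal pair).
    The 12 vertices are the cyclic permutations of (0, +/-1, +/-PHI)."""
    raw = [(0.0, 1.0, PHI), (0.0, 1.0, -PHI),
           (1.0, PHI, 0.0), (1.0, -PHI, 0.0),
           (PHI, 0.0, 1.0), (-PHI, 0.0, 1.0)]
    norm = math.sqrt(1.0 + PHI * PHI)
    return [(x / norm, y / norm, z / norm) for (x, y, z) in raw]

axes = icosahedron_axes()
```

Any two of these axes meet at the same angle, arccos(1/√5) ≈ 63.4°, which is what makes the six-bundle arrangement a nearly uniform sampling of orientation space.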

  10. A model for the anisotropic response of fibrous soft tissues using six discrete fibre bundles

    KAUST Repository

    Flynn, Cormac; Rubin, M. B.; Nielsen, Poul

    2011-01-01

    The development of constitutive models of fibrous soft-tissues is a challenging problem. Many consider the tissue to be a collection of fibres with a continuous distribution function representing their orientations. A discrete fibre model is presented consisting of six weighted fibre-bundles. Each bundle is oriented such that it passes through opposing vertices of a regular icosahedron. A novel aspect is the use of simple analytical distribution functions to simulate undulated collagen fibres. This approach yields closed-form analytical expressions for the strain energy of the collagen fibre-bundle that avoids the sometimes costly numerical integration of some statistical distribution functions. The elastin fibres are characterized by a modified neo-Hookean type strain energy function which does not allow for fibre compression. The model accurately simulates biaxial stretching of rabbit-skin (error-of-fit 8.7), uniaxial stretching of pig-skin (error-of-fit 7.6), equibiaxial loading of aortic valve cusp (error-of-fit 0.8), and simple shear of rat septal myocardium (error-of-fit 8.9). It compares favourably with previous soft-tissue models and alternative methods of representing undulated collagen fibres. Predicted collagen fibre stiffnesses range from 8.0 MPa to 930 MPa. Elastin fibre stiffnesses range from 2.0 kPa to 154.4 kPa. © 2011 John Wiley & Sons, Ltd.

  11. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  12. The probability and the management of human error

    International Nuclear Information System (INIS)

    Dufey, R.B.; Saull, J.W.

    2004-01-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident cause. The consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with and derived from the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best estimate values and relationships can be derived for both the human error rate and for the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε), and stochastic occurrences with a finite minimum rate: λ = 5×10⁻⁵ + ((1/ε) − 5×10⁻⁵) exp(−3ε). The future failure rate is entirely determined by the experience: thus the past defines the future.
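
A small sketch evaluating the equation as quoted in the abstract; it shows the error rate falling from the inexperience-dominated regime (roughly 1/ε) toward the finite minimum of 5×10⁻⁵ as experience accumulates:

```python
import math

def human_error_rate(eps):
    """Learning-curve form quoted in the abstract:
        lambda(eps) = 5e-5 + ((1/eps) - 5e-5) * exp(-3*eps)
    where eps > 0 is the accumulated experience and 5e-5 is the
    minimum attainable (asymptotic) error rate."""
    lam_min = 5e-5
    return lam_min + ((1.0 / eps) - lam_min) * math.exp(-3.0 * eps)

# The rate decreases monotonically with experience toward the floor.
early, mid, late = (human_error_rate(e) for e in (0.1, 1.0, 10.0))
```

The units and normalization of ε are those of the authors' experience measure; the sketch only illustrates the functional shape.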

  13. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
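
The GARCH(1,1) conditional variance used above as a volatility proxy follows a simple one-step recursion. A minimal sketch with hypothetical parameter values (the study estimates its parameters from Turkish export data, which are not reproduced here):

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model:
        sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1]
    initialized at the unconditional variance omega / (1 - alpha - beta),
    which requires alpha + beta < 1 (covariance stationarity)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Hypothetical parameters: omega = 0.1, alpha = 0.1, beta = 0.8.
path = garch11_variance([3.0, 0.0, 0.0, 0.0], 0.1, 0.1, 0.8)
```

A large squared return (the shock of 3.0 above) raises the conditional variance, which then decays geometrically at rate alpha + beta back toward the unconditional level.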

  14. Least reliable bits coding (LRBC) for high data rate satellite communications

    Science.gov (United States)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.

  15. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends sequentially multiple data frames to a destination. IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput under error-free conditions becomes. However, large data frame could reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and error rate, the appropriate ranges of the burst transmission parameters could be narrowed down, and the necessary buffer size for storing transmit data or received data temporarily could be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and error rate. The calculated throughput values agree well with the measured ones for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters outside the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
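
The dependence of effective throughput on the burst transmission parameters and the error rate can be illustrated with a deliberately simplified model. This is an assumption-laden sketch, not the paper's estimation algorithm: each frame is assumed to survive independently with probability (1 − BER)^frame_bits, and a burst of frames shares a single ACK-exchange overhead:

```python
def effective_throughput(payload_bits, frame_bits, n_burst, ber,
                         rate_bps, overhead_s):
    """Simplified effective-throughput estimate for a Burst ACK scheme.
    payload_bits : useful bits per frame body
    frame_bits   : total transmitted bits per frame (body + headers)
    n_burst      : number of frames sent back-to-back per ACK exchange
    ber          : channel bit-error rate (independent bit errors assumed)
    rate_bps     : physical-layer transmission rate in bits per second
    overhead_s   : fixed per-burst overhead (ACK exchange, IFS), in seconds
    """
    p_frame_ok = (1.0 - ber) ** frame_bits       # frame survival probability
    airtime = n_burst * frame_bits / rate_bps + overhead_s
    return n_burst * payload_bits * p_frame_ok / airtime
```

The model reproduces the qualitative trade-off stated in the abstract: at low BER, larger frames and longer bursts amortize the ACK overhead and raise throughput, while at high BER the (1 − BER)^frame_bits factor punishes large frames, so the optimum burst parameters shrink as SNR degrades.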

  16. Soft, embodied, situated & connected: enriching interactions with soft wearables

    NARCIS (Netherlands)

    Tomico Plasencia, O.; Wilde, D.

    2016-01-01

    Soft wearables include clothing and textile-based accessories that incorporate smart textiles and soft electronic interfaces to enable responsive and interactive experiences. When designed well, soft wearables leverage the cultural, sociological and material qualities of textiles, fashion and dress;

  17. Soft electronics for soft robotics

    Science.gov (United States)

    Kramer, Rebecca K.

    2015-05-01

    As advanced as modern machines are, the building blocks have changed little since the industrial revolution, leading to rigid, bulky, and complex devices. Future machines will include electromechanical systems that are soft and elastically deformable, lending them to applications such as soft robotics, wearable/implantable devices, sensory skins, and energy storage and transport systems. One key step toward the realization of soft systems is the development of stretchable electronics that remain functional even when subject to high strains. Liquid-metal traces embedded in elastic polymers present a unique opportunity to retain the function of rigid metal conductors while leveraging the deformable properties of liquid-elastomer composites. However, in order to achieve the potential benefits of liquid metal, scalable processing and manufacturing methods must be identified.

  18. Dynamics and Rheology of Soft Colloidal Glasses

    KAUST Repository

    Wen, Yu Ho

    2015-01-20

    © 2015 American Chemical Society. The linear viscoelastic (LVE) spectrum of a soft colloidal glass is accessed with the aid of a time-concentration superposition (TCS) principle, which unveils the glassy particle dynamics from in-cage rattling motion to out-of-cage relaxations over a broad frequency range 10⁻¹³ rad/s < ω < 10¹ rad/s. Progressive dilution of a suspension of hairy nanoparticles leading to increased intercenter distances is demonstrated to enable continuous mapping of the structural relaxation for colloidal glasses. In contrast to existing empirical approaches proposed to extend the rheological map of soft glassy materials, i.e., time-strain superposition (TSS) and strain-rate frequency superposition (SRFS), TCS yields an LVE master curve that satisfies the Kramers-Kronig relations which interrelate the dynamic moduli for materials at equilibrium. The soft glassy rheology (SGR) model and literature data further support the general validity of the TCS concept for soft glassy materials.

  19. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, with the potential of reducing the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and the coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.

  20. Necrotizing Soft Tissue Infection

    Directory of Open Access Journals (Sweden)

    Sahil Aggarwal, BS

    2018-04-01

    History of present illness: A 71-year-old woman with a history of metastatic ovarian cancer presented with a sudden-onset, rapidly progressing painful rash in the genital region and lower abdominal wall. She was febrile to 103°F, heart rate was 114 beats per minute, and respiratory rate was 24 per minute. Her exam was notable for a toxic-appearing female with extensive areas of erythema, tenderness, and induration of her lower abdomen, intertriginous areas, and perineum, with intermittent segments of crepitus without hemorrhagic bullae or skin breakdown. Significant findings: Computed tomography (CT) of the abdomen and pelvis with intravenous (IV) contrast revealed inflammatory changes, including gas and fluid collections within the ventral abdominal wall extending to the vulva, consistent with a necrotizing soft tissue infection. Discussion: Necrotizing fasciitis is a serious infection of the skin and soft tissues that requires an early diagnosis to reduce morbidity and mortality. Classified into several subtypes based on the type of microbial infection, necrotizing fasciitis can rapidly progress to septic shock or death if left untreated.1 Diagnosing necrotizing fasciitis requires a high index of suspicion based on patient risk factors, presentation, and exam findings.
Definitive treatment involves prompt surgical exploration and debridement coupled with IV antibiotics.2,3 Clinical characteristics such as swelling, disproportionate pain, erythema, crepitus, and necrotic tissue should guide further diagnostic tests.4 Unfortunately, lab values such as white blood cell count and lactate, as well as imaging studies, have high sensitivity but low specificity, making the diagnosis of necrotizing fasciitis still largely a clinical one.4,5 CT is a reliable method to exclude the diagnosis of necrotizing soft tissue infections (sensitivity of 100%), but is only moderately reliable in correctly identifying such infections (specificity of 81%).5 Given the emergent

  1. Minimizing the symbol-error-rate for amplify-and-forward relaying systems using evolutionary algorithms

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-02-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; designing an SER detector for cooperative communications therefore becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum, so particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, the SER performance of the proposed detectors is less than 2 dB away from the ML detector. Significant improvement in SER performance is also observed in comparison with the MMSE detector. The computational complexity of the proposed detector is much less than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with the number of relays.
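    A minimal particle swarm optimization sketch in the spirit of this approach, minimizing the multimodal Rastrigin test function as a stand-in for the nonlinear SER surface (the paper's actual detector objective is not reproduced here; all hyperparameters are generic textbook defaults):

```python
import math
import random

# Minimal PSO sketch: a swarm of particles tracks personal and global best
# positions while velocities blend inertia, cognitive, and social terms.

def pso(objective, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def rastrigin(x):  # many local minima; global minimum 0 at the origin
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

best, val = pso(rastrigin, dim=2)
assert val < 5.0  # a good run lands near the global minimum
```

    The multimodality of Rastrigin mirrors why gradient-free evolutionary search is attractive for an SER surface with multiple minima.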

  2. Surface Slope Metrology on Deformable Soft X-ray Mirrors

    International Nuclear Information System (INIS)

    Yuan, Sheng; Yashchuk, Valeriy V.; Goldberg, Kenneth A.; Celestre, Rich; Church, Matthew; McKinney, Wayne R.; Morrison, Greg; Warwick, Tony

    2010-01-01

    We report on the current state of surface slope metrology on deformable mirrors for soft x-rays at the Advanced Light Source (ALS). While we are developing techniques for in situ at-wavelength tuning, we are refining methods of ex situ visible-light optical metrology to achieve sub-100-nrad accuracy. This paper reports on laboratory studies, measurements and tuning of a deformable test-KB mirror prior to its use. The test mirror was bent to a much different optical configuration than its original design, achieving a 0.38 micro-radian residual slope error. Modeling shows that in some cases, by including the image conjugate distance as an additional free parameter in the alignment, along with the two force couples, fourth-order tangential shape errors (the so-called bird shape) can be reduced or eliminated.

  4. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of total accidents attributed to human error ranged widely, from 20∼85%, and was 35% on average. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown, and the rate of occurrence of human error is 0∼0.5 cases/reactor-year, which did not vary much. Therefore, the proportion of the total attributable to human error tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of the research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)

  5. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
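    The kind of error monitor analyzed here can be simulated as a leaky bucket in the style of SS7's signal-unit error-rate monitor (SUERM); the threshold and leak interval below follow commonly quoted Q.703 defaults (T = 64, D = 256), and the traffic is synthetic:

```python
import random

# Illustrative leaky-bucket signal-unit error-rate monitor: the counter rises
# by one per errored signal unit and leaks by one after every `leak_interval`
# received units; the link is taken out of service once it reaches `threshold`.
# T = 64 and D = 256 follow commonly quoted Q.703 defaults; traffic is synthetic.

def suerm_fails(error_flags, threshold=64, leak_interval=256):
    counter = 0
    for seen, errored in enumerate(error_flags, start=1):
        if errored:
            counter += 1
            if counter >= threshold:
                return True          # link would be taken out of service
        if seen % leak_interval == 0:
            counter = max(0, counter - 1)
    return False

rng = random.Random(1)
low = [rng.random() < 1e-4 for _ in range(500_000)]   # tolerable error rate
high = [rng.random() < 1e-2 for _ in range(500_000)]  # excessive error rate
assert not suerm_fails(low) and suerm_fails(high)
```

    Sweeping the error probability in such a simulation exposes the intermediate range the paper identifies, where the monitor oscillates between keeping the link in service and repeatedly triggering changeovers.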

  6. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    Directory of Open Access Journals (Sweden)

    Li Zexian

    2004-01-01

    Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral whose integrand is composed of tabulated functions that can easily be computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
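    The single finite-range integral form mentioned above can be illustrated with the standard MGF-based expression for coherent BPSK with MRC over independent Nakagami-m branches, a textbook special case rather than this paper's exact MC-CDMA result:

```python
import math

# Average BER of coherent BPSK with L-branch MRC over independent (not
# necessarily identical) Nakagami-m channels, via the standard MGF form:
# (1/pi) * integral over [0, pi/2] of the product of per-branch MGFs
# evaluated at -1/sin^2(theta).

def _simpson(f, a, b, n=400):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def ber_bpsk_mrc_nakagami(branch_snrs_db, m=2.0):
    gammas = [10 ** (g / 10.0) for g in branch_snrs_db]
    def integrand(theta):
        s2 = math.sin(theta) ** 2
        prod = 1.0
        for g in gammas:  # per-branch Nakagami-m MGF factor
            prod *= (s2 / (s2 + g / m)) ** m
        return prod
    return _simpson(integrand, 0.0, math.pi / 2) / math.pi

# Adding a diversity branch lowers the average BER at the same branch SNR.
assert 0 < ber_bpsk_mrc_nakagami([10, 10]) < ber_bpsk_mrc_nakagami([10]) < 0.5
```

    As in the paper, the whole fading average collapses into one finite-range integral whose integrand involves only elementary functions.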

  7. Soft leptogenesis

    International Nuclear Information System (INIS)

    D'Ambrosio, Giancarlo; Giudice, Gian F.; Raidal, Martti

    2003-01-01

    We study 'soft leptogenesis', a new mechanism of leptogenesis which does not require flavour mixing among the right-handed neutrinos. Supersymmetry soft-breaking terms give a small mass splitting between the CP-even and CP-odd right-handed sneutrino states of a single generation and provide a CP-violating phase sufficient to generate a lepton asymmetry. The mechanism is successful if the lepton-violating soft bilinear coupling is unconventionally (but not unnaturally) small. The values of the right-handed neutrino masses predicted by soft leptogenesis can be low enough to evade the cosmological gravitino problem

  8. Study of the Dependence of Direct Soft Photon Production on the Jet Characteristics in Hadronic $Z^0$ Decays

    CERN Document Server

    Abdallah, J; Adam, W; Adzic, P; Albrecht, T; Alemany-Fernandez, R; Allmendinger, T; Allport, P P; Amaldi, U; Amapane, N; Amato, S; Anashkin, E; Andreazza, A; Andringa, S; Anjos, N; Antilogus, P; Apel, W-D; Arnoud, Y; Ask, S; Asman, B; Augustin, J E; Augustinus, A; Baillon, P; Ballestrero, A; Bambade, P; Barbier, R; Bardin, D; Barker, G J; Baroncelli, A; Battaglia, M; Baubillier, M; Becks, K-H; Begalli, M; Behrmann, A; Ben-Haim, E; Benekos, N; Benvenuti, A; Berat, C; Berggren, M; Bertrand, D; Besancon, M; Besson, N; Bloch, D; Blom, M; Bluj, M; Bonesini, M; Boonekamp, M; Booth, P S L; Borisov, G; Botner, O; Bouquet, B; Bowcock, T J V; Boyko, I; Bracko, M; Brenner, R; Brodet, E; Bruckman, P; Brunet, J M; Buschbeck, B; Buschmann, P; Calvi, M; Camporesi, T; Canale, V; Carena, F; Castro, N; Cavallo, F; Chapkin, M; Charpentier, Ph; Checchia, P; Chierici, R; Chliapnikov, P; Chudoba, J; Chung, S U; Cieslik, K; Collins, P; Contri, R; Cosme, G; Cossutti, F; Costa, M J; Crennell, D; Cuevas, J; D'Hondt, J; da Silva, T; Da Silva, W; Della Ricca, G; De Angelis, A; De Boer, W; De Clercq, C; De Lotto, B; De Maria, N; De Min, A; de Paula, L; Di Ciaccio, L; Di Simone, A; Doroba, K; Drees, J; Eigen, G; Ekelof, T; Ellert, M; Elsing, M; Espirito Santo, M C; Fanourakis, G; Fassouliotis, D; Feindt, M; Fernandez, J; Ferrer, A; Ferro, F; Flagmeyer, U; Foeth, H; Fokitis, E; Fulda-Quenzer, F; Fuster, J; Gandelman, M; Garcia, C; Gavillet, Ph; Gazis, E; Gokieli, R; Golob, B; Gomez-Ceballos, G; Goncalves, P; Graziani, E; Grosdidier, G; Grzelak, K; Guy, J; Haag, C; Hallgren, A; Hamacher, K; Hamilton, K; Haug, S; Hauler, F; Hedberg, V; Hennecke, M; Hoffman, J; Holmgren, S-O; Holt, P J; Houlden, M A; Jackson, J N; Jarlskog, G; Jarry, P; Jeans, D; Johansson, E K; Jonsson, P; Joram, C; Jungermann, L; Kapusta, F; Katsanevas, S; Katsoufis, E; Kernel, G; Kersevan, B P; Kerzel, U; King, B T; Kjaer, N J; Kluit, P; Kokkinias, P; Kourkoumelis, C; Kouznetsov, O; Krumstein, Z; Kucharczyk, M; Lamsa, J; 
Leder, G; Ledroit, F; Leinonen, L; Leitner, R; Lemonne, J; Lepeltier, V; Lesiak, T; Liebig, W; Liko, D; Lipniacka, A; Lopes, J H; Lopez, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J; Malek, A; Maltezos, S; Mandl, F; Marco, J; Marco, R; Marechal, B; Margoni, M; Marin, J-C; Mariotti, C; Markou, A; Martinez-Rivero, C; Masik, J; Mastroyiannopoulos, N; Matorras, F; Matteuzzi, C; Mazzucato, F; Mazzucato, M; Mc Nulty, R; Meroni, C; Migliore, E; Mitaroff, W; Mjoernmark, U; Moa, T; Moch, M; Moenig, K; Monge, R; Montenegro, J; Moraes, D; Moreno, S; Morettini, P; Mueller, U; Muenich, K; Mulders, M; Mundim, L; Murray, W; Muryn, B; Myatt, G; Myklebust, T; Nassiakou, M; Navarria, F; Nawrocki, K; Nemecek, S; Nicolaidou, R; Nikolenko, M; Oblakowska-Mucha, A; Obraztsov, V; Olshevski, A; Onofre, A; Orava, R; Österberg, K; Ouraou, A; Oyanguren, A; Paganoni, M; Paiano, S; Palacios, J P; Palka, H; Papadopoulou, Th D; Pape, L; Parkes, C; Parodi, F; Parzefall, U; Passeri, A; Passon, O; Peralta, L; Perepelitsa, V; Perrotta, A; Petrolini, A; Piedra, J; Pieri, L; Pierre, F; Pimenta, M; Piotto, E; Podobnik, T; Poireau, V; Pol, M E; Polok, G; Pozdniakov, V; Pukhaeva, N; Pullia, A; Radojicic, D; Rebecchi, P; Rehn, J; Reid, D; Reinhardt, R; Renton, P; Richard, F; Ridky, J; Rivero, M; Rodriguez, D; Romero, A; Ronchese, P; Roudeau, P; Rovelli, T; Ruhlmann-Kleider, V; Ryabtchikov, D; Sadovsky, A; Salmi, L; Salt, J; Sander, C; Savoy-Navarro, A; Schwickerath, U; Sekulin, R; Siebel, M; Sisakian, A; Smadja, G; Smirnova, O; Sokolov, A; Sopczak, A; Sosnowski, R; Spassov, T; Stanitzki, M; Stocchi, A; Strauss, J; Stugu, B; Szczekowski, M; Szeptycka, M; Szumlak, T; Tabarelli, T; Tegenfeldt, F; Timmermans, J; Tkatchev, L; Tobin, M; Todorovova, S; Tome, B; Tonazzo, A; Tortosa, P; Travnicek, P; Treille, D; Tristram, G; Trochimczuk, M; Troncon, C; Turluer, M-L; Tyapkin, I A; Tyapkin, P; Tzamarias, S; Uvarov, V; Valenti, G; Van Dam, P; Van Eldik, J; van Remortel, N; Van Vulpen, I; Vegni, G; Veloso, F; 
Venus, W; Verdier, P; Verzi, V; Vilanova, D; Vitale, L; Vrba, V; Wahlen, H; Washbrook, A J; Weiser, C; Wicke, D; Wickens, J; Wilkinson, G; Winter, M; Witek, M; Yushchenko, O; Zalewska, A; Zalewski, P; Zavrtanik, D; Zhuravlov, V; Zimin, N I; Zintchenko, A; Zupan, M

    2010-01-01

    An analysis of the direct soft photon production rate as a function of the parent jet characteristics is presented, based on hadronic events collected by the DELPHI experiment at LEP1. The dependences of the photon rates on the jet kinematic characteristics (momentum, mass, etc.) and on the jet charged, neutral and total hadron multiplicities are reported. Up to a scale factor of about four, which characterizes the overall value of the soft photon excess, a similarity of the observed soft photon behaviour to that of the inner hadronic bremsstrahlung predictions is found for the momentum, mass, and jet charged multiplicity dependences. However for the dependence of the soft photon rate on the jet neutral and total hadron multiplicities a prominent difference is found for the observed soft photon signal as compared to the expected bremsstrahlung from final state hadrons. The observed linear increase of the soft photon production rate with the jet total hadron multiplicity and its strong dependence on the jet ne...

  9. Soft Robotics.

    Science.gov (United States)

    Whitesides, George M

    2018-04-09

    This description of "soft robotics" is not intended to be a conventional review, in the sense of a comprehensive technical summary of a developing field. Rather, its objective is to describe soft robotics as a new field-one that offers opportunities to chemists and materials scientists who like to make "things" and to work with macroscopic objects that move and exert force. It will give one (personal) view of what soft actuators and robots are, and how this class of soft devices fits into the more highly developed field of conventional "hard" robotics. It will also suggest how and why soft robotics is more than simply a minor technical "tweak" on hard robotics and propose a unique role for chemistry, and materials science, in this field. Soft robotics is, at its core, intellectually and technologically different from hard robotics, both because it has different objectives and uses and because it relies on the properties of materials to assume many of the roles played by sensors, actuators, and controllers in hard robotics. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. MEASUREMENTS OF THE SOFT GAMMA-RAY EMISSION FROM SN2014J WITH SUZAKU

    Energy Technology Data Exchange (ETDEWEB)

    Terada, Y. [Graduate School of Science and Engineering, Saitama University, 255 Shimo-Ohkubo, Sakura, Saitama 338-8570 (Japan); Maeda, K.; Ueda, Y.; Enoto, T. [Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan); Fukazawa, Y. [Department of Physical Science, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Bamba, A. [Department of Physics and Mathematics, Aoyama Gakuin University, 5-10-1 Fuchinobe Chuo-ku, Sagamihara, Kanagawa 252-5258 (Japan); Katsuda, S. [Department of Physics, Faculty of Science and Engineering, Chuo University, 1-13-27 Kasuga, Bunkyo, Tokyo 112-8551 (Japan); Takahashi, T. [Institute of Space and Astronautical Science, Japan Aerospace eXploration Agency, 3-1-1 Yoshinodai, Sagamihara, Kanagawa 252-5210 (Japan); Tamagawa, T. [RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Röpke, F. K. [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Philosophenweg 12, D-69120 Heidelberg (Germany); Summa, A. [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany); Diehl, R., E-mail: terada@phy.saitama-u.ac.jp [Max-Planck-Institut für extraterrestrische Physik, D-85741, Garching (Germany)

    2016-05-20

    The hard X-ray detector (HXD) on board Suzaku measured soft γ-rays from the Type Ia supernova SN2014J at 77 ± 2 days after the explosion. Although the confidence level of the signal is about 90% (i.e., 2σ), the 3σ upper limit has been derived at <2.2 × 10⁻⁴ ph s⁻¹ cm⁻² in the 170–250 keV band as the first independent measurement of soft γ-rays with an instrument other than INTEGRAL. For this analysis, we have examined the reproducibility of the NXB model of HXD/GSO using blank-sky data. We find that the residual count rate in the 90–500 keV band is distributed around an average of 0.19% with a standard deviation of 0.42% relative to the NXB rate. The averaged residual signals are consistent with that expected from the cosmic X-ray background. The flux of SN2014J derived from Suzaku measurements taken in one snapshot at t = 77 ± 2 days after the explosion is consistent with the INTEGRAL values averaged over the period between t = 50 and 100 days and also with explosion models of single or double degenerate scenarios. Being sensitive to the total ejecta mass surrounding the radioactive material, the ratio between continuum and line flux in the soft γ-ray regime might distinguish different progenitor models. The Suzaku data have been examined with this relation at t = 77 ± 2 days, but could not distinguish between single and double degenerate progenitor models. We disfavor explosion models with ⁵⁶Ni masses larger than 1 M⊙, from our 1σ error on the 170–250 keV X-ray flux of (1.2 ± 0.7) × 10⁻⁴ ph s⁻¹ cm⁻².

  11. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob; Uysal, Murat; Tsiftsis, Theodoros A.

    2014-01-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.
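    The quoted diversity-order bound can be evaluated directly for hypothetical channel-impulse-response lengths:

```python
# Evaluate the diversity-order bound quoted in the abstract,
# (L_SkD + 1) + sum_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1),
# for hypothetical channel-impulse-response (CIR) lengths.

def diversity_order(l_sd, l_sr, l_rd):
    """l_sd: source->destination CIR length; l_sr, l_rd: per-relay lists of
    source->relay and relay->destination CIR lengths."""
    assert len(l_sr) == len(l_rd), "one (l_sr, l_rd) pair per relay"
    return (l_sd + 1) + sum(min(a + 1, b + 1) for a, b in zip(l_sr, l_rd))

# Two relays: each relay path contributes through its weaker hop.
assert diversity_order(2, [3, 1], [2, 4]) == 8  # 3 + min(4, 3) + min(2, 5)
```

    The min term captures the intuition that a relay path can contribute no more diversity than its weaker hop supports.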

  13. Postoperative radiation boost does not improve local recurrence rates in extremity soft tissue sarcomas

    International Nuclear Information System (INIS)

    Alamanda, Vignesh K.; Schwartz, Herbert S.; Holt, Ginger E.; Song, Yanna; Shinohara, Eric

    2014-01-01

    The standard of care for extremity soft tissue sarcomas continues to be negative-margin limb salvage surgery. Radiotherapy is frequently used as an adjunct to decrease local recurrence. No differences in survival have been found between preoperative and postoperative radiotherapy regimens. However, it is uncertain if the use of a postoperative boost in addition to preoperative radiotherapy reduces local recurrence rates. This retrospective review evaluated patients who received preoperative radiotherapy (n = 49) and patients who received preoperative radiotherapy with a postoperative boost (n = 45). The primary endpoint analysed was local recurrence, with distant metastasis and death due to sarcoma analysed as secondary endpoints. Wilcoxon rank-sum test and either χ² or Fisher's exact test were used to compare variables. Multivariable regression analyses were used to take into account potential confounders and identify variables that affected outcomes. No differences in the proportion or rate of local recurrence, distant metastasis or death due to sarcoma were observed between the two groups (P>0.05). The two groups were similarly matched with respect to demographics such as age, race and sex and tumour characteristics including excision status, tumour site, size, depth, grade, American Joint Committee on Cancer stage, chemotherapy receipt and histological subtype (P>0.05). The postoperative boost group had a larger proportion of patients with positive microscopic margins (62% vs 10%; P<0.001). No differences in rates of local recurrence, distant metastasis or death due to sarcoma were found in patients who received both pre- and postoperative radiotherapy when compared with those who received only preoperative radiotherapy.

  14. Prevalence of bone and soft tissue tumors.

    Science.gov (United States)

    Yücetürk, Güven; Sabah, Dündar; Keçeci, Burçin; Kara, Ahmet Duran; Yalçinkaya, Selçuk

    2011-01-01

    A multidisciplinary approach is a necessity for the appropriate diagnosis and treatment of bone and soft tissue tumors. The Ege University Musculoskeletal Tumor Council offers consultation services to other hospitals in the Aegean region. Since 1988 the Council has met weekly and spent approximately 1,500 hours evaluating almost 6,000 patients with suspected skeletal system tumors. Our objective was to present the data obtained from this patient group. A total of 5,658 patients, suspected to have a musculoskeletal tumor, were evaluated retrospectively. Multiple records of patients due to repeated attendance at the Council were excluded. The prevalence of bone and soft tissue tumors in these patients was analysed. Malignant mesenchymal tumors accounted for 39.7% of the total patients, benign tumors for 17%, tumor-like lesions for 17.8% and metastatic carcinomas for 8.6%. Malignant bone tumors made up 50.2% and malignant soft tissue tumors 49.8% of all the sarcomas. Among the malignant bone tumors the most common was osteosarcoma at a rate of 33.6%, followed by Ewing-PNET at 25.5%, chondrosarcoma at 19.4% and haematopoietic tumors at 17.6%. Pleomorphic sarcomas (24.5%), liposarcoma (16.4%), synovial sarcoma (13%) and undifferentiated sarcomas (8.8%) were the most common types of malignant soft tissue tumors. Benign soft tissue tumors (48%), benign cartilage tumors (28%), giant cell tumor (15%) and osteogenic tumors (9%) were found among the benign tumors. Hemangioma, lipoma, aggressive fibromatosis, enchondroma, solitary chondroma and osteoid osteoma were the most common tumors in their groups. Lung (27%), breast (24%), gastrointestinal system (10.5%) and kidney (8.2%) carcinomas were the most common primary sites of the bone metastases. Turkey still lacks a comprehensive series indicating the incidence and diagnostic distribution of bone and soft tissue tumors. The presented data would add to our knowledge on the specific rates of the bone and soft tissue

  15. Improved read disturb and write error rates in voltage-control spintronics memory (VoCSM) by controlling energy barrier height

    Science.gov (United States)

    Inokuchi, T.; Yoda, H.; Kato, Y.; Shimizu, M.; Shirotori, S.; Shimomura, N.; Koi, K.; Kamiguchi, Y.; Sugiyama, H.; Oikawa, S.; Ikegami, K.; Ishikawa, M.; Altansargai, B.; Tiwari, A.; Ohsawa, Y.; Saito, Y.; Kurobe, A.

    2017-06-01

    A hybrid writing scheme that combines the spin Hall effect and voltage-controlled magnetic-anisotropy effect is investigated in Ta/CoFeB/MgO/CoFeB/Ru/CoFe/IrMn junctions. The write current and control voltage are applied to Ta and CoFeB/MgO/CoFeB junctions, respectively. The critical current density required for switching the magnetization in CoFeB was modulated 3.6-fold by changing the control voltage from -1.0 V to +1.0 V. This modulation of the write current density is explained by the change in the surface anisotropy of the free layer from 1.7 mJ/m² to 1.6 mJ/m², which is caused by the electric field applied to the junction. The read disturb rate and write error rate, which are important performance parameters for memory applications, are drastically improved, and no error was detected in 5 × 10⁸ cycles by controlling read and write sequences.

  16. Long-term storage method for soft X-ray irradiated 'Hyuganatsu' pollen

    International Nuclear Information System (INIS)

    Yano, S.; Tanaka, M.; Ohara, N.

    2008-01-01

    The long-term storage conditions for 'Hyuganatsu' pollen that had been irradiated with soft X-rays were examined. This study was aimed at production of 'Tosa-buntan' without formation of seeded fruit. 1. We evaluated the germination rate of pollen that had been irradiated with soft X-rays (500 or 1,000 Gy) and stored at 3°C, -20°C, and -40°C. The germination rate was the same as that of unirradiated pollen, even after storage for 1 year. Soft X-ray irradiation did not influence the storage attributes of pollen. 2. In unirradiated pollen and pollen that had been irradiated with soft X-rays (500 or 1,000 Gy), storage from 3 months to 1 year required temperatures of -20°C or lower, and pollen stored at -40°C had a higher germination rate after 1 year. 3. The germination rate fell to 1% or less within 4 months if silica gel was sealed into a gas-barrier bag with 1,000 Gy-irradiated pollen at a rate of 10:1 (w/w). The ability to germinate was completely lost after 1 year under these conditions. 4. We evaluated the effect of sealing methods on 1,000 Gy-irradiated pollen stored at -20°C. There was no difference in germination rates among pollen stored in gas-barrier bags, vacuum-packaged pollen, and pollen stored with nitrogen in gas-barrier bags. Moreover, the germination rate of 750 Gy-irradiated pollen stored at -20°C decreased from 3 months onwards when pollen was stored with a free-oxygen absorber (Ageless ZP). 5. Pollen that was treated with acetone before or after soft X-ray irradiation (750 Gy) withstood long-term storage of 1 year. Long-term storage was possible if pollen was stored at -20°C, as is the case for rough pollen

  17. Portrayals of branded soft drinks in popular American movies: a content analysis.

    Science.gov (United States)

    Cassady, Diana; Townsend, Marilyn; Bell, Robert A; Watnik, Mitchell

    2006-03-09

    This study examines the portrayals of soft drinks in popular American movies as a potential vehicle for global marketing and an indicator of covert product placement. We conducted a content analysis of America's top-ten grossing films from 1991 through 2000 that included portrayals of beverages (95 movies total). Coding reliabilities were assessed with Cohen's kappa, and exceeded 0.80. If there was at least one instance of branding for a beverage, the film was considered to have branded beverages. Fisher's exact test was used to determine if soft drink portrayals were related to audience rating or genre. Data on the amount of time soft drinks appeared onscreen were log transformed to satisfy the assumption of normality, and analyzed using a repeated measures ANOVA model. McNemar's test of agreement was used to test whether branded soft drinks are as likely to appear or to be actor-endorsed compared to other branded beverages. Rating was not associated with portrayals of branded soft drinks, but comedies were most likely to include a branded soft drink (p = 0.0136). Branded soft drinks appeared more commonly than other branded non-alcoholic beverages (p = 0.0001), branded beer (p = 0.0004), and other branded alcoholic beverages (p = 0.0006). Actors consumed branded soft drinks in five times the number of movies compared to their consumption of other branded non-alcoholic beverages (p = 0.0126). About half the revenue from the films with portrayals of branded soft drinks came from film sales outside the U.S. The frequent appearance of branded soft drinks provides indirect evidence that product placement is a common practice for American-produced films shown in the U.S. and other countries.

  18. Compensating additional optical power in the central zone of a multifocal contact lens for minimization of the shrinkage error of the shell mold in the injection molding process.

    Science.gov (United States)

    Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei

    2018-04-20

    This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process to obtain uniform optical power in the central optical zone of soft axially symmetric multifocal contact lenses (CL). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens in the IM process, can be minimized by optimizing IM process parameters and then compensating with additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time, in 18 IM simulations based on an orthogonal array L18(2¹ × 3⁴). Then, based on the Z-shrinkage error from the IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from the IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the closest shrinkage profile to the original anterior SM profile, with a 55% reduction in absolute Z-shrinkage error and more uniform power in the central zone than in the other two cases. Moreover, actual IM experiments of SMs for casting soft multifocal CLs have been performed. Final wet CLs have been produced for both the original design and the new design. Results of the optical performance have verified the improvement achieved by the compensated CL design. The feasibility of this compensating method has been proven by the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of soft multifocal CLs.

  19. Error in the delivery of radiation therapy: Results of a quality assurance review

    International Nuclear Information System (INIS)

    Huang, Grace; Medlam, Gaylene; Lee, Justin; Billingsley, Susan; Bissonnette, Jean-Pierre; Ringash, Jolie; Kane, Gabrielle; Hodgson, David C.

    2005-01-01

    Purpose: To examine error rates in the delivery of radiation therapy (RT), technical factors associated with RT errors, and the influence of a quality improvement intervention on the RT error rate. Methods and materials: We undertook a review of all RT errors that occurred at the Princess Margaret Hospital (Toronto) from January 1, 1997, to December 31, 2002. Errors were identified according to incident report forms that were completed at the time the error occurred. Error rates were calculated per patient, per treated volume (≥1 volume per patient), and per fraction delivered. The association between tumor site and error was analyzed. Logistic regression was used to examine the association between technical factors and the risk of error. Results: Over the study interval, there were 555 errors among 28,136 patient treatments delivered (error rate per patient = 1.97%; 95% confidence interval [CI], 1.81-2.14%) and among 43,302 treated volumes (error rate per volume = 1.28%; 95% CI, 1.18-1.39%). The proportion of fractions with errors from July 1, 2000, to December 31, 2002, was 0.29% (95% CI, 0.27-0.32%). Patients with sarcoma or head-and-neck tumors experienced error rates significantly higher than average (5.54% and 4.58%, respectively); however, when the number of treated volumes was taken into account, the head-and-neck error rate was no longer higher than average (1.43%). The use of accessories was associated with an increased risk of error, and internal wedges were more likely to be associated with an error than external wedges (relative risk = 2.04; 95% CI, 1.11-3.77). Eighty-seven errors (15.6%) were directly attributed to incorrect programming of the 'record and verify' system. Changes to planning and treatment processes aimed at reducing errors within the head-and-neck site group produced a substantial reduction in the error rate. Conclusions: Errors in the delivery of RT are uncommon and usually of little clinical significance. Patient subgroups and
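
    The per-patient error rate and confidence interval quoted above can be reproduced with a simple Wald (normal-approximation) interval for a proportion; a minimal sketch in Python, using the reported counts:

```python
import math

def error_rate_ci(errors, total, z=1.96):
    """Point estimate and Wald (normal-approximation) 95% CI for an error proportion."""
    p = errors / total
    se = math.sqrt(p * (1.0 - p) / total)
    return p, p - z * se, p + z * se

# Reported counts: 555 errors among 28,136 patient treatments
p, lo, hi = error_rate_ci(555, 28136)
print(f"{p:.2%} (95% CI {lo:.2%}-{hi:.2%})")  # -> 1.97% (95% CI 1.81%-2.14%)
```

    The same function applied to 555/43,302 treated volumes reproduces the per-volume figures as well, which suggests the paper's intervals are normal-approximation intervals.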

  20. Errors in laboratory medicine: practical lessons to improve patient safety.

    Science.gov (United States)

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification

  1. SUPRA SOFT SEPARATION AXIOMS AND SUPRA IRRESOLUTENESS BASED ON SUPRA B-SOFT SETS

    OpenAIRE

    Abd El-latif, Alaa Mohamed; Hosny, Rodyna Ahmed

    2016-01-01

    This paper introduces supra soft b-separation axioms based on the supra b-open soft sets, which are more general than supra open soft sets. We investigate the relationships between these supra soft separation axioms. Furthermore, with the help of examples it is established that the converse does not hold. We show that a supra soft topological space (X, t, E) is a supra soft b-T1-space if xE is a supra b-closed soft set for each x ∈ X. Also, we prove that xE is a supra b-closed soft set for each ...

  2. Soft tissue sarcoma - diagnosis and treatment

    International Nuclear Information System (INIS)

    Ruka, W.; Rutkowski, P.; Krzakowski, M.

    2009-01-01

    Significant progress in the treatment of soft tissue sarcoma (STS), both of the primary tumor and of local recurrences/metastatic disease, has been achieved in recent years. Surgery is the essential modality, but combined treatment (the standard combination of surgery with adjuvant radiotherapy, chemotherapy in selected cases, and perioperative rehabilitation) in highly experienced centers has increased the possibility of cure and limited the extent of local surgery. Current combined therapy, together with the use of reconstructive methods, allows limb-sparing surgery in the majority of soft tissue sarcoma patients (amputation in 10% of cases, compared to approximately 50% in the 1960s-70s). A slow but constant increase in the rate of soft tissue sarcoma patients with long-term survival has been observed. The contemporary 5-year overall survival rate in patients with extremity soft tissue sarcomas is 55-78%. When metastatic disease is diagnosed, the prognosis is still poor (survival of approximately 1 year). Good results of local therapy may be expected only after planned (e.g., after preoperative tru-cut or incisional biopsy) radical surgical excision of the primary tumor with pathologically negative margins (R0 resection). Following an appropriate diagnostic work-up, adjuvant radiotherapy is necessary in the majority of patients treated with radical surgery, and long-term rehabilitation and follow-up examinations in the treating center are needed for at least 5 years. The progress is due to the introduction of targeted therapy acting on molecular or genetic cellular disturbances detected during studies on the etiopathogenetic mechanisms of sarcoma subtypes. In view of the rarity of sarcomas and the necessity of multidisciplinary therapy, the crucial issue is that management of these tumors should be carried out in experienced oncological sarcoma centers. (authors)

  3. Soft lubrication

    Science.gov (United States)

    Skotheim, Jan; Mahadevan, Laksminarayanan

    2004-11-01

    We study the lubrication of fluid-immersed soft interfaces and show that elastic deformation couples tangential and normal forces and thus generates lift. We consider materials that deform easily, due to either geometry (e.g., a shell) or constitutive properties (e.g., a gel or a rubber), so that the effects of pressure and temperature on the fluid properties may be neglected. Four different system geometries are considered: a rigid cylinder moving tangentially to a soft layer coating a rigid substrate; a soft cylinder moving tangentially to a rigid substrate; a cylindrical shell moving tangentially to a rigid substrate; and finally a journal bearing coated with a thin soft layer, which, being a conforming contact, allows us to gauge the influence of contact geometry. In addition, for the particular case of a soft layer coating a rigid substrate, we consider both elastic and poroelastic material responses. For all these cases we find the same generic behavior: there is an optimal combination of geometric and material parameters that maximizes the dimensionless normal force as a function of the softness.

  4. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
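
    The abstract does not give the internal form of the Bezier approximation networks, but their basic building block, evaluating a Bezier curve from control values, can be sketched with De Casteljau's algorithm. The control values and the depth-versus-laser-line-position interpretation below are hypothetical, for illustration only:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve with the given control values at parameter t in [0, 1]."""
    pts = list(ctrl)
    while len(pts) > 1:
        # Repeated linear interpolation between adjacent points
        pts = [(1.0 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical calibration output: depth (mm) as a cubic Bezier function
# of the normalized laser line position u
ctrl = [0.0, 0.8, 1.5, 2.0]
depths = [de_casteljau(ctrl, u / 10.0) for u in range(11)]
print(round(de_casteljau(ctrl, 0.5), 4))  # -> 1.1125
```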

  5. Decreasing patient identification band errors by standardizing processes.

    Science.gov (United States)

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors, although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012, with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of the styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited, with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5), a statistically significant reduction in error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.

  6. Soft Congruence Relations over Rings

    Science.gov (United States)

    Xin, Xiaolong; Li, Wenting

    2014-01-01

    Molodtsov introduced the concept of soft sets, which can be seen as a new mathematical tool for dealing with uncertainty. In this paper, we initiate the study of soft congruence relations by using the soft set theory. The notions of soft quotient rings, generalized soft ideals and generalized soft quotient rings, are introduced, and several related properties are investigated. Also, we obtain a one-to-one correspondence between soft congruence relations and idealistic soft rings and a one-to-one correspondence between soft congruence relations and soft ideals. In particular, the first, second, and third soft isomorphism theorems are established, respectively. PMID:24949493

  7. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  8. Soft Tissue Sarcoma

    Science.gov (United States)

    ... muscles, tendons, fat, and blood vessels. Soft tissue sarcoma is a cancer of these soft tissues. There ... have certain genetic diseases. Doctors diagnose soft tissue sarcomas with a biopsy. Treatments include surgery to remove ...

  9. Portrayals of branded soft drinks in popular American movies: a content analysis

    Directory of Open Access Journals (Sweden)

    Bell Robert A

    2006-03-01

    Full Text Available Abstract Background This study examines the portrayals of soft drinks in popular American movies as a potential vehicle for global marketing and an indicator of covert product placement. Methods We conducted a content analysis of America's top-ten grossing films from 1991 through 2000 that included portrayals of beverages (95 movies total). Coding reliabilities were assessed with Cohen's kappa, and exceeded 0.80. If there was at least one instance of branding for a beverage, the film was considered to have branded beverages. Fisher's exact test was used to determine if soft drink portrayals were related to audience rating or genre. Data on the amount of time soft drinks appeared onscreen were log transformed to satisfy the assumption of normality, and analyzed using a repeated measures ANOVA model. McNemar's test of agreement was used to test whether branded soft drinks are as likely to appear or to be actor-endorsed compared to other branded beverages. Results Rating was not associated with portrayals of branded soft drinks, but comedies were most likely to include a branded soft drink (p = 0.0136). Branded soft drinks appeared more commonly than other branded non-alcoholic beverages (p = 0.0001), branded beer (p = 0.0004), and other branded alcoholic beverages (p = 0.0006). Actors consumed branded soft drinks in five times the number of movies compared to their consumption of other branded non-alcoholic beverages (p = 0.0126). About half the revenue from the films with portrayals of branded soft drinks came from film sales outside the U.S. Conclusion The frequent appearance of branded soft drinks provides indirect evidence that product placement is a common practice for American-produced films shown in the U.S. and other countries.

  10. Error and discrepancy in radiology: inevitable or avoidable?

    Science.gov (United States)

    Brady, Adrian P

    2017-02-01

    Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised. • Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate to negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.

  11. Error analysis in predictive modelling demonstrated on mould data.

    Science.gov (United States)

    Baranyi, József; Csernus, Olívia; Beczner, Judit

    2014-01-17

    The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of the error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rate. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability, inherent to the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth, too. © 2013.
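
    The within-group/between-groups decomposition described above can be illustrated with a small sketch; the replicate ln(growth-rate) values and the (temperature, water-activity) environments below are invented for illustration, not the paper's data:

```python
import statistics

# Invented replicate ln(growth-rate) estimates at three (temperature C,
# water activity) environments; three parallel curves per environment
groups = {
    (25, 0.95): [0.42, 0.45, 0.40],
    (30, 0.95): [0.61, 0.58, 0.63],
    (30, 0.90): [0.33, 0.30, 0.35],
}

# "Within group" variance: experimental variability inherent to the method
within = statistics.mean(statistics.variance(v) for v in groups.values())

# "Between groups" variance of the replicate means: environmental variability
means = [statistics.mean(v) for v in groups.values()]
between = statistics.variance(means)

print(f"experimental (within):   {within:.6f}")
print(f"environmental (between): {between:.6f}")
```

    Ranking these two variances against the secondary model's fitting error is then a matter of comparing their magnitudes, as the paper does for its total error of prediction.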

  12. Design of a Soft Robot with Multiple Motion Patterns Using Soft Pneumatic Actuators

    Science.gov (United States)

    Miao, Yu; Dong, Wei; Du, Zhijiang

    2017-11-01

    Soft robots are made of soft materials and have good flexibility and infinite degrees of freedom in theory. These properties enable soft robots to work in narrow space and adapt to external environment. In this paper, a 2-DOF soft pneumatic actuator is introduced, with two chambers symmetrically distributed on both sides and a jamming cylinder along the axis. Fibers are used to constrain the expansion of the soft actuator. Experiments are carried out to test the performance of the soft actuator, including bending and elongation characteristics. A soft robot is designed and fabricated by connecting four soft pneumatic actuators to a 3D-printed board. The soft robotic system is then established. The pneumatic circuit is built by pumps and solenoid valves. The control system is based on the control board Arduino Mega 2560. Relay modules are used to control valves and pressure sensors are used to measure pressure in the pneumatic circuit. Experiments are conducted to test the performance of the proposed soft robot.

  13. 16-bit error detection and correction (EDAC) controller design using FPGA for critical memory applications

    International Nuclear Information System (INIS)

    Misra, M.K.; Sridhar, N.; Krishnakumar, B.; Ilango Sambasivan, S.

    2002-01-01

    Full text: Complex electronic systems require the utmost reliability; especially when the storage and retrieval of critical data demand faultless operation, the system designer must strive for the highest reliability possible, and extra effort must be expended to achieve it. Fortunately, not all systems must operate with these ultra-reliability requirements. The majority of systems operate in an area where system failure is not hazardous. But applications such as nuclear reactors, medical devices, and avionics are areas where system failure may have harsh consequences. High-density memories generate errors in their stored data due to external disturbances like power supply surges, system noise, natural radiation etc. These errors are called soft errors or transient errors, since they do not cause permanent damage to the memory cell. Hard errors may also occur on system memory boards. These hard errors occur if one RAM component or RAM cell fails and is stuck at either 0 or 1. Although less frequent, hard errors may cause a complete system failure. These are the major problems associated with memories
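
    A standard way to correct the single-bit soft errors described above is a Hamming code. The abstract does not specify which scheme the FPGA controller implements, but a minimal single-error-correcting Hamming code for a 16-bit word (5 parity bits, 21-bit codeword) can be sketched as:

```python
R = 5                 # parity bits: 2**R >= 16 + R + 1
N = 16 + R            # 21-bit SEC codeword, positions 1..21

def encode(data):
    """Place 16 data bits at non-power-of-two positions, then set parity bits."""
    code = [0] * (N + 1)                               # index 0 unused
    data_pos = [i for i in range(1, N + 1) if i & (i - 1)]
    for k, p in enumerate(data_pos):
        code[p] = (data >> k) & 1
    for r in range(R):
        pbit = 1 << r                                  # parity at position 2**r
        code[pbit] = sum(code[i] for i in range(1, N + 1) if i & pbit) & 1
    return code

def decode(code):
    """Return (data, error position); position 0 means no error detected."""
    syndrome = 0
    for r in range(R):
        pbit = 1 << r
        if sum(code[i] for i in range(1, N + 1) if i & pbit) & 1:
            syndrome |= pbit                           # syndrome = flipped position
    if syndrome:
        code = code[:]
        code[syndrome] ^= 1                            # correct the single-bit error
    data_pos = [i for i in range(1, N + 1) if i & (i - 1)]
    data = 0
    for k, p in enumerate(data_pos):
        data |= code[p] << k
    return data, syndrome

word = 0xBEEF
cw = encode(word)
cw[9] ^= 1                        # inject a single-bit soft error
fixed, errpos = decode(cw)
print(hex(fixed), errpos)         # -> 0xbeef 9
```

    A hardware EDAC controller would typically add one more overall parity bit so that double-bit errors can be detected rather than miscorrected (SECDED); the sketch above shows single-error correction only.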

  14. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping-using random field theory-reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions-and random field theory-in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  15. Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.

    Science.gov (United States)

    Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun

    2018-01-01

    Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It has been known that while the traditional Finite Element Method (FEM) promises the accurate modeling of soft tissue deformation, it still suffers from a slow computational process. This paper presents a Kalman filter finite element method to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than the traditional FEM and it is still as accurate as the traditional FEM. The normalized root-mean-square error of the proposed KF-FEM in reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noises involved in system state and measurement data.
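
    The KF-FEM approach above couples the FEM equilibrium equation with a Kalman filter's predict/update cycle. The generic cycle it builds on can be sketched for a single scalar state; all values here are illustrative and not from the paper:

```python
import random

def kalman(measurements, q=1e-4, r=0.01):
    """Scalar linear Kalman filter: static state model with process noise
    variance q and measurement noise variance r."""
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                   # predict: static model, inflate uncertainty
        k = p / (p + r)          # update: Kalman gain
        x += k * (z - x)         # blend prediction with measurement
        p *= 1.0 - k
        estimates.append(x)
    return estimates

random.seed(0)
true_disp = 0.5                  # illustrative "true" nodal displacement (mm)
zs = [true_disp + random.gauss(0.0, 0.1) for _ in range(200)]
est = kalman(zs)
print(round(est[-1], 3))         # converges toward 0.5
```

    In the paper's formulation the scalar state is replaced by the vector of nodal displacements and the static model by the Newmark-discretized FEM equations, but the predict/update structure is the same.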

  16. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  17. Soft-sediment mullions

    Science.gov (United States)

    Ortner, Hugo

    2015-04-01

    mullions form. In coarse conglomerates, meter-scale mullions were observed, in sandstones centimeter-scale mullions. There does not seem to exist a relationship to the rate of shortening, as the size of mullions is independent of their position in larger scale folds, or in slump complexes or tectonic folds. Anketell, J.M., Cegla, J. & Dzulynsky, S. (1970): On the deformational structures in systems with reversed density gradients. Ann. Soc. Geol. Pol., 40(1): 3-30. Alsop, G.I., Marco, S., 2014. Fold and fabric relationships in temporally and spatially evolving slump systems: A multi-cell flow model. Jour. Struct. Geol., 63(0): 27-49. Dzulynsky, S. (1966): Sedimentary structures resulting from convection-like pattern of motion. Ann. Soc. Geol. Pol., 36(1): 3-21. Dzulinsky, S. & Simpson, F. (1966): Experiments on interfacial current markings. Geol. Rom., 5: 197 - 214. Ortner, H. (2007): Styles of soft-sediment deformation on top of a growing fold system in the Gosau Group at Muttekopf, Northern Calcareous Alps, Austria: Slumping versus tectonic deformation. Sed. Geol., 196: 99-118. Urai, J.L., Spaeth, G., Van der Zee, W. & Hilger, C. (2001): Evolution of mullion (boudin) structures in the Variscan of the Ardennes and Eifel. Jour. Virt. Expl., 3: 1-16.

  18. Soft Drink Vending Machines in Schools: A Clear and Present Danger

    Science.gov (United States)

    Price, James; Murnan, Judy; Moore, Bradene

    2006-01-01

    This paper examines the availability of soft drinks in schools ("pouring rights contracts") and its effects on the growing nutritional problems of American youth. Of special concern is the prevalence of overweight youth, which has been increasing at alarming rates. There has been a direct relationship found between soft drink consumption and…

  19. Prescribing errors in a Brazilian neonatal intensive care unit

    Directory of Open Access Journals (Sweden)

    Ana Paula Cezar Machado

    2015-12-01

    Full Text Available Abstract Pediatric patients, especially those admitted to the neonatal intensive care unit (ICU), are highly vulnerable to medication errors. This study aimed to measure the prescription error rate in a university hospital neonatal ICU and to identify susceptible patients, types of errors, and the medicines involved. The variables related to the medicines prescribed were compared to the Neofax prescription protocol. The study enrolled 150 newborns and analyzed 489 prescription order forms, with 1,491 medication items, corresponding to 46 drugs. The prescription error rate was 43.5%. Errors were found in dosage, intervals, diluents, and infusion time, distributed across 7 therapeutic classes. Errors were more frequent in preterm newborns. Diluent and dosing were the most frequent sources of errors. The therapeutic classes most involved in errors were antimicrobial agents and drugs that act on the nervous and cardiovascular systems.

  20. Soft Skills in Higher Education: Importance and Improvement Ratings as a Function of Individual Differences and Academic Performance

    Science.gov (United States)

    Chamorro-Premuzic, Tomas; Arteche, Adriane; Bremner, Andrew J.; Greven, Corina; Furnham, Adrian

    2010-01-01

    Three UK studies on the relationship between a purpose-built instrument to assess the importance and development of 15 "soft skills" are reported. "Study 1" (N = 444) identified strong latent components underlying these soft skills, such that differences "between-skills" were overshadowed by differences…

  1. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Soft computing techniques such as Artificial Neural Networks (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to use a soft computing technique and Multiple Regression Models (MLR) to forecast the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil-characterizing parameters with the help of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using an artificial neural network. The liquid limit, plasticity index, modified compaction test, and CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in predicting CBR. The strengths of the developed models were examined in terms of the regression coefficient (R2), relative error (RE%), and mean square error (MSE). From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. An ANN model with all input parameters reveals better outcomes than the other ANN models.
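
    As an illustration of the regression side of this comparison, the sketch below fits a multiple linear regression to a handful of invented index-property rows (the values are not from the 86 Basrah samples) and reports the same goodness-of-fit measures the abstract names: R2, MSE, and RE%.

```python
import numpy as np

# Hypothetical index properties for a handful of soil samples:
# liquid limit (%), plasticity index (%), maximum dry density (g/cm3).
X = np.array([
    [35.0, 14.0, 1.85],
    [42.0, 18.0, 1.78],
    [28.0, 10.0, 1.95],
    [50.0, 24.0, 1.70],
    [31.0, 12.0, 1.90],
    [46.0, 21.0, 1.74],
])
cbr = np.array([9.5, 7.2, 12.8, 5.1, 11.0, 6.0])  # measured CBR (%)

# Multiple linear regression: least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, cbr, rcond=None)
pred = A @ coef

# Goodness-of-fit metrics used in the paper: R2, MSE, relative error (%).
ss_res = np.sum((cbr - pred) ** 2)
ss_tot = np.sum((cbr - cbr.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
mse = ss_res / len(cbr)
re_pct = 100.0 * np.mean(np.abs(cbr - pred) / cbr)
print(f"R2={r2:.3f}  MSE={mse:.3f}  RE%={re_pct:.2f}")
```

    An ANN model would be trained on the same inputs and judged by the same three metrics, which is what makes the paper's comparison possible.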

  2. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just the student, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  3. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just the student, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
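
    The internal-consistency statistic behind the two records above is Cronbach's alpha. A minimal sketch, using invented ratings rather than SSRS data:

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of a multi-item scale.

    items: 2-D array, rows = respondents, columns = scale items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical 4-item frequency-rating subscale for five respondents
# (0-3 Likert-type responses, invented for illustration).
ratings = np.array([
    [2, 3, 2, 3],
    [1, 1, 2, 1],
    [3, 3, 3, 2],
    [0, 1, 1, 1],
    [2, 2, 3, 3],
], dtype=float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```

    An alpha of roughly 0.7 or above is the usual benchmark the abstract alludes to when it says some subscales were "adequate to permit independent use".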

  4. Prediction of embankment settlement over soft soils.

    Science.gov (United States)

    2009-06-01

    The objective of this project was to review and verify the current design procedures used by TxDOT to estimate the total consolidation settlement, and its rate, in embankments constructed on soft soils. Methods to improve the settlement predictions ...
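
    The rate-of-settlement side of such predictions is classically handled with Terzaghi's one-dimensional consolidation theory. A sketch of the standard series solution follows; the embankment settlement value is assumed for illustration, not taken from the TxDOT review.

```python
import math

def degree_of_consolidation(Tv, terms=100):
    """Terzaghi 1-D consolidation: average degree of consolidation U as a
    function of the dimensionless time factor Tv = cv * t / H^2
    (H = drainage path length)."""
    U = 1.0
    for m in range(terms):
        M = math.pi / 2.0 * (2 * m + 1)
        U -= (2.0 / M**2) * math.exp(-(M**2) * Tv)
    return U

# Settlement versus time: s(t) = U(Tv) * s_final for a hypothetical embankment.
s_final = 0.60  # total primary consolidation settlement, m (assumed)
for Tv in (0.05, 0.197, 0.848):
    U = degree_of_consolidation(Tv)
    print(f"Tv={Tv:5.3f}  U={U:.2f}  settlement={U * s_final:.3f} m")
```

    The classical reference values Tv = 0.197 and Tv = 0.848 correspond to 50% and 90% consolidation, respectively.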

  5. Female listeners’ autonomic responses to dramatic shifts between loud and soft music/sound passages: a study of heavy metal songs

    Directory of Open Access Journals (Sweden)

    Tzu-Han Cheng

    2016-02-01

    Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are penetrated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners’ respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRVs than the loud passages of heavy metal songs. Listeners’ respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects listener’s heart rate in response to the following music passage. These findings have potential implications for future research of the temporal dynamics of musical emotions.
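
    The HF-HRV index used in this study is, in essence, the spectral power of the RR-interval series in the 0.15-0.4 Hz band. A minimal sketch on synthetic data follows; the signal and its amplitudes are invented, and a plain FFT periodogram stands in for whatever spectral estimator the authors used.

```python
import numpy as np

# Hypothetical evenly resampled RR-interval series (4 Hz, 5 minutes): a
# 0.30 Hz oscillation mimics respiratory sinus arrhythmia, an HF-band rhythm.
fs = 4.0
t = np.arange(0, 300, 1 / fs)
rr = 0.80 + 0.05 * np.sin(2 * np.pi * 0.30 * t)  # RR intervals, seconds

# Periodogram of the mean-removed series.
x = rr - rr.mean()
psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

lf = band_power(0.04, 0.15)  # low-frequency band
hf = band_power(0.15, 0.40)  # high-frequency (parasympathetic) band
print(f"LF={lf:.4f}  HF={hf:.4f}  HF dominates: {hf > lf}")
```

    With a 0.30 Hz modulation the power falls almost entirely in the HF band, which is the pattern the study associates with parasympathetic activity.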

  6. X-ray scattering of soft matter

    International Nuclear Information System (INIS)

    Stribeck, N.

    2007-01-01

    This coherently written volume summarizes the analytical power of modern X-ray scattering in the field of soft matter. Applications of X-ray scattering to soft matter have advanced considerably within recent years, both conceptually and technically. There are now mature high-power X-ray sources, synchrotrons and rotating anodes, as well as high-speed detectors, which have become readily available and which make the whole process more viable. High-quality time-resolved experiments on polymer structure can now be performed with ease, a major advancement due to the genuine power of the scattering method. This manual is a detailed description of simple tools that can elucidate the mechanisms of structure evolution in the studied materials. It is also a step-by-step guide to more advanced methods of the latest X-ray scattering techniques, and breaks down these methods. Data analysis based on clear, unequivocal results is rendered simple and straightforward - with a stress on the careful planning of experiments and adequate recording of all required data. This book, then, serves as a useful ready-reference guide. It has been written for the modern scientist who is a generalist and needs a concise reference, and demonstrates typical errors in data evaluation. (orig.)

  7. Application of multibounce attenuated total reflectance fourier transform infrared spectroscopy and chemometrics for determination of aspartame in soft drinks.

    Science.gov (United States)

    Khurana, Harpreet Kaur; Cho, Il Kyu; Shim, Jae Yong; Li, Qing X; Jun, Soojin

    2008-02-13

    Aspartame is a low-calorie sweetener commonly used in soft drinks; however, the maximum usage dose is limited by the U.S. Food and Drug Administration. Fourier transform infrared (FTIR) spectroscopy with attenuated total reflectance sampling accessory and partial least-squares regression (PLS) was used for rapid determination of aspartame in soft drinks. On the basis of spectral characterization, the highest R2 value, and lowest PRESS value, the spectral region between 1600 and 1900 cm(-1) was selected for quantitative estimation of aspartame. The potential of FTIR spectroscopy for aspartame quantification was examined and validated by the conventional HPLC method. Using the FTIR method, aspartame contents in four selected carbonated diet soft drinks were found to average from 0.43 to 0.50 mg/mL with prediction errors ranging from 2.4 to 5.7% when compared with HPLC measurements. The developed method also showed a high degree of accuracy because real samples were used for calibration, thus minimizing potential interference errors. The FTIR method developed can be suitably used for routine quality control analysis of aspartame in the beverage-manufacturing sector.

  8. Soft-decision decoding of RS codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of RS codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when two or more...

  9. Data-driven soft sensor design with multiple-rate sampled data: a comparative study

    DEFF Research Database (Denmark)

    Lin, Bao; Recke, Bodil; Schmidt, Torben M.

    2009-01-01

    to design quality soft sensors for cement kiln processes using data collected from a simulator and a plant log system. Preliminary results reveal that the WPLS approach is able to provide accurate one-step-ahead prediction. The regularized data lifting technique predicts the product quality of cement kiln...

  10. Reliability of perceived neighbourhood conditions and the effects of measurement error on self-rated health across urban and rural neighbourhoods.

    Science.gov (United States)

    Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario

    2012-04-01

    Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitude of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.
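
    Regression calibration, as used in this study, corrects the attenuation that covariate measurement error induces in a regression slope. A simulated sketch of the single-covariate case follows; the coefficient and reliability values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# True (latent) neighbourhood-condition score and a health outcome that
# depends on it (coefficients are made up for illustration).
x_true = rng.normal(0.0, 1.0, n)
y = 0.50 * x_true + rng.normal(0.0, 1.0, n)

# Observed score measured with error; reliability = var(true)/var(observed).
noise_var = 0.5
x_obs = x_true + rng.normal(0.0, np.sqrt(noise_var), n)
reliability = 1.0 / (1.0 + noise_var)  # = 2/3 here

# The naive slope is attenuated toward zero; regression calibration
# rescales it by the reliability.
beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
beta_corrected = beta_naive / reliability
print(f"naive={beta_naive:.3f}  corrected={beta_corrected:.3f}  true=0.500")
```

    This is the mechanism behind the abstract's observation that associations grew larger after measurement error correction: lower reliability (as found in the rural stratum) means stronger attenuation of the naive estimate.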

  11. Continuous quantum error correction for non-Markovian decoherence

    International Nuclear Information System (INIS)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-01-01

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics

  12. Bacterial Infection Potato Tuber Soft Rot Disease Detection Based on Electronic Nose

    Directory of Open Access Journals (Sweden)

    Chang Zhiyong

    2017-11-01

    Soft rot is a severe bacterial disease of potatoes, and soft rot infection can cause significant economic losses during the storage period. In this study, potato soft rot was selected as the research object, and an early detection method for potato tuber soft rot disease based on electronic nose technology was proposed. An optimized bionic electronic nose gas chamber and a scientifically designed sampling device were built to detect changes in the volatile substances of potato tubers infected with soft rot disease. Infection of soft rot disease in potato tuber samples was detected and identified using the RBF NN and SVM algorithms. The results revealed that the proposed bionic electronic nose system can be utilized for early detection of potato tuber soft rot disease. Through comparison and analysis, the recognition rate using the SVM algorithm reached 89.7%, and the results were superior to the RBF NN algorithm.
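
    The abstract gives no implementation details, but the SVM side of the comparison can be sketched with a simple Pegasos-style linear SVM on invented two-class "sensor response" data. The real system presumably used richer e-nose features and possibly a kernel; everything below is a stand-in.

```python
import numpy as np

def svm_train(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM
    (hinge loss + L2 regularization, labels in {-1, +1})."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1.0:   # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

# Toy "sensor response" vectors: two gas-signature clusters standing in
# for healthy vs soft-rot-infected tubers (values are invented).
rng = np.random.default_rng(1)
healthy = rng.normal([1.0, 0.2, 0.5], 0.1, (30, 3))
infected = rng.normal([0.4, 0.9, 0.8], 0.1, (30, 3))
X = np.vstack([healthy, infected])
y = np.array([-1] * 30 + [1] * 30)

w, b = svm_train(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(f"training recognition rate: {acc:.1%}")
```

    The "recognition rate" reported by the paper is the same kind of classification accuracy computed on its test samples.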

  13. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    The dictionary plays a key role in the speech recognition accuracy. .... A sophisticated microphone is used for recording the speech corpus in a noise-free environment. .... Word error rate (WER) and error rate are then calculated.
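
    The word error rate mentioned in this snippet is conventionally computed from the Levenshtein distance between the reference and hypothesis word sequences; a minimal sketch:

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + deletions + insertions) / len(ref),
    computed via Levenshtein distance over words."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)

print(wer("the cat sat on the mat".split(),
          "the cat sat in the mat".split()))  # one substitution out of six
```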

  14. A probabilistic model-based soft sensor to monitor lactic acid bacteria fermentations

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2018-01-01

    A probabilistic soft sensor based on a mechanistic model was designed to monitor S. thermophilus fermentations, and validated with experimental lab-scale data. It considered uncertainties in the initial conditions, on-line measurements, and model parameters by performing Monte Carlo simulations within the monitoring system. It predicted, therefore, the probability distributions of the unmeasured states, such as biomass, lactose, and lactic acid concentrations. To this end, a mechanistic model was developed first, and a statistical parameter estimation was performed in order to assess the model parameters, which were then used as input to the mechanistic model. The soft sensor predicted both the current state variables and the future course of the fermentation, e.g. with a relative mean error of the biomass concentration of 8 %. This successful implementation of a process...

  15. Low energy (soft) x rays

    International Nuclear Information System (INIS)

    Hoshi, Masaharu; Antoku, Shigetoshi; Russell, W.J.; Miller, R.C.; Nakamura, Nori; Mizuno, Masayoshi; Nishio, Shoji.

    1987-05-01

    Dosimetry of low-energy (soft) X rays produced by the SOFTEX Model CMBW-2 was performed using Nuclear Associates Type 30-330 PTW, Exradin Type A2, and Shonka-Wyckoff ionization chambers with a Keithley Model 602 electrometer. Thermoluminescent (BeO chip) dosimeters were used with a Harshaw Detector 2000-A and Picoammeter-B readout system. Beam quality measurements were made using aluminum absorbers; exposure rates were assessed by the current of the X-ray tube and by exposure times. Dose distributions were established, and the average factors for non-uniformity were calculated. The means of obtaining accurate absorbed doses and exposures using these methods are discussed. Survival of V79 cells was assessed by irradiating them with soft X rays, 200 kVp X rays, and 60Co gamma rays. The relative biological effectiveness (RBE) values of soft X rays relative to 60Co were 1.6 for added aluminum thicknesses of 0, 0.2, and 0.7 mm. The RBE of 200 kVp X rays relative to 60Co was 1.3. Results of this study are available for reference in future RERF studies of cell survival. (author)

  16. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have

  17. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network topology. By pair-wise iterative interactions, the population reaches a consensus state asymptotically. In this paper, we study naming game with communication errors during pair-wise conversations, where errors are represented by error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed....

  18. Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.

    Science.gov (United States)

    Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C

    2013-12-30

    We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and NLEs over various nonlinear transmissions was demonstrated by optimization of the NLE parameters.

  19. Quality assurance of the international computerised 24 h dietary recall method (EPIC-Soft).

    Science.gov (United States)

    Crispim, Sandra P; Nicolas, Genevieve; Casagrande, Corinne; Knaze, Viktoria; Illner, Anne-Kathrin; Huybrechts, Inge; Slimani, Nadia

    2014-02-01

    The interview-administered 24 h dietary recall (24-HDR) EPIC-Soft® has a series of controls to guarantee the quality of dietary data across countries. These comprise all steps that are part of fieldwork preparation, data collection and data management; however, a complete characterisation of these quality controls is still lacking. The present paper describes in detail the quality controls applied in EPIC-Soft, which are, to a large extent, built on the basis of the EPIC-Soft error model and are present in three phases: (1) before, (2) during and (3) after the 24-HDR interviews. Quality controls for consistency and harmonisation are implemented before the interviews while preparing the seventy databases constituting an EPIC-Soft version (e.g. pre-defined and coded foods and recipes). During the interviews, EPIC-Soft uses a cognitive approach by helping the respondent to recall the dietary intake information in a stepwise manner and includes controls for consistency (e.g. probing questions) as well as for completeness of the collected data (e.g. system calculation for some unknown amounts). After the interviews, a series of controls can be applied by dietitians and data managers to further guarantee data quality. For example, the interview-specific 'note files' that were created to track any problems or missing information during the interviews can be checked to clarify the information initially provided. Overall, the quality controls employed in the EPIC-Soft methodology are not always perceivable, but prove to be of assistance for its overall standardisation and possibly for the accuracy of the collected data.

  20. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
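
    The extrapolation step described above, from weight-conditional failure rates to a block error rate at any small physical error probability, can be sketched as follows. The n = 63 block length and the "measured" failure rates are invented for illustration.

```python
import math

def block_error_rate(p, n, fail_given_weight):
    """Extrapolate the block error rate at physical error probability p
    from measured conditional failure rates P(fail | error weight w).

    n: block length; fail_given_weight: dict {w: P(fail | w)}; weights
    not listed are assumed to fail with probability 1 (pessimistic).
    """
    ber = 0.0
    for w in range(n + 1):
        # Binomial probability that exactly w of the n qubits are hit.
        pw = math.comb(n, w) * p**w * (1 - p) ** (n - w)
        ber += pw * fail_given_weight.get(w, 1.0)
    return ber

# Hypothetical measured failure rates for a small code (n = 63): light
# error patterns are almost always corrected, heavy ones rarely.
measured = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.02, 4: 0.15, 5: 0.55, 6: 0.90}
for p in (1e-2, 1e-3, 1e-4):
    print(f"p={p:.0e}  BER={block_error_rate(p, 63, measured):.3e}")
```

    Because the weight distribution is binomial, measuring P(fail | w) for the low weights that dominate at small p is enough to extrapolate to error probabilities far too small to simulate directly.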

  1. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: By numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or high misclassification, and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity capture two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
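
    Per-haplotype sensitivity and specificity follow directly from the misclassification table of true versus reconstructed haplotypes. A sketch with invented counts, chosen so that it also reproduces the qualitative finding above (specificity suffers for the common haplotype, sensitivity for the rare one):

```python
def haplotype_error_measures(counts):
    """Per-haplotype sensitivity and specificity from a misclassification
    table: counts[(true, reconstructed)] = number of chromosomes."""
    haplos = {h for pair in counts for h in pair}
    out = {}
    for h in sorted(haplos):
        tp = counts.get((h, h), 0)
        fn = sum(v for (t, r), v in counts.items() if t == h and r != h)
        fp = sum(v for (t, r), v in counts.items() if t != h and r == h)
        tn = sum(v for (t, r), v in counts.items() if t != h and r != h)
        out[h] = {"sensitivity": tp / (tp + fn),
                  "specificity": tn / (tn + fp)}
    return out

# Invented misclassification counts: a common haplotype "ACG" and a rare
# "ATG" that is sometimes reconstructed as the common one.
counts = {("ACG", "ACG"): 180, ("ACG", "ATG"): 2,
          ("ATG", "ATG"): 12, ("ATG", "ACG"): 6}
for h, m in haplotype_error_measures(counts).items():
    print(h, {k: round(v, 3) for k, v in m.items()})
```

    Here the rare haplotype's sensitivity drops (true "ATG" often reconstructed as "ACG"), while the common haplotype's specificity drops for exactly the same reason: the two measures are the two dimensions of the misclassification matrix.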

  2. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  3. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    Science.gov (United States)

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate, and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting, and bar-code system), and the medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation, and MANCOVA were used for data analysis. Electronic pharmacopoeias were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving the perception of IT-based system use would add to system construction, and a positive error management climate would be more easily promoted.

  4. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    Science.gov (United States)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10⁻¹²) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10 log10(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10⁻³ BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10⁻³ BER.
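
    The BER dependence of the complexity penalty can be reproduced with a back-of-envelope model: equate the target BER for Gray-coded PAM-L (per-symbol error multiplicity 2(1-1/L), spread over log2 L bits) and for PAM-2, then compare the required Gaussian thresholds. This is a simplified stand-in for the paper's derivation, not the authors' formula.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_inv(p, lo=0.0, hi=10.0):
    """Invert Q(x) = p by bisection (stdlib only, no scipy)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if q_func(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def pam_complexity_penalty_db(L, ber):
    """BER-dependent format-complexity penalty of Gray-coded PAM-L
    relative to PAM-2 (simplified multiplicity argument)."""
    x2 = q_inv(ber)                                      # PAM-2 threshold
    xl = q_inv(ber * math.log2(L) / (2 * (1 - 1 / L)))   # PAM-L threshold
    return 10.0 * math.log10((L - 1) * xl / x2)

conv = 10.0 * math.log10(4 - 1)  # conventional PAM-4 penalty, 4.77 dB
for ber in (1e-12, 1e-3):
    print(f"BER={ber:.0e}: conventional={conv:.2f} dB, "
          f"BER-dependent={pam_complexity_penalty_db(4, ber):.2f} dB")
```

    Under these assumptions the result sits within a few hundredths of a dB of 10 log10(L-1) at 1×10⁻¹², while at 1×10⁻³ the PAM-4 gap is roughly 0.1 dB, consistent with the overestimate quoted in the abstract.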

  5. Energy efficiency of error correction on wireless systems

    NARCIS (Netherlands)

    Havinga, Paul J.M.

    1999-01-01

    Since high error rates are inevitable to the wireless environment, energy-efficient error-control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of an implementation in software.

  6. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.
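
    The internal-consistency values reported above are Cronbach's coefficient alpha, which can be computed directly from an item-score matrix. A minimal sketch with invented toy ratings (not the study's data):

```python
from statistics import variance


def cronbach_alpha(items):
    """Cronbach's coefficient alpha for a set of Likert-type items.

    items: list of k columns, each holding one item's scores across
    respondents.  alpha = k/(k-1) * (1 - sum of item variances /
    variance of respondents' total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))


# toy data: 3 items rated by 4 respondents, perfectly consistent ratings
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(f"alpha = {cronbach_alpha(items):.3f}")  # perfectly correlated items give 1.0
```

    Real item sets, like the 0.70-0.89 values reported above, fall below 1.0 as item responses become less correlated.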

  7. [Dan'e-fukang soft extract for dysmenorrhea: a meta-analysis].

    Science.gov (United States)

    Yu, Kun; Zhang, Zhen-dong; Xiao, Zheng; Wei, Wei; Wang, Zheng-long

    2014-07-01

    To assess the efficacy and safety of Dan'e-fukang soft extract for dysmenorrhea by meta-analysis. Cochrane Controlled Trials Register, PubMed, EMBASE, CBM, VIP, Wanfang Data, and CNKI databases were searched. Results of randomized controlled trials were also harvested from pharmaceutical companies by manual search. Meta-analysis was carried out according to the method provided by the Cochrane Collaboration with RevMan 5.0 software. Twelve Chinese papers were selected, and 1213 patients were included. A significant difference in recovery rate was found between the Dan'e-fukang soft extract group and the other-drugs group (RR=1.33, 95%CI: 1.02-1.75, P<0.05). No statistical difference was noticed in total effective rate between the two groups (RR=1.04, 95%CI: 1.00-1.08, P>0.05). A statistical difference in improvement of dysmenorrhea symptoms was found before and after treatment in both the Dan'e-fukang soft extract group and the other-drugs group (MD=5.79, 95%CI: 5.01-6.56, P<0.05), but no statistical difference was found between the two groups after treatment (MD=-0.94, 95%CI: -2.11-0.23, P>0.05). Oral administration of Dan'e-fukang soft extract caused only mild gastrointestinal discomfort, whereas the other drugs had more adverse effects, including serious gastrointestinal reactions, severe liver dysfunction, vaginal bleeding, and female masculinization. The existing evidence shows that Dan'e-fukang soft extract has the same efficacy as other drugs in the treatment of dysmenorrhea. Because the quality of the included studies was limited, the evidence for the efficacy and safety of Dan'e-fukang soft extract is not strong, and high-quality randomized trials with large samples are needed.
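
    The RR and 95% CI figures above follow the standard log-scale risk-ratio calculation used in meta-analysis software such as RevMan. A minimal sketch with invented 2x2 counts (not the review's data):

```python
import math


def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio with a 95% CI computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi


# hypothetical counts: 30/100 recoveries in one arm vs 20/100 in the other
rr, lo, hi = risk_ratio_ci(30, 100, 20, 100)
print(f"RR={rr:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
# a CI that includes 1.0 (as here) indicates no significant difference,
# mirroring how the review interprets its pooled RRs
```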

  8. Fully Soft 3D-Printed Electroactive Fluidic Valve for Soft Hydraulic Robots.

    Science.gov (United States)

    Zatopa, Alex; Walker, Steph; Menguc, Yigit

    2018-06-01

    Soft robots are designed to utilize their compliance and contortionistic abilities to both interact safely with their environment and move through it in ways a rigid robot cannot. To more completely achieve this, the robot should be made of as many soft components as possible. Here we present a completely soft hydraulic control valve consisting of a 3D-printed photopolymer body with electrorheological (ER) fluid as a working fluid and gallium-indium-tin liquid metal alloy as electrodes. This soft 3D-printed ER valve weighs less than 10 g and allows for onboard actuation control, furthering the goal of an entirely soft controllable robot. The soft ER valve pressure-holding capabilities were tested under unstrained conditions, cyclic valve activation, and the strained conditions of bending, twisting, stretching, and indentation. It was found that the max holding pressure of the valve when 5 kV was applied across the electrodes was 264 kPa, and that the holding pressure deviated less than 15% from the unstrained max holding pressure under all strain conditions except for indentation, which had a 60% max pressure increase. In addition, a soft octopus-like robot was designed, 3D printed, and assembled, and a soft ER valve was used to stop the fluid flow, build pressure in the robot, and actuate six tentacle-like soft bending actuators.

  9. Soft matter physics

    CERN Document Server

    Doi, Masao

    2013-01-01

    Soft matter (polymers, colloids, surfactants and liquid crystals) is an important class of materials in modern technology. These materials also form the basis of many future technologies, for example in medical and environmental applications. Soft matter shows complex behaviour intermediate between that of fluids and solids, and the term was long a synonym for complex materials. Owing to the developments of the past two decades, soft condensed matter can now be discussed on the same sound physical basis as solid condensed matter. The purpose of this book is to provide an overview of soft matter for undergraduate and graduate students.

  10. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system. The impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving overall error rates of 22.6% and 8.2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving overall error rates of 16.9% and 4.4% during paper-based and electronic prescribing, respectively. Electronic prescribing has a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistani public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing on the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.
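
    The error rates quoted above follow directly from the reported counts; a quick arithmetic check:

```python
# prescribing-error counts and total prescribed medications from the study
phases = {
    "inpatient, paper":      (3008, 13328),
    "inpatient, electronic": (1147, 14064),
    "discharge, paper":      (418, 2480),
    "discharge, electronic": (123, 2790),
}

for phase, (errors, total) in phases.items():
    print(f"{phase}: {100 * errors / total:.1f}% error rate")
```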

  11. Social deviance activates the brain's error-monitoring system.

    Science.gov (United States)

    Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J

    2012-03-01

    Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.

  12. Botanicals to Control Soft Rot Bacteria of Potato

    Directory of Open Access Journals (Sweden)

    M. M. Rahman

    2012-01-01

    Full Text Available Extracts from eleven different plant species, namely jute (Corchorus capsularis L.), cheerota (Swertia chiraita Ham.), chatim (Alstonia scholaris L.), mander (Erythrina variegata), bael (Aegle marmelos L.), marigold (Tagetes erecta), onion (Allium cepa), garlic (Allium sativum L.), neem (Azadirachta indica), lime (Citrus aurantifolia), and turmeric (Curcuma longa L.), were tested for antibacterial activity against the potato soft rot bacterium, E. carotovora subsp. carotovora (Ecc P-138), under in vitro and storage conditions. Previously, Ecc P-138 had been identified as the most aggressive soft rot bacterium in Bangladeshi potatoes. Of the 11 plant extracts, only those from dried jute leaves and cheerota significantly inhibited growth of Ecc P-138 in vitro. Both plant extracts were then tested for control of soft rot disease of potato tubers under storage conditions. Over 22 weeks of storage, the treated potatoes were significantly better protected against soft rot infection than untreated samples in terms of infection rate and weight loss. The jute leaf extracts showed more pronounced inhibitory effects on Ecc P-138 growth in both the in vitro and storage experiments.

  13. Multilayer beam splitter used in a soft X-ray Mach-Zehnder interferometer at working wavelength of 13.9 nm

    International Nuclear Information System (INIS)

    Zhang Zhong; Wang Zhanshan; Wang Hongchang; Wang Fengli; Wu Wenjuan; Zhang Shumin; Qin Shuji; Chen Lingyan

    2006-01-01

    The soft X-ray Mach-Zehnder interferometer is an important tool for measuring the electron densities of laser-produced plasma near the critical surface. The design, fabrication and characterization of multilayer beam splitters at 13.9 nm for a soft X-ray Mach-Zehnder interferometer are presented in this paper. The beam splitter design maximizes the product of its reflectivity and transmission at 13.9 nm. The beam splitters, Mo/Si multilayers deposited on 100 nm thick Si3N4 membranes over a 10 mm x 10 mm area, are fabricated by magnetron sputtering. A method based on an extended He-Ne laser beam is developed to analyze the figure error of the beam splitters. Data measured with an optical profiler confirm that this visible-light method is effective for analyzing the figure of the beam splitters. The rms figure error of a beam splitter reaches 1.757 nm over the central 3.82 mm x 3.46 mm area and satisfies the requirements of the soft X-ray interference experiment. The product of reflectivity and transmission measured by synchrotron radiation is close to 4%. The Mach-Zehnder interferometer at 13.9 nm based on these multilayer beam splitters was used in a 13.9 nm soft X-ray laser interference experiment, in which clear interferograms of a C8H8 laser-produced plasma were obtained. (authors)

  14. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    Science.gov (United States)

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
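
    The core idea of the estimator, inferring an unobserved error rate while accounting for a known sequencing error rate, can be illustrated with a toy binomial MLE (a deliberately simplified sketch; the rates, the independence assumption, and the single-locus setup are invented and far simpler than the paper's STR-specific model):

```python
import math
import random


def loglik(t, k, n, s):
    """Binomial log-likelihood of k mismatching reads out of n, when the
    per-read mismatch probability combines a known sequencing error rate s
    with an unknown upstream (e.g., RT-error/RDD) rate t, assumed independent."""
    p = s + (1 - s) * t
    return k * math.log(p) + (n - k) * math.log(1 - p)


# simulate reads: true upstream rate 0.01, known sequencing error rate 0.002
random.seed(0)
s, t_true, n = 0.002, 0.01, 200_000
p_true = s + (1 - s) * t_true
k = sum(random.random() < p_true for _ in range(n))

# grid-search MLE over candidate upstream rates (equivalent to the
# closed form (k/n - s)/(1 - s), clipped to the grid)
grid = [i / 1e5 for i in range(1, 2000)]
t_hat = max(grid, key=lambda t: loglik(t, k, n, s))
print(f"estimated upstream error rate: {t_hat:.5f}")
```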

  15. Soft buckling actuators

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Dian; Whitesides, George M.

    2017-12-26

    A soft actuator is described, including: a rotation center having a center of mass; a plurality of bucklable, elastic structural components each comprising a wall defining an axis along its longest dimension, the wall connected to the rotation center in a way that the axis is offset from the center of mass in a predetermined direction; and a plurality of cells each disposed between two adjacent bucklable, elastic structural components and configured for connection with a fluid inflation or deflation source; wherein upon the deflation of the cell, the bucklable, elastic structural components are configured to buckle in the predetermined direction. A soft actuating device including a plurality of the soft actuators and methods of actuation using the soft actuator or soft actuating device disclosed herein are also described.

  16. A soft sensor for bioprocess control based on sequential filtering of metabolic heat signals.

    Science.gov (United States)

    Paulsson, Dan; Gustavsson, Robert; Mandenius, Carl-Fredrik

    2014-09-26

    Soft sensors are the combination of robust on-line sensor signals with mathematical models for deriving additional process information. Here, we apply this principle to a microbial recombinant protein production process in a bioreactor by exploiting bio-calorimetric methodology. Temperature sensor signals from the cooling system of the bioreactor were used for estimating the metabolic heat of the microbial culture and from that the specific growth rate and active biomass concentration were derived. By applying sequential digital signal filtering, the soft sensor was made more robust for industrial practice with cultures generating low metabolic heat in environments with high noise level. The estimated specific growth rate signal obtained from the three stage sequential filter allowed controlled feeding of substrate during the fed-batch phase of the production process. The biomass and growth rate estimates from the soft sensor were also compared with an alternative sensor probe and a capacitance on-line sensor, for the same variables. The comparison showed similar or better sensitivity and lower variability for the metabolic heat soft sensor suggesting that using permanent temperature sensors of a bioreactor is a realistic and inexpensive alternative for monitoring and control. However, both alternatives are easy to implement in a soft sensor, alone or in parallel.
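
    The abstract does not spell out the three filter stages, so the sketch below uses three cascaded exponential-smoothing stages on a synthetic metabolic heat signal (all constants invented for illustration) and recovers the specific growth rate from the log-slope, assuming heat production is proportional to active biomass:

```python
import math
import random


def ema(signal, alpha):
    """One stage of exponential smoothing (a simple low-pass filter)."""
    out, y = [], signal[0]
    for x in signal:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out


random.seed(1)
mu_true, dt = 0.20, 0.1            # specific growth rate (1/h), sample interval (h)
t = [i * dt for i in range(600)]
# noisy metabolic heat signal, assumed proportional to exponentially growing biomass
heat = [math.exp(mu_true * ti) * (1 + random.gauss(0, 0.05)) for ti in t]

# three sequential smoothing stages, then growth rate from the log-slope
smooth = ema(ema(ema(heat, 0.1), 0.1), 0.1)
log_q = [math.log(q) for q in smooth]
n0 = len(t) // 2                   # use the second half, past filter transients
mu_est = (log_q[-1] - log_q[n0]) / (t[-1] - t[n0])
print(f"estimated specific growth rate: {mu_est:.3f} 1/h")
```

    Because an exponential passes through a linear filter with only a scale change, the log-slope (and hence the estimated growth rate) survives the smoothing while the noise is strongly attenuated.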

  17. A Soft Sensor for Bioprocess Control Based on Sequential Filtering of Metabolic Heat Signals

    Directory of Open Access Journals (Sweden)

    Dan Paulsson

    2014-09-01

    Full Text Available Soft sensors are the combination of robust on-line sensor signals with mathematical models for deriving additional process information. Here, we apply this principle to a microbial recombinant protein production process in a bioreactor by exploiting bio-calorimetric methodology. Temperature sensor signals from the cooling system of the bioreactor were used for estimating the metabolic heat of the microbial culture and from that the specific growth rate and active biomass concentration were derived. By applying sequential digital signal filtering, the soft sensor was made more robust for industrial practice with cultures generating low metabolic heat in environments with high noise level. The estimated specific growth rate signal obtained from the three stage sequential filter allowed controlled feeding of substrate during the fed-batch phase of the production process. The biomass and growth rate estimates from the soft sensor were also compared with an alternative sensor probe and a capacitance on-line sensor, for the same variables. The comparison showed similar or better sensitivity and lower variability for the metabolic heat soft sensor suggesting that using permanent temperature sensors of a bioreactor is a realistic and inexpensive alternative for monitoring and control. However, both alternatives are easy to implement in a soft sensor, alone or in parallel.

  18. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    Science.gov (United States)

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9, range 4-17, collecting data per day) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at a patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  19. Organizational safety culture and medical error reporting by Israeli nurses.

    Science.gov (United States)

    Kagan, Ilya; Barnoy, Sivia

    2013-09-01

    To investigate the association between patient safety culture (PSC) and the incidence and reporting rate of medical errors by Israeli nurses. Self-administered structured questionnaires were distributed to a convenience sample of 247 registered nurses enrolled in training programs at Tel Aviv University (response rate = 91%). The questionnaire's three sections examined the incidence of medication mistakes in clinical practice, the reporting rate for these errors, and the participants' views and perceptions of the safety culture in their workplace at three levels (organizational, departmental, and individual performance). Pearson correlation coefficients, t tests, and multiple regression analysis were used to analyze the data. Most nurses encountered medical errors on a daily to weekly basis. Six percent of the sample never reported their own errors, while half reported their own errors "rarely or sometimes." The level of PSC was positively and significantly correlated with the error reporting rate. PSC, place of birth, error incidence, and not having an academic nursing degree were significant predictors of error reporting, together explaining 28% of variance. This study confirms the influence of an organizational safety climate on readiness to report errors. Senior healthcare executives and managers can make a major impact on safety culture development by creating and promoting a vision and strategy for quality and safety and fostering their employees' motivation to implement improvement programs at the departmental and individual level. A positive, carefully designed organizational safety culture can encourage error reporting by staff and so improve patient safety. © 2013 Sigma Theta Tau International.

  20. Aspergillus: a rare primary organism in soft-tissue infections.

    Science.gov (United States)

    Johnson, M A; Lyle, G; Hanly, M; Yeh, K A

    1998-02-01

    Nonclostridial necrotizing soft-tissue infections are usually polymicrobial, with greater than 90 per cent involving beta-hemolytic streptococci or coagulase-positive staphylococci. The remaining 10 per cent are usually due to Gram-negative enteric pathogens. We describe the case of a 46-year-old woman with bilateral lower extremity fungal soft tissue infections. She underwent multiple surgical debridements of extensive gangrenous necrosis of the skin and subcutaneous fat associated with severe acute arteritis. Histopathological examination revealed Aspergillus niger as the sole initial pathogen. Despite aggressive surgical debridement, allografts, and intravenous amphotericin B, her condition clinically deteriorated and she ultimately died of overwhelming infection. Treatment for soft-tissue infections includes surgical debridement and intravenous antibiotics. More specifically, Aspergillus can be treated with intravenous amphotericin B, 5-fluorocytosine, and rifampin. Despite these treatment modalities, necrotizing fasciitis is associated with a 60 per cent mortality rate. Primary fungal pathogens should be included in the differential diagnosis of soft-tissue infections.

  1. System care improves trauma outcome: patient care errors dominate reduced preventable death rate.

    Science.gov (United States)

    Thoburn, E; Norris, P; Flores, R; Goode, S; Rodriguez, E; Adams, V; Campbell, S; Albrink, M; Rosemurgy, A

    1993-01-01

    A review of 452 trauma deaths in Hillsborough County, Florida, in 1984 documented that 23% of non-CNS trauma deaths were preventable and occurred because of inadequate resuscitation or delay in proper surgical care. In late 1988 Hillsborough County organized a County Trauma Agency (HCTA) to coordinate trauma care among prehospital providers and state-designated trauma centers. The purpose of this study was to review county trauma deaths after the inception of the HCTA to determine the frequency of preventable deaths. 504 trauma deaths occurring between October 1989 and April 1991 were reviewed. Through committee review, 10 deaths were deemed preventable; 2 occurred outside the trauma system. Of the 10 deaths, 5 preventable deaths occurred late in severely injured patients. The preventable death rate has decreased to 7.0% with system care. The causes of preventable deaths have changed from delayed or inadequate intervention to postoperative care errors.

  2. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Extreme Mechanics in Soft Pneumatic Robots and Soft Microfluidic Electronics and Sensors

    Science.gov (United States)

    Majidi, Carmel

    2012-02-01

    In the near future, machines and robots will be completely soft, stretchable, impact resistant, and capable of adapting their shape and functionality to changes in mission and environment. Similar to biological tissue and soft-body organisms, these next-generation technologies will contain no rigid parts and instead be composed entirely of soft elastomers, gels, fluids, and other non-rigid matter. Using a combination of rapid prototyping tools, microfabrication methods, and emerging techniques in so-called "soft lithography," scientists and engineers are currently introducing exciting new families of soft pneumatic robots, soft microfluidic sensors, and hyperelastic electronics that can be stretched to as much as 10x their natural length. Progress has been guided by an interdisciplinary collection of insights from chemistry, life sciences, robotics, microelectronics, and solid mechanics. In virtually every technology and application domain, mechanics and elasticity have a central role in governing functionality and design. Moreover, in contrast to conventional machines and electronics, soft pneumatic systems and microfluidics typically operate in the finite deformation regime, with materials stretching to several times their natural length. In this talk, I will review emerging paradigms in soft pneumatic robotics and soft microfluidic electronics and highlight modeling and design challenges that arise from the extreme mechanics of inflation, locomotion, sensor operation, and human interaction. I will also discuss perceived challenges and opportunities in a broad range of potential applications, from medicine to wearable computing.

  4. Indentation and Observation of Anisotropic Soft Tissues Using an Indenter Device

    Directory of Open Access Journals (Sweden)

    Parinaz ASHRAFI

    2015-01-01

    Full Text Available Soft tissues of the human body have complex structures and exhibit mechanical behaviors different from those of traditional engineering materials. There is a pressing need to understand the tissue behavior of the human body. Experimental data are needed to improve soft tissue modeling and to advance implants and prostheses, as well as the diagnosis of diseases. Mechanical behavior and responses change when tissue loses its liveliness and viability. One technique for soft tissue testing is indentation, which can be applied to live tissue in its physiological environment. Indentation affords several advantages over other types of tests, such as uniaxial tension, biaxial tension, simple shear, and suction; thus it is of interest to develop new indentation techniques from which more valid data can be extracted. In this study a new indenter device was designed and constructed. Displacement- and force-rate controlled cyclic loading and relaxation experiments were conducted on the human arm. The novel in-vivo force-rate controlled cyclic loading test method is compared with traditional displacement-controlled cyclic loading tests. Anisotropic behavior of tissue cannot be determined with axisymmetric tips; therefore, ellipsoid tips were used to examine the anisotropy and in-plane material directions of bulk soft tissues.

  5. Ternary NiFeX as soft biasing film in a magnetoresistive sensor

    Science.gov (United States)

    Chen, Mao-Min; Gharsallah, Neila; Gorman, Grace L.; Latimer, Jacquie

    1991-04-01

    The properties of NiFeX ternary films (X being Al, Au, Nb, Pd, Pt, Si, and Zr) have been studied for soft-film biasing of the magnetoresistive (MR) trilayer sensor. In general, the addition of the element X to the NiFe alloy film decreases the saturation magnetization Bs and the magnetoresistance coefficient of the film, while increasing the film's electrical resistivity ρ. One of the desirable properties of a soft biasing film is high sheet resistance, to minimize current flow through it. A figure of merit Bsρ, which takes into account both the rate of decrease in Bs and the rate of increase in ρ when adding the X element, was derived to compare the effectiveness of the various X elements in reducing current shunting through the soft-film layer. By this criterion, NiFeNb and NiFeZr emerge as good soft-film materials, having maximum sheet resistance relative to the MR layer. Other critical properties such as the magnetoresistance coefficient, magnetostriction, coercivity, and anisotropy field were also examined and are discussed in this paper.
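
    The choice of Bsρ as a figure of merit can be motivated as follows: the biasing film must supply a fixed magnetic moment Bs·t, so its thickness scales as t ∝ 1/Bs, and its sheet conductance (the current-shunting path) as t/ρ ∝ 1/(Bsρ). A sketch with hypothetical film properties (the numbers below are invented for illustration, not the paper's measurements):

```python
# hypothetical (illustrative, not measured) soft-film properties:
# Bs in tesla, rho in microohm-cm
films = {
    "NiFe":   {"Bs": 1.00, "rho": 20.0},
    "NiFeNb": {"Bs": 0.70, "rho": 60.0},
    "NiFeZr": {"Bs": 0.65, "rho": 55.0},
    "NiFeAl": {"Bs": 0.80, "rho": 30.0},
}

required_moment = 1.0  # arbitrary units of Bs * t, fixed by the biasing design
for name, p in films.items():
    t = required_moment / p["Bs"]      # thickness needed to reach the moment
    shunting = t / p["rho"]            # relative sheet conductance (shunt path)
    fom = p["Bs"] * p["rho"]           # figure of merit: larger = less shunting
    print(f"{name}: Bs*rho = {fom:5.1f}, relative shunting = {shunting:.4f}")
```

    With these invented numbers the Nb and Zr alloys show the largest Bsρ and hence the smallest shunting, matching the qualitative conclusion of the record above.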

  6. EPIDEMIOLOGY AND SURVIVAL OF PATIENTS WITH MALIGNANT TUMORS OF CONNECTIVE AND SOFT TISSUE

    Directory of Open Access Journals (Sweden)

    V. M. Merabishvili

    2015-01-01

    Full Text Available Introduction. Malignant tumors of connective and soft tissue are relatively rare, although in Russia as a whole more than 1,500 new cases are registered each year. In five administrative territories of Russia, fewer than 5 new cases of malignant tumors of connective and soft tissue are recorded per year (Yamal-Nenets A.R. – 4; Tuva Republic – 0; Magadan Region – 3; Chukotka A.R. – 0; Jewish A.R. – 4). Data on these patients' survival are published even more rarely. Purpose of study. To estimate the dynamics of the incidence of malignant tumors of connective and soft tissue on the basis of public reporting, and to calculate the accuracy index and observed and relative survival rates by histological form, including sarcomas. Material and methods. For two observation periods, 1054 patients (1995–2001) and 919 patients (2002–2008), respectively, were selected for detailed study. Survival was estimated using software developed together with Ltd. «Novel» (Director – T.L. Tsvetkova, Ph.D.). Results of study. The incidence rates of malignant tumors of connective and soft tissue (C47, C49) reported by cancer registries of different countries typically range from 1.5 to 2.5 per 100,000 in men and 1.5–2.0 per 100,000 in women. The incidence dynamics of the populations of Russia, Moscow and St. Petersburg indicate that standardized incidence rates are in the range of 2.0 per 100,000 in men and 1.5 per 100,000 in women. The mortality rate in 2013, for men and women respectively, was 1.7 and 1.13 per 100,000 in Russia overall, 1.42 and 1.24 per 100,000 in Moscow, and 1.88 and 1.26 per 100,000 in St. Petersburg. The accuracy index for both sexes is 0.88 in Russia, 1.2 in Moscow, and 1.4 in St. Petersburg. This index should be used for disease sites with high lethality. According to official data, one-year lethality of patients with tumors of connective and soft

  7. Selection of anchor values for human error probability estimation

    International Nuclear Information System (INIS)

    Buffardi, L.C.; Fleishman, E.A.; Allen, J.A.

    1989-01-01

    There is a need for more dependable information to assist in the prediction of human errors in nuclear power environments. The major objective of the current project is to establish guidelines for using error probabilities from other task settings to estimate errors in the nuclear environment. This involves: (1) identifying critical nuclear tasks, (2) discovering similar tasks in non-nuclear environments, (3) finding error data for non-nuclear tasks, and (4) establishing error-rate values for the nuclear tasks based on the non-nuclear data. A key feature is the application of a classification system to nuclear and non-nuclear tasks to evaluate their similarities and differences in order to provide a basis for generalizing human error estimates across tasks. During the first eight months of the project, several classification systems have been applied to a sample of nuclear tasks. They are discussed in terms of their potential for establishing task equivalence and transferability of human error rates across situations

  8. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle.

    Science.gov (United States)

    Starmer, Amy J; Sectish, Theodore C; Simon, Dennis W; Keohane, Carol; McSweeney, Maireade E; Chung, Erica Y; Yoon, Catherine S; Lipsitz, Stuart R; Wassner, Ari J; Harper, Marvin B; Landrigan, Christopher P

    2013-12-04

    Handoff miscommunications are a leading cause of medical errors. Studies comprehensively assessing handoff improvement programs are lacking. To determine whether introduction of a multifaceted handoff program was associated with reduced rates of medical errors and preventable adverse events, fewer omissions of key data in written handoffs, improved verbal handoffs, and changes in resident-physician workflow. Prospective intervention study of 1255 patient admissions (642 before and 613 after the intervention) involving 84 resident physicians (42 before and 42 after the intervention) from July-September 2009 and November 2009-January 2010 on 2 inpatient units at Boston Children's Hospital. Resident handoff bundle, consisting of standardized communication and handoff training, a verbal mnemonic, and a new team handoff structure. On one unit, a computerized handoff tool linked to the electronic medical record was introduced. The primary outcomes were the rates of medical errors and preventable adverse events measured by daily systematic surveillance. The secondary outcomes were omissions in the printed handoff document and resident time-motion activity. Medical errors decreased from 33.8 per 100 admissions (95% CI, 27.3-40.3) to 18.3 per 100 admissions (95% CI, 14.7-21.9; P < .001), and preventable adverse events decreased from 3.3 per 100 admissions (95% CI, 1.7-4.8) to 1.5 per 100 admissions (95% CI, 0.51-2.4; P = .04) following the intervention. There were fewer omissions of key handoff elements on printed handoff documents, especially on the unit that received the computerized handoff tool (significant reductions of omissions in 11 of 14 categories with computerized tool; significant reductions in 2 of 14 categories without computerized tool). Physicians spent a greater percentage of time in a 24-hour period at the patient bedside after the intervention (10.6%; 95% CI, 9.2%-12.2% vs 8.3%; 95% CI, 7.1%-9.8%; P = .03). The average duration of verbal

  9. A-Soft Separation Axioms in Soft Topological Space

    Directory of Open Access Journals (Sweden)

    Luay Abd –Al-Hani Al-Sweedi

    2018-01-01

    Full Text Available Set theory is one of the basic tools of science across many fields and specialties. The enormous development in all fields of life has produced difficult problems that require parallel tools, which led to the development of soft set theory, an important tool for solving or overcoming many of the most difficult problems in the sciences and in specific areas of life such as economics, medicine and geometry. Soft set theory has also entered general topology in a powerful and active way, and in recent years a new area of study has appeared: soft topological spaces. The main idea of this research is to define separation axioms in soft topological spaces, particularly at a certain point, and to study their most important properties and results.

  10. The Diagnostic and Prognostic Value of Hematological and Chemical Abnormalities in Soft Tissue Sarcoma: A Comparative Study in Patients with Benign and Malignant Soft Tissue Tumors.

    Science.gov (United States)

    Ariizumi, Takashi; Kawashima, Hiroyuki; Ogose, Akira; Sasaki, Taro; Hotta, Tetsuo; Hatano, Hiroshi; Morita, Tetsuro; Endo, Naoto

    2018-01-01

    The value of routine blood tests in malignant soft tissue tumors remains uncertain. To determine if these tests can be used for screening, the routine pretreatment blood test findings were retrospectively investigated in 359 patients with benign and malignant soft tissue tumors. Additionally, the prognostic potential of pretreatment blood abnormalities was evaluated in patients with soft tissue sarcomas. We compared clinical factors and blood tests findings between patients with benign and malignant soft tissue tumors using univariate and multivariate analysis. Subsequently, patients with malignant tumors were divided into two groups based on blood test reference values, and the prognostic significance of each parameter was evaluated. In the univariate analysis, age, tumor size, and tumor depth were significant clinical diagnostic factors. Significant increases in the granulocyte count, C-reactive protein (CRP) level, erythrocyte sedimentation rate (ESR), and γ-glutamyl transpeptidase (γ-GTP) levels were found in patients with malignant soft tissue tumors. Multiple logistic regression showed that tumor size and ESR were independent factors that predicted malignant soft tissue tumors. The Kaplan-Meier survival analysis revealed that granulocyte counts, γ-GTP levels, and CRP levels correlated significantly with overall survival. Thus, pretreatment routine blood tests are useful diagnostic and prognostic markers for diagnosing soft tissue sarcoma. © 2018 by the Association of Clinical Scientists, Inc.

  11. Bangladesh looks for a soft loan

    International Nuclear Information System (INIS)

    Hossain, A.

    1986-01-01

    The problems faced by developing countries in embarking on a nuclear power programme are considered. It is argued that an international funding agency should be set up by the IAEA and the World Bank to provide developing countries with help in the form of a loan at soft interest rates and longer repayment periods. (U.K.)

  12. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 years age group (67.6%), in less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  13. Quantification of human errors in level-1 PSA studies in NUPEC/JINS

    International Nuclear Information System (INIS)

    Hirano, M.; Hirose, M.; Sugawara, M.; Hashiba, T.

    1991-01-01

    THERP (Technique for Human Error Rate Prediction) method is mainly adopted to evaluate the pre-accident and post-accident human error rates. Performance shaping factors are derived by taking Japanese operational practice into account. Several examples of human error rates with calculational procedures are presented. The important human interventions of typical Japanese NPPs are also presented. (orig./HP)
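The THERP-style adjustment described above scales a nominal human error probability by performance shaping factor (PSF) multipliers. A hedged sketch of that arithmetic; the nominal HEP and the multipliers below are illustrative assumptions, not values from NUREG/CR-1278:

```python
# Sketch of a THERP-style calculation: nominal HEP adjusted by PSF
# multipliers. All numeric values here are illustrative, not handbook data.

def adjusted_hep(nominal_hep: float, psf_multipliers: list[float]) -> float:
    hep = nominal_hep
    for m in psf_multipliers:
        hep *= m
    return min(hep, 1.0)  # a probability cannot exceed 1

# e.g. high stress (x5) partly offset by good procedures (x0.5)
print(adjusted_hep(0.003, [5, 0.5]))  # 0.0075
```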

  14. A soft X-ray source based on a low divergence, high repetition rate ultraviolet laser

    Science.gov (United States)

    Crawford, E. A.; Hoffman, A. L.; Milroy, R. D.; Quimby, D. C.; Albrecht, G. F.

    The CORK code is utilized to evaluate the applicability of low divergence ultraviolet lasers for efficient production of soft X-rays. The use of the axial hydrodynamic code with a one-zone radial expansion model to estimate radial motion and laser energy is examined. The calculation of ionization levels of the plasma and radiation rates by employing the atomic physics and radiation model included in the CORK code is described. Computations using the hydrodynamic code to determine the effect of laser intensity, spot size, and wavelength on plasma electron temperature are provided. The X-ray conversion efficiencies of the lasers are analyzed. It is observed that for 1 GW laser power the X-ray conversion efficiency is a function of spot size and only weakly dependent on pulse length for time scales exceeding 100 psec, and that better conversion efficiencies are obtained at shorter wavelengths. It is concluded that these small lasers, focused to 30 micron spot sizes and 10 to the 14th W/sq cm intensities, are useful sources of 1-2 keV radiation.

  15. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)


    Systematic error growth rate peak is observed at wavenumber 2 up to 4-day forecast then .... the influence of summer systematic error and random ... total exchange. When the error energy budgets are examined in the spectral domain, one may ask questions on the error growth at a certain wavenumber from its interaction with ...

  16. Teamwork and clinical error reporting among nurses in Korean hospitals.

    Science.gov (United States)

    Hwang, Jee-In; Ahn, Jeonghoon

    2015-03-01

    To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence intervals [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.
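The study above reports its teamwork-reporting association as an odds ratio with a 95% confidence interval (OR = 1.82, 95% CI [1.05, 3.14]). A minimal sketch of that style of estimate from a 2x2 table using the Wald method; the counts below are hypothetical, not the survey's data:

```python
import math

# Odds ratio with a 95% Wald CI from a 2x2 table. The table entries used
# in the example call are hypothetical.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed & reported, b = exposed & not reported,
    c = unexposed & reported, d = unexposed & not reported."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(120, 80, 90, 110)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The published OR came from a logistic regression (which adjusts for covariates); the 2x2 version above shows only the unadjusted arithmetic behind the same quantity.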

  17. The effectiveness of risk management program on pediatric nurses' medication error.

    Science.gov (United States)

    Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat

    2013-09-01

    Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses of the control hospital followed the hospital's routine schedule. A pre- and post-test was performed to measure the frequency of the medication error events. SPSS software, t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower, and their error-reporting rate was higher. In the medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.

  18. Verifying Stability of Dynamic Soft-Computing Systems

    Science.gov (United States)

    Wen, Wu; Napolitano, Marcello; Callahan, John

    1997-01-01

    Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their appropriateness in building intelligent systems that are flexible and robust. Although recent research has shown that a certain class of neuro-fuzzy controllers can be proven bounded and stable, such proofs are implementation dependent and difficult to apply to the design and validation process. Many practitioners adopt the trial-and-error approach for system validation or resort to exhaustive testing using prototypes. In this paper, we describe our on-going research towards establishing the necessary theoretical foundations as well as building practical tools for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classic non-linear system control theory and recent results of its applications to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root locus plots have helped conventional control design and validation.

  19. Prevalence of Soft Tissue Calcifications in CBCT Images of Mandibular Region.

    Science.gov (United States)

    Khojastepour, Leila; Haghnegahdar, Abdolaziz; Sayar, Hamed

    2017-06-01

    Most of the soft tissue calcifications within the head and neck region might not be accompanied by clinical symptoms but may indicate some pathological conditions. The aim of this research was to determine the prevalence of soft tissue calcifications in cone beam computed tomography (CBCT) images of the mandibular region. In this cross-sectional study the CBCT images of 602 patients, including 294 men and 308 women with a mean age of 41.38±15.18 years, were evaluated regarding the presence, anatomical location, type (single or multiple) and size of soft tissue calcifications in the mandibular region. All CBCT images were acquired by a NewTom VGi scanner. Odds ratio and chi-square tests were used for data analysis and p < 0.05 was considered to be statistically significant. 156 out of 602 patients had at least one soft tissue calcification in their mandibular region (25.9% of the studied population; mean age 51.7±18.03 years). Men showed a significantly higher rate of soft tissue calcification than women (30.3% vs. 21.8%). Soft tissue calcification was predominantly seen in the posterior region of the mandible (88%) and most calcifications were single (60.7%). The prevalence of soft tissue calcification increased with age. Most of the detected soft tissue calcifications were smaller than 3 mm (90%). Soft tissue calcifications in the mandibular area were a relatively common finding, especially in the posterior region, and were more likely to occur in men and in the older age group.

  20. Novel experimentally observed phenomena in soft matter

    Indian Academy of Sciences (India)

    The resulting flow is non-Newtonian and is characterized by features such as shear rate-dependent viscosities and nonzero normal stresses. This article begins with an introduction to some unusual flow properties displayed by soft matter. Experiments that report a spectrum of novel phenomena exhibited by these materials, ...

  1. Energy efficiency of error correcting mechanisms for wireless communications

    NARCIS (Netherlands)

    Havinga, Paul J.M.

    We consider the energy efficiency of error control mechanisms for wireless communication. Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. Although well-designed retransmission schemes can be optimal
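The retransmission-versus-FEC trade-off the abstract refers to follows from a simple expectation: with packet error rate p and stop-and-wait retransmission, the expected number of transmissions per delivered packet is 1/(1-p), so energy per delivered packet blows up at high error rates. A sketch with illustrative numbers:

```python
# Expected cost of simple ARQ under packet error rate p: the number of
# attempts is geometric with mean 1/(1-p). Energy figures are illustrative.

def expected_transmissions(p: float) -> float:
    assert 0 <= p < 1
    return 1.0 / (1.0 - p)

def energy_per_packet(tx_energy_mj: float, p: float) -> float:
    """Mean energy (mJ) spent per successfully delivered packet."""
    return tx_energy_mj * expected_transmissions(p)

print(expected_transmissions(0.5))   # 2.0: on average every packet is sent twice
print(energy_per_packet(1.0, 0.9))   # ~10 mJ per delivered packet at p = 0.9
```

This is why adding FEC overhead can still save energy on bad links: lowering the effective p cuts the 1/(1-p) factor faster than the coding overhead grows.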

  2. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
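The BER-versus-SNR comparison methodology described in the book can be sketched with a small Monte Carlo simulation. To keep the example self-contained, uncoded BPSK over AWGN is used here instead of the book's coded 16/64-QAM; the simulated error rate is checked against the closed-form BER Q(sqrt(2 Eb/N0)):

```python
import math
import random

# Monte Carlo BER for uncoded BPSK over AWGN, compared with theory.
# This illustrates the simulation methodology only; the book's actual
# experiments use AG/RS codes with QAM modulation.

def q_function(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2))

def simulate_ber(ebn0_db: float, n_bits: int = 200_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))     # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0, sigma)
        if (received > 0) != (bit == 1):  # hard-decision detection
            errors += 1
    return errors / n_bits

ebn0_db = 4.0
print(simulate_ber(ebn0_db), q_function(math.sqrt(2 * 10 ** (ebn0_db / 10))))
```

Swapping the channel model or adding an encoder/decoder around the bit loop is how such simulations compare code families at various rates.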

  3. Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors.

    Science.gov (United States)

    Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter

    2016-08-24

    Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design.
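The calibration step above maps magnetic-field readings to force. The paper's moving least squares fit is local and weighted; the sketch below substitutes a plain ordinary-least-squares line on synthetic data just to show the shape of the signal-to-force calibration problem (all numbers are made up):

```python
# Ordinary least-squares fit of force = a + b * signal on synthetic data.
# A stand-in for the paper's moving least squares decoupling, shown only
# to illustrate the calibration mapping.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

signal = [0.0, 1.0, 2.0, 3.0, 4.0]   # arbitrary magnetic sensor units
force = [0.1, 2.1, 3.9, 6.1, 8.0]    # mN, synthetic "ground truth"
a, b = fit_line(signal, force)
print(round(a, 2), round(b, 2))      # intercept and sensitivity (mN per unit)
```

The moving variant refits these coefficients locally around each query point with distance weights, which is what lets it absorb non-linearity and cross-talk.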

  4. The Nature of Error in Adolescent Student Writing

    Science.gov (United States)

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  5. Errors in dual-energy X-ray scanning of the hip because of nonuniform fat distribution.

    Science.gov (United States)

    Tothill, Peter; Weir, Nicholas; Loveland, John

    2014-01-01

    The variable proportion of fat in overlying soft tissue is a potential source of error in dual-energy X-ray absorptiometry (DXA) measurements of bone mineral. The effect on spine scanning has previously been assessed from cadaver studies and from computed tomography (CT) scans of soft tissue distribution. We have now applied the latter technique to DXA hip scanning. CT scans performed for clinical purposes were used to derive mean adipose tissue thicknesses over bone and background areas for the total hip and femoral neck. The former was always lower. More importantly, the fat thickness differences varied among subjects. Errors because of bone marrow fat were deduced from CT measurements of marrow thickness and assumed fat proportions of marrow. The effect of these differences on measured bone mineral density was deduced from phantom measurements of the bone equivalence of fat. Uncertainties of around 0.06 g/cm² are similar to those previously reported for spine scanning and the results from cadaver measurements. They should be considered in assessing the diagnostic accuracy of DXA scanning. Copyright © 2014 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  6. Errors in radiographic recognition in the emergency room

    International Nuclear Information System (INIS)

    Britton, C.A.; Cooperstein, L.A.

    1986-01-01

    For 6 months we monitored the frequency and type of errors in radiographic recognition made by radiology residents on call in our emergency room. A relatively low error rate was observed, probably because the authors evaluated cognitive errors only, rather than include those of interpretation. The most common missed finding was a small fracture, particularly on the hands or feet. First-year residents were most likely to make an error, but, interestingly, our survey revealed a small subset of upper-level residents who made a disproportionate number of errors

  7. Mappings on Neutrosophic Soft Classes

    Directory of Open Access Journals (Sweden)

    Shawkat Alkhazaleh

    2014-03-01

    Full Text Available In 1995 Smarandache introduced the concept of the neutrosophic set, which is a mathematical tool for handling problems involving imprecise, indeterminate and inconsistent data. In 2013 Maji introduced the concept of neutrosophic soft set theory as a general mathematical tool for dealing with uncertainty. In this paper we define the notion of a mapping on neutrosophic soft classes, where a neutrosophic soft class is a collection of neutrosophic soft sets. We also define and study the properties of neutrosophic soft images and neutrosophic soft inverse images of neutrosophic soft sets.

  8. Haptic communication between humans is tuned by the hard or soft mechanics of interaction

    Science.gov (United States)

    Usai, Francesco; Ganesh, Gowrishankar; Sanguineti, Vittorio; Burdet, Etienne

    2018-01-01

    To move a hard table together, humans may coordinate by following the dominant partner’s motion [1–4], but this strategy is unsuitable for a soft mattress where the perceived forces are small. How do partners readily coordinate in such differing interaction dynamics? To address this, we investigated how pairs tracked a target using flexion-extension of their wrists, which were coupled by a hard, medium or soft virtual elastic band. Tracking performance monotonically increased with a stiffer band for the worse partner, who had higher tracking error, at the cost of the skilled partner’s muscular effort. This suggests that the worse partner followed the skilled one’s lead, but simulations show that the results are better explained by a model where partners share movement goals through the forces, whilst the coupling dynamics determine the capacity of communicable information. This model elucidates the versatile mechanism by which humans can coordinate during both hard and soft physical interactions to ensure maximum performance with minimal effort. PMID:29565966

  9. SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects

    Science.gov (United States)

    Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M

    1998-01-01

    SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of the existing soft-computing software by supporting comprehensive multidisciplinary functionalities from management tools to engineering systems. Furthermore, the built-in features help the user process/analyze information more efficiently by a friendly yet powerful interface, and will allow the user to specify user-specific processing modules, hence adding to the standard configuration of the software environment.

  10. The alpha effect

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    Much of the recent interest in RAM system reliability stems from concern over the alpha-particle soft error rates reported for the initial 64k RAMs. With increasing memory density likely in the next few years, the problem of soft errors is rearing its head again. A few years ago ITT carried out experiments on 16k RAMs and found no significant problems. However, recent tests have shown a rise in the number of soft errors with 64k RAMs, and the launch of 256k and 512k memories is likely to make the problem acute

  11. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques are recently becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were carried out. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
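The evaluation protocol described above (log-transform the target, 80/20 train/test split, error measures such as average absolute relative error) can be sketched end to end. The permeability values and the trivial mean predictor below are synthetic stand-ins, not the study's data or models:

```python
import math

# Sketch of the protocol: log10-transform permeability, split 80/20, predict
# on the held-out set, and score with average absolute relative error (AARE).
# Data are synthetic; the "model" is a trivial mean predictor.

def aare(actual, predicted):
    """Average absolute relative error."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

perm_md = [0.5, 1.2, 3.4, 10.0, 25.0, 60.0, 140.0, 300.0, 700.0, 1500.0]  # mD
log_perm = [math.log10(k) for k in perm_md]
train, test_actual = log_perm[:8], perm_md[8:]   # 80/20 split

mean_log = sum(train) / len(train)               # stand-in for a trained model
predictions = [10 ** mean_log] * len(test_actual)
print(round(aare(test_actual, predictions), 2))
```

A real study would replace the mean predictor with a fitted network; the log transform and the error measures are the parts carried over from the abstract.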

  12. Refractive errors in children and adolescents in Bucaramanga (Colombia).

    Science.gov (United States)

    Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly

    2017-01-01

    The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.
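The classification used in the record above (myopic, hyperopic, emmetropic, with high myopia at ≤ -6.00 D) can be sketched from the spherical equivalent. The ±0.50 D cutoffs below are common conventions assumed for illustration; only the -6.00 D threshold appears in the abstract:

```python
# Refractive-error classification from spherical equivalent (SE) in diopters.
# The ±0.50 D cutoffs are assumed conventions; -6.00 D is from the abstract.

def classify(se_diopters: float) -> str:
    if se_diopters <= -6.00:
        return "high myopia"
    if se_diopters <= -0.50:
        return "myopia"
    if se_diopters >= 0.50:
        return "hyperopia"
    return "emmetropia"

print(classify(-7.25), classify(-1.00), classify(0.75), classify(0.0))
# high myopia myopia hyperopia emmetropia
```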

  13. Refractive errors in children and adolescents in Bucaramanga (Colombia)

    Directory of Open Access Journals (Sweden)

    Virgilio Galvis

    Full Text Available ABSTRACT Purpose: The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). Methods: This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. Results: One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. Conclusions: The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  14. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date

  15. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1981-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date

  16. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  17. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    Science.gov (United States)

    1987-09-30

    ... of the AC current, including the time dependence at a growing DME, at a given fixed potential either in the presence or the absence of an ... the relative error in kob(app) is relatively small for kob(true) ≤ 0.5 cm s⁻¹, and increases rapidly for larger rate constants as kob reaches the

  18. Medical errors in hospitalized pediatric trauma patients with chronic health conditions

    Directory of Open Access Journals (Sweden)

    Xiaotong Liu

    2014-01-01

    Full Text Available Objective: This study compares medical errors in pediatric trauma patients with and without chronic conditions. Methods: The 2009 Kids’ Inpatient Database, which included 123,303 trauma discharges, was analyzed. Medical errors were identified by International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes. The medical error rates per 100 discharges and per 1000 hospital days were calculated and compared between inpatients with and without chronic conditions. Results: Pediatric trauma patients with chronic conditions experienced a higher medical error rate compared with patients without chronic conditions: 4.04 (95% confidence interval: 3.75–4.33) versus 1.07 (95% confidence interval: 0.98–1.16) per 100 discharges. The rate of medical error differed by type of chronic condition. After controlling for confounding factors, the presence of a chronic condition increased the adjusted odds ratio of medical error by 37% if one chronic condition existed (adjusted odds ratio: 1.37, 95% confidence interval: 1.21–1.5), and 69% if more than one chronic condition existed (adjusted odds ratio: 1.69, 95% confidence interval: 1.48–1.53). In the adjusted model, length of stay had the strongest association with medical error, but the adjusted odds ratio for chronic conditions and medical error remained significantly elevated even when accounting for the length of stay, suggesting that medical complexity has a role in medical error. Higher adjusted odds ratios were seen in other subgroups. Conclusion: Chronic conditions are associated with a significantly higher rate of medical errors in pediatric trauma patients. Future research should evaluate interventions or guidelines for reducing the risk of medical errors in pediatric trauma patients with chronic conditions.
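
The per-100-discharge and per-1000-day rates above, and an unadjusted odds ratio, follow directly from simple counts. A minimal sketch with hypothetical numbers (the study's odds ratios are adjusted for confounders, which this arithmetic does not reproduce):

```python
def rate_per_100_discharges(n_errors, n_discharges):
    """Medical error rate per 100 discharges."""
    return 100.0 * n_errors / n_discharges

def rate_per_1000_days(n_errors, hospital_days):
    """Medical error rate per 1000 hospital days."""
    return 1000.0 * n_errors / hospital_days

def odds_ratio(err_exposed, ok_exposed, err_unexposed, ok_unexposed):
    """Unadjusted odds ratio from a 2x2 table (exposure = chronic condition)."""
    return (err_exposed / ok_exposed) / (err_unexposed / ok_unexposed)
```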

  19. Savannah River Site human error data base development for nonreactor nuclear facilities

    International Nuclear Information System (INIS)

    Benhardt, H.C.; Held, J.E.; Olsen, L.M.; Vail, R.E.; Eide, S.A.

    1994-01-01

    As part of an overall effort to upgrade and streamline methodologies for safety analyses of nonreactor nuclear facilities at the Savannah River Site (SRS), a human error data base has been developed and is presented in this report. The data base fulfills several needs of risk analysts supporting safety analysis report (SAR) development. First, it provides a single source for probabilities or rates for a wide variety of human errors associated with the SRS nonreactor nuclear facilities. Second, it provides a documented basis for human error probabilities or rates. And finally, it provides actual SRS-specific human error data to support many of the error probabilities or rates. Use of a single, documented reference source for human errors, supported by SRS-specific human error data, will improve the consistency and accuracy of human error modeling by SRS risk analysts. It is envisioned that SRS risk analysts will use this report as both a guide to identifying the types of human errors that may need to be included in risk models such as fault and event trees, and as a source for human error probabilities or rates. For each human error in this report, several different mean probabilities or rates are presented to cover a wide range of conditions and influencing factors. The risk analysts must decide which mean value is most appropriate for each particular application. If other types of human errors are needed for the risk models, the analyst must use other sources. Finally, if human errors are dominant in the quantified risk models (based on the values obtained from this report), then it may be appropriate to perform detailed human reliability analyses (HRAs) for the dominant events. This document does not provide guidance for such refined HRAs; in such cases experienced human reliability analysts should be involved

  20. Fabrication of the multilayer beam splitters with large area for soft X-ray laser interferometer

    International Nuclear Information System (INIS)

    Wang Zhanshan; Zhang Zhong; Wang Fengli; Wu Wenjuan; Wang Hongchang; Qin Shuji; Chen Lingyan

    2004-01-01

    The soft X-ray laser Mach-Zehnder interferometer is an important tool to measure the electron densities of a laser-produced plasma near the critical surface. The design of a multilayer beam splitter at 13.9 nm for the soft X-ray laser Mach-Zehnder interferometer is completed based on the criterion of maximizing the product of reflectivity and transmission of the beam splitter. The beam splitters, Mo/Si multilayers on a 10 mm x 10 mm Si3N4 membrane, were fabricated using magnetron sputtering. The figure error of the beam splitter, measured with an optical profiler, has reached the nanometer scale, and the product of reflectivity and transmission measured by synchrotron radiation is up to 4%. (authors)

  1. A review of setup error in supine breast radiotherapy using cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia); Holloway, Lois [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales (Australia); Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales (Australia); Delaney, Geoff P. [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia)

    2016-10-01

    Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal image, as CBCT remains unproven to be of wide benefit in breast RT.
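
The systematic and random errors such reviews tabulate are conventionally computed from per-fraction setup shifts: Sigma is the SD of per-patient mean shifts and sigma is the root mean square of per-patient SDs. A sketch with hypothetical data:

```python
import statistics

def setup_error_summary(shifts_by_patient):
    """Population setup-error statistics from per-fraction shifts (mm),
    one list per patient: M = overall mean (group systematic error),
    Sigma = SD of per-patient means (systematic error),
    sigma = RMS of per-patient SDs (random error).
    Input values are hypothetical."""
    means = [statistics.mean(s) for s in shifts_by_patient]
    sds = [statistics.stdev(s) for s in shifts_by_patient]
    M = statistics.mean(means)
    Sigma = statistics.stdev(means)
    sigma = (sum(sd * sd for sd in sds) / len(sds)) ** 0.5
    return M, Sigma, sigma
```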

  2. A review of setup error in supine breast radiotherapy using cone-beam computed tomography

    International Nuclear Information System (INIS)

    Batumalai, Vikneswary; Holloway, Lois; Delaney, Geoff P.

    2016-01-01

    Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal image, as CBCT remains unproven to be of wide benefit in breast RT.

  3. A framework to estimate probability of diagnosis error in NPP advanced MCR

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Kim, Jong Hyun; Jang, Inseok; Seong, Poong Hyun

    2018-01-01

    Highlights: •As new type of MCR has been installed in NPPs, the work environment is considerably changed. •A new framework to estimate operators’ diagnosis error probabilities should be proposed. •Diagnosis error data were extracted from the full-scope simulator of the advanced MCR. •Using Bayesian inference, a TRC model was updated for use in advanced MCR. -- Abstract: Recently, a new type of main control room (MCR) has been adopted in nuclear power plants (NPPs). The new MCR, known as the advanced MCR, consists of digitalized human-system interfaces (HSIs), computer-based procedures (CPS), and soft controls while the conventional MCR includes many alarm tiles, analog indicators, hard-wired control devices, and paper-based procedures. These changes significantly affect the generic activities of the MCR operators, in relation to diagnostic activities. The aim of this paper is to suggest a framework to estimate the probabilities of diagnosis errors in the advanced MCR by updating a time reliability correlation (TRC) model. Using Bayesian inference, the TRC model was updated with the probabilities of diagnosis errors. Here, the diagnosis error data were collected from a full-scope simulator of the advanced MCR. To do this, diagnosis errors were determined based on an information processing model and their probabilities were calculated. However, these calculated probabilities of diagnosis errors were largely affected by context factors such as procedures, HSI, training, and others, known as PSFs (Performance Shaping Factors). In order to obtain the nominal diagnosis error probabilities, the weightings of PSFs were also evaluated. Then, with the nominal diagnosis error probabilities, the TRC model was updated. This led to the proposal of a framework to estimate the nominal probabilities of diagnosis errors in the advanced MCR.
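
As a minimal stand-in for the Bayesian update the abstract describes (the actual TRC model is not reproduced here), a conjugate Beta-binomial update of a diagnosis-error probability from simulator tallies might look like this, with hypothetical prior and counts:

```python
def beta_binomial_update(alpha, beta, n_errors, n_trials):
    """Conjugate update of a Beta(alpha, beta) prior on a diagnosis-error
    probability after observing n_errors in n_trials simulator runs.
    Returns the posterior parameters and posterior mean.
    Prior values and counts are hypothetical."""
    a = alpha + n_errors
    b = beta + n_trials - n_errors
    return a, b, a / (a + b)
```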

  4. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Full Text Available Abstract Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. The identification of their determinants helps to undertake designed interventions.
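
The two error rates quoted can be reproduced from the reported counts (415 error administrations among 1501 opportunities; 312 wrong-time errors, 10 of which co-occurred with another error type):

```python
def administration_error_rates(opportunities, error_admins, wrong_time, wt_with_other):
    """Error rate per 100 opportunities, with and without wrong-time errors.
    Administrations whose only error was timing are excluded from the
    second rate, following the study's arithmetic."""
    overall = 100.0 * error_admins / opportunities
    non_timing = error_admins - wrong_time + wt_with_other
    without_wt = 100.0 * non_timing / opportunities
    return overall, without_wt
```

With the study's numbers this gives 27.6% overall and 7.5% once wrong-time-only administrations are excluded (113/1501).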

  5. Computing in the presence of soft bit errors. [caused by single event upset on spacecraft

    Science.gov (United States)

    Rasmussen, R. D.

    1984-01-01

    It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of Linear Energy Transfer (LET), is discussed with reference to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
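
A classic software mitigation in the same family as the redundancy techniques listed (though not necessarily the Galileo implementation) is bitwise majority voting over three redundant copies of a state word, which masks an SEU in any single copy:

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote over three redundant copies of a word:
    for each bit position the value held by at least two copies wins,
    so a bit flip in any one copy is masked."""
    return (a & b) | (a & c) | (b & c)
```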

  6. A Computing Method to Determine the Performance of an Ionic Liquid Gel Soft Actuator.

    Science.gov (United States)

    He, Bin; Zhang, Chenghong; Zhou, Yanmin; Wang, Zhipeng

    2018-01-01

    A new type of soft actuator material-an ionic liquid gel (ILG) that consists of BMIMBF4, HEMA, DEAP, and ZrO2-is polymerized into a gel state under ultraviolet (UV) light irradiation. In this paper, we first propose that the ILG conforms to the assumptions of hyperelastic theory and that the Mooney-Rivlin model can be used to study the properties of the ILG. Under the five-parameter and nine-parameter Mooney-Rivlin models, the formulas for the calculation of the uniaxial tensile stress, plane uniform tensile stress, and 3D directional stress are deduced. The five-parameter and nine-parameter Mooney-Rivlin models of the ILG with a ZrO2 content of 3 wt% were obtained by uniaxial tensile testing, and the parameters are denoted as c10, c01, c20, c11, and c02 and c10, c01, c20, c11, c02, c30, c21, c12, and c03, respectively. Through the analysis and comparison of the uniaxial tensile stress between the calculated and experimental data, the error between the stress data calculated from the five-parameter Mooney-Rivlin model and the experimental data is less than 0.51%, and the error between the stress data calculated from the nine-parameter Mooney-Rivlin model and the experimental data is no more than 8.87%. Hence, our work presents a feasible and credible formula for the calculation of the stress of the ILG. This work opens a new path to assess the performance of a soft actuator composed of an ILG and will contribute to the optimized design of soft robots.
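
For the incompressible uniaxial case, the five-parameter Mooney-Rivlin stress can be sketched as follows; this is the standard hyperelastic result, and the coefficient values in the usage are hypothetical, not the fitted ILG parameters:

```python
def mr5_uniaxial_stress(lam, c10, c01, c20, c11, c02):
    """Cauchy stress for incompressible uniaxial tension under the
    five-parameter Mooney-Rivlin model (stretch lam, coefficients in
    consistent stress units). W = c10(I1-3) + c01(I2-3) + c20(I1-3)^2
    + c11(I1-3)(I2-3) + c02(I2-3)^2."""
    I1 = lam**2 + 2.0 / lam          # first invariant, lambda2 = lambda3 = lam**-0.5
    I2 = 2.0 * lam + 1.0 / lam**2    # second invariant
    dW_dI1 = c10 + 2.0 * c20 * (I1 - 3.0) + c11 * (I2 - 3.0)
    dW_dI2 = c01 + c11 * (I1 - 3.0) + 2.0 * c02 * (I2 - 3.0)
    return 2.0 * (lam**2 - 1.0 / lam) * (dW_dI1 + dW_dI2 / lam)
```

At lam = 1 (undeformed) the stress vanishes, as it must for any hyperelastic model.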

  7. Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems

    Directory of Open Access Journals (Sweden)

    Syed Imtiaz Husain

    2009-01-01

    Full Text Available Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes the UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented through a Rake. The aim to capture enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening or a time domain equalizer (TEQ) can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser and multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.
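
A crude way to quantify channel shortening (not the paper's BER-minimizing TEQ design) is to convolve the channel with the equalizer taps and measure the shortest window holding most of the effective channel's energy:

```python
def significant_taps(h, w, energy_frac=0.99):
    """Length of the shortest contiguous window of the effective channel
    c = h * w (convolution) that holds `energy_frac` of its energy -- a
    simple proxy for the number of significant taps a TEQ leaves."""
    # direct convolution of channel h with equalizer taps w
    c = [0.0] * (len(h) + len(w) - 1)
    for i, hi in enumerate(h):
        for j, wj in enumerate(w):
            c[i + j] += hi * wj
    energy = [x * x for x in c]
    total = sum(energy)
    for win in range(1, len(c) + 1):
        window = sum(energy[:win])
        best = window
        for k in range(win, len(c)):    # slide the window across c
            window += energy[k] - energy[k - win]
            best = max(best, window)
        if best >= energy_frac * total:
            return win
    return len(c)
```

A good TEQ drives this count down, allowing a Rake with far fewer fingers.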

  8. Waking the undead: Implications of a soft explosive model for the timing of placental mammal diversification.

    Science.gov (United States)

    Springer, Mark S; Emerling, Christopher A; Meredith, Robert W; Janečka, Jan E; Eizirik, Eduardo; Murphy, William J

    2017-01-01

    The explosive, long fuse, and short fuse models represent competing hypotheses for the timing of placental mammal diversification. Support for the explosive model, which posits both interordinal and intraordinal diversification after the KPg mass extinction, derives from morphological cladistic studies that place Cretaceous eutherians outside of crown Placentalia. By contrast, most molecular studies favor the long fuse model wherein interordinal cladogenesis occurred in the Cretaceous followed by intraordinal cladogenesis after the KPg boundary. Phillips (2016) proposed a soft explosive model that allows for the emergence of a few lineages (Xenarthra, Afrotheria, Euarchontoglires, Laurasiatheria) in the Cretaceous, but otherwise agrees with the explosive model in positing the majority of interordinal diversification after the KPg mass extinction. Phillips (2016) argues that rate transference errors associated with large body size and long lifespan have inflated previous estimates of interordinal divergence times, and further suggests that most interordinal divergences are positioned after the KPg boundary when rate transference errors are avoided through the elimination of calibrations in large-bodied and/or long lifespan clades. Here, we show that rate transference errors can also occur in the opposite direction and drag forward estimated divergence dates when calibrations in large-bodied/long lifespan clades are omitted. This dragging forward effect results in the occurrence of more than half a billion years of 'zombie lineages' on Phillips' preferred timetree. By contrast with ghost lineages, which are a logical byproduct of an incomplete fossil record, zombie lineages occur when estimated divergence dates are younger than the minimum age of the oldest crown fossils. We also present the results of new timetree analyses that address the rate transference problem highlighted by Phillips (2016) by deleting taxa that exceed thresholds for body size and lifespan
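
The zombie-lineage notion above is easy to make concrete: a lineage is "zombie" to the extent its estimated divergence date is younger than the oldest crown-group fossil (ages in Ma; the numbers in the usage are hypothetical):

```python
def zombie_lineage_ma(estimated_divergence_ma, oldest_crown_fossil_ma):
    """Duration (Ma) by which an estimated divergence date falls short of
    the oldest crown-group fossil; 0 when the estimate is consistent.
    (Ghost lineages, by contrast, extend a lineage back beyond its oldest
    fossil and are expected from an incomplete record.)"""
    return max(0.0, oldest_crown_fossil_ma - estimated_divergence_ma)
```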

  9. Analysis of the "naming game" with learning errors in communications.

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-16

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
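
A minimal mean-field version of the naming game with learning errors can be sketched as below (network topology omitted; parameters hypothetical). With a nonzero error rate the hearer occasionally stores a distorted word, inflating lexicon sizes as the abstract describes:

```python
import random

def naming_game(n_agents=50, error_rate=0.1, max_steps=200000, seed=1):
    """Mean-field naming game with learning errors: with probability
    `error_rate` the hearer mislearns the uttered word as a brand-new one.
    Returns the step at which global consensus is reached, or None."""
    rng = random.Random(seed)
    lexicons = [set() for _ in range(n_agents)]
    next_word = 0                          # counter minting fresh word ids
    for step in range(max_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not lexicons[speaker]:          # empty lexicon: invent a word
            word = next_word
            next_word += 1
            lexicons[speaker].add(word)
        else:
            word = rng.choice(sorted(lexicons[speaker]))
        if rng.random() < error_rate:      # learning error: distorted word
            heard = next_word
            next_word += 1
        else:
            heard = word
        if heard in lexicons[hearer]:      # success: both collapse to it
            lexicons[speaker] = {heard}
            lexicons[hearer] = {heard}
        else:                              # failure: hearer memorizes it
            lexicons[hearer].add(heard)
        if all(len(lex) == 1 for lex in lexicons) and \
           len({next(iter(lex)) for lex in lexicons}) == 1:
            return step
    return None
```

With error_rate = 0 this reduces to the standard minimal naming game, which converges to a single shared word.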

  10. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  11. Soft material for optical storage

    International Nuclear Information System (INIS)

    Lucchetti, L.; Simoni, F.

    2000-01-01

    The aim of transforming electronic networking into optical networking is producing a major effort in studying all optical processing and as a consequence in investigating the nonlinear optical properties of materials for this purpose. In this research area soft materials like polymers and liquid crystals are more and more attractive because they are cheap and they are more easily integrated in microcircuits hardware with respect to the well-known highly nonlinear crystals. Since optical processing spans a too wide field to be treated in one single paper, the authors will focus on one specific subject within this field and give a review of the most recent advances in studying the soft-materials properties interesting for the storage of optical information. The efforts in research of new materials and techniques for optical storage are motivated by the need to store and retrieve large amounts of data with short access time and high data rate at a competitive cost

  12. Soft Neutrosophic Loops and Their Generalization

    Directory of Open Access Journals (Sweden)

    Mumtaz Ali

    2014-06-01

    Full Text Available Soft set theory is a general mathematical tool for dealing with uncertain, fuzzy, not clearly defined objects. In this paper we introduce the soft neutrosophic loop, soft neutrosophic biloop, and soft neutrosophic N-loop, with a discussion of some of their characteristics. We also introduce a new type of soft neutrosophic loop, the so-called soft strong neutrosophic loop, which is of pure neutrosophic character. This notion is also found in all the other corresponding notions of soft neutrosophic theory. We also give some properties of this newly born soft structure related to the strong part of neutrosophic theory.

  13. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    International Nuclear Information System (INIS)

    Ogunmolu, O; Gans, N; Jiang, S; Gu, X

    2015-01-01

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control
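
One iteration of a position-based visual servo loop of the kind described might be sketched as a proportional controller from camera-measured head position to a saturated valve command; the gain, limits, units, and interface below are hypothetical, not the authors' implementation:

```python
def pbvs_step(z_measured, z_target, valve_cmd, kp=0.8, cmd_min=0.0, cmd_max=1.0):
    """One position-based visual-servo iteration: compare the depth-camera
    head position with the setpoint and update a saturated valve command
    that inflates/deflates the air bladder. Returns (command, error)."""
    error = z_target - z_measured
    cmd = min(cmd_max, max(cmd_min, valve_cmd + kp * error))
    return cmd, error
```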

  14. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    Energy Technology Data Exchange (ETDEWEB)

    Ogunmolu, O; Gans, N [The University of Texas at Dallas, Richardson, TX (United States); Jiang, S; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control

  15. Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents

    Directory of Open Access Journals (Sweden)

    Evelyn Porter

    2014-07-01

    Full Text Available Introduction: The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods: We analyzed data from the simulated resuscitations for the occurrence of a prescribing medication error. We performed univariate analysis of each variable against medication error rate, then a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables and error rate. Results: We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7% - 39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping greater than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio of 0.09 (95% CI 0.01 - 0.64). Conclusion: Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.
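    The odds ratio with a Wald confidence interval reported above can be computed from a 2x2 table as follows. The counts used here are hypothetical, chosen only to illustrate the calculation; they are not the study's raw data.

```python
import math

# Illustrative 2x2 table (hypothetical counts, NOT the study's data):
# rows: pharmacist present / absent; columns: medication error / no error.
a, b = 1, 14    # pharmacist present: errors, no errors
c, d = 12, 22   # pharmacist absent:  errors, no errors

odds_ratio = (a * d) / (b * c)                 # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Wald SE on the log scale
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

An odds ratio below 1 with an upper confidence bound below 1 would, as in the study, indicate a significant protective association.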

  16. The benefits of soft sensor and multi-rate control for the implementation of Wireless Networked Control Systems.

    Science.gov (United States)

    Mansano, Raul K; Godoy, Eduardo P; Porto, Arthur J V

    2014-12-18

    Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are reconfigurability, easy commissioning and the possibility of installation in places where cabling is impossible. Despite these advantages, there are two main problems which must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of wireless transmission: since the power supply is limited, it precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCSs is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is to generate virtual data, allowing actuation on the process faster than the maximum sampling period available for the wireless sensor. Experimental results have shown that the soft sensor solves the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor's potential for implementing energy-efficient WNCSs through the battery saving of industrial wireless sensors.
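    The core idea of the model-based soft sensor, generating virtual measurements between sparse wireless samples and re-synchronizing the model whenever a real sample arrives, can be sketched with a toy first-order process. All dynamics, rates, and the deliberate model mismatch below are illustrative assumptions, not the paper's actual process.

```python
# Sketch of a model-based soft sensor for multi-rate control.
# Process and model parameters are illustrative assumptions.

def run_soft_sensor(steps=200, sensor_period=10, a=0.95, a_model=0.94,
                    b=0.05, u=1.0):
    """First-order process y[k+1] = a*y[k] + b*u.

    The wireless sensor reports y only every `sensor_period` steps (to save
    battery); a slightly mismatched process model generates virtual
    measurements in between and is re-synchronized when a real sample arrives.
    """
    y_true, y_model = 0.0, 0.0
    max_virtual_err = 0.0
    for k in range(steps):
        y_true = a * y_true + b * u            # real (unmeasured) process
        y_model = a_model * y_model + b * u    # soft-sensor prediction
        if k % sensor_period == 0:
            y_model = y_true                   # real wireless sample: resync
        max_virtual_err = max(max_virtual_err, abs(y_true - y_model))
    return y_true, max_virtual_err

y_final, worst_err = run_soft_sensor()
```

Between real samples the model drifts only slightly, so the controller can act at the fast rate while the battery-powered sensor transmits ten times less often.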

  17. Fundamentals of soft robot locomotion.

    Science.gov (United States)

    Calisti, M; Picardi, G; Laschi, C

    2017-05-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not limited to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s).

  18. Soft-Material Robotics

    OpenAIRE

    Wang, L; Nurzaman, SG; Iida, Fumiya

    2017-01-01

    Research activity in robotics using soft materials has surged in the past ten years. It is expected that the use and control of soft materials can help realize robotic systems that are safer, cheaper, and more adaptable than conventional rigid-material robots can achieve. Contrary to a number of existing review and position papers on soft-material robotics, which mostly present case studies and/or discuss trends and challenges, the review focuses on the fun...

  19. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
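    For intuition, the classic simulation-extrapolation (SIMEX) procedure for additive measurement error is sketched below: extra error is deliberately added at increasing levels, the attenuated naive slope is recorded, and a curve fitted to slope-versus-noise-level is extrapolated back to the no-error case. This is the standard additive version only; the paper's contribution, handling binomial (heteroscedastic) error, is not reproduced here. All data are synthetic.

```python
import numpy as np

# Classic additive-error SIMEX sketch on synthetic data.

rng = np.random.default_rng(0)
n, beta, sigma_u2 = 20000, 2.0, 0.5            # sample size, true slope, error var
x = rng.normal(0.0, 1.0, n)                     # true (unobserved) predictor
w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)   # observed predictor with error
y = beta * x + rng.normal(0.0, 0.2, n)

def naive_slope(pred):
    """Ordinary least-squares slope ignoring measurement error."""
    return np.polyfit(pred, y, 1)[0]

# Simulation step: add extra error of variance lambda*sigma_u2, average slopes.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([naive_slope(w + rng.normal(0, np.sqrt(l * sigma_u2), n))
                   for _ in range(20)]) for l in lambdas]

# Extrapolation step: fit a quadratic in lambda, evaluate at lambda = -1
# (the hypothetical "no measurement error" point).
coeffs = np.polyfit(lambdas, slopes, 2)
simex_slope = np.polyval(coeffs, -1.0)
```

The naive slope is attenuated toward zero; the extrapolated SIMEX slope recovers most, though not all, of the bias, which is the known behavior of the quadratic extrapolant.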

  1. Error and discrepancy in radiology: inevitable or avoidable?

    OpenAIRE

    Brady, Adrian P.

    2016-01-01

    Abstract Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and ...

  2. Teaching Soft Skills Employers Need

    Science.gov (United States)

    Ellis, Maureen; Kisling, Eric; Hackworth, Robbie G.

    2014-01-01

    This study identifies the soft skills community colleges teach in an office technology course and determines whether the skills taught are congruent with the soft skills employers require in today's entry-level office work. A qualitative content analysis of a community college office technology soft skills course was performed using 23 soft skills…

  3. Silo outflow of soft frictionless spheres

    Science.gov (United States)

    Ashour, Ahmed; Trittel, Torsten; Börzsönyi, Tamás; Stannarius, Ralf

    2017-12-01

    Outflow of granular materials from silos is a remarkably complex physical phenomenon that has been extensively studied with simple objects like monodisperse hard disks in two dimensions (2D) and hard spheres in 2D and 3D. For those materials, empirical equations were found that describe the discharge characteristics. Softness adds qualitatively new features to the dynamics and to the character of the flow. We report a study of the outflow of soft, practically frictionless hydrogel spheres from a quasi-2D bin. Prominent features are intermittent clogs, peculiar flow fields in the container, and a pronounced dependence of the flow rate and clogging statistics on the container fill height. The latter is a consequence of the ineffectiveness of Janssen's law: the pressure at the bottom of a bin containing hydrogel spheres grows linearly with the fill height.

  4. [Medication errors in Spanish intensive care units].

    Science.gov (United States)

    Merino, P; Martín, M C; Alonso, A; Gutiérrez, I; Alvarez, J; Becerril, F

    2013-01-01

    To estimate the incidence of medication errors in Spanish intensive care units. Post hoc study of the SYREC trial. A longitudinal observational study carried out during 24 hours in patients admitted to the ICU. Spanish intensive care units. Patients admitted to the intensive care unit participating in the SYREC during the period of study. Risk, individual risk, and rate of medication errors. The final study sample consisted of 1017 patients from 79 intensive care units; 591 (58%) were affected by one or more incidents. Of these, 253 (43%) had at least one medication-related incident. The total number of incidents reported was 1424, of which 350 (25%) were medication errors. The risk of suffering at least one incident was 22% (IQR: 8-50%) while the individual risk was 21% (IQR: 8-42%). The medication error rate was 1.13 medication errors per 100 patient-days of stay. Most incidents occurred in the prescription (34%) and administration (28%) phases, 16% resulted in patient harm, and 82% were considered "totally avoidable". Medication errors are among the most frequent types of incidents in critically ill patients, and are more common in the prescription and administration stages. Although most such incidents have no clinical consequences, a significant percentage prove harmful for the patient, and a large proportion are avoidable. Copyright © 2012 Elsevier España, S.L. and SEMICYUC. All rights reserved.

  5. The Sustained Influence of an Error on Future Decision-Making.

    Science.gov (United States)

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
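    The drift diffusion account above, in which a raised post-error decision threshold trades speed for accuracy, can be illustrated with a toy simulation: evidence accumulates noisily toward one of two bounds, and a higher bound yields slower but more accurate responses. The parameters are illustrative, not fitted values from the study.

```python
import numpy as np

# Toy drift-diffusion simulation of the threshold/accuracy trade-off.
# Parameters are illustrative assumptions, not the paper's fitted values.

def simulate_ddm(threshold, drift=1.0, noise=1.0, dt=0.005,
                 n_trials=1000, seed=1):
    """Simulate symmetric two-bound diffusion; return mean RT and accuracy."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < threshold:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(evidence >= threshold)   # upper bound = correct response
    return float(np.mean(rts)), float(np.mean(correct))

rt_low, acc_low = simulate_ddm(threshold=0.5)    # baseline threshold
rt_high, acc_high = simulate_ddm(threshold=1.0)  # raised post-error threshold
```

The raised threshold produces longer reaction times and higher accuracy, the adaptive component of post-error slowing described above; the momentary drop in accumulation rate is a separate, non-adaptive component not modeled here.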

  6. The Sustained Influence of an Error on Future Decision-Making

    Directory of Open Access Journals (Sweden)

    Björn C. Schiffler

    2017-06-01

    Full Text Available Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants’ response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters’ role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.

  7. Spin waves in the soft layer of exchange-coupled soft/hard bilayers

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, Zheng-min; Ge, Su-qin; Wang, Xi-guang; Li, Zhi-xiong; Xia, Qing-lin; Wang, Dao-wei; Nie, Yao-zhuang; Guo, Guang-hua, E-mail: guogh@mail.csu.edu.cn [School of Physics and Electronics, Central South University, Changsha 410083 (China); Tang, Wei [School of Physics and Electronics, Central South University, Changsha 410083 (China); Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences, Suzhou 215123 (China); Zeng, Zhong-ming [Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences, Suzhou 215123 (China)

    2016-05-15

    The magnetic dynamical properties of the soft layer in exchange-coupled soft/hard bilayers have been investigated numerically using a one-dimensional atomic chain model. The frequencies and spatial profiles of spin wave eigenmodes are calculated during the magnetization reversal process of the soft layer. The spin wave modes exhibit a spatially modulated amplitude, which is especially evident for high-order modes. A dynamic pinning effect of surface magnetic moment is observed. The spin wave eigenfrequency decreases linearly with the increase of the magnetic field in the uniformly magnetized state and increases nonlinearly with field when spiral magnetization configuration is formed in the soft layer.
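    The linear dependence of spin wave eigenfrequency on field in the uniformly magnetized state can be demonstrated with a linearized one-dimensional chain: each mode frequency is the Zeeman term plus an exchange eigenvalue of the chain's discrete Laplacian, so all modes shift rigidly with the field. The sketch below uses dimensionless units (gamma = 1, exchange stiffness d = 1) and is an illustration of the principle, not a reproduction of the paper's atomistic model.

```python
import numpy as np

# Linearized spin-wave frequencies of a uniformly magnetized 1D chain with
# free ends, in dimensionless illustrative units (gamma = 1, d = 1).

def spinwave_freqs(n_sites=50, field=0.1, d=1.0, gamma=1.0):
    """Mode frequencies: gamma*(H + d * Laplacian eigenvalues), ascending."""
    lap = np.zeros((n_sites, n_sites))
    for i in range(n_sites):
        if i > 0:
            lap[i, i] += 1.0
            lap[i, i - 1] -= 1.0
        if i < n_sites - 1:
            lap[i, i] += 1.0
            lap[i, i + 1] -= 1.0
    exchange_eigs = np.linalg.eigvalsh(lap)   # 2*(1 - cos k_n) spectrum
    return gamma * (field + d * exchange_eigs)

f1 = spinwave_freqs(field=0.1)
f2 = spinwave_freqs(field=0.3)
```

The lowest eigenvalue of the Laplacian is zero (the uniform precession mode), so the fundamental frequency equals gamma*H, and every higher mode shifts by exactly gamma*(H2 - H1) when the field changes, reproducing the linear field dependence noted in the abstract. The nonlinear behavior in the spiral state requires the full nonuniform ground state and is not captured here.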

  8. Spin waves in the soft layer of exchange-coupled soft/hard bilayers

    Directory of Open Access Journals (Sweden)

    Zheng-min Xiong

    2016-05-01

    Full Text Available The magnetic dynamical properties of the soft layer in exchange-coupled soft/hard bilayers have been investigated numerically using a one-dimensional atomic chain model. The frequencies and spatial profiles of spin wave eigenmodes are calculated during the magnetization reversal process of the soft layer. The spin wave modes exhibit a spatially modulated amplitude, which is especially evident for high-order modes. A dynamic pinning effect of surface magnetic moment is observed. The spin wave eigenfrequency decreases linearly with the increase of the magnetic field in the uniformly magnetized state and increases nonlinearly with field when spiral magnetization configuration is formed in the soft layer.

  9. A Smart Soft Sensor Predicting Feedwater Flow Rate

    International Nuclear Information System (INIS)

    Yang, Heon Young; Na, Man Gyun

    2009-01-01

    Since we evaluate thermal nuclear reactor power with secondary system calorimetric calculations based on feedwater flow rate measurements, we need to measure the feedwater flow rate accurately. The Venturi flow meters that are being used to measure the feedwater flow rate in most pressurized water reactors (PWRs) measure the flow rate by developing a differential pressure across a physical flow restriction. The differential pressure is then multiplied by a calibration factor that depends on various flow conditions in order to calculate the feedwater flow rate. The calibration factor is determined by the feedwater temperature and pressure. However, Venturi meters cause a buildup of corrosion products near the orifice of the meter. This fouling increases the measured pressure drop across the meter, thereby causing an overestimation of the feedwater flow rate.
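    The fouling mechanism described above follows directly from the standard Venturi relation: indicated flow scales with the square root of the measured pressure drop, so any fouling-induced extra pressure drop reads as extra flow. The sketch below uses the textbook Venturi equation with illustrative values (not plant data).

```python
import math

# Standard Venturi relation, illustrating why fouling biases the reading high.
# All numeric values below are illustrative, not plant data.

def venturi_flow(dp_pa, rho=740.0, d_pipe=0.35, d_throat=0.25, cd=0.98):
    """Mass flow rate (kg/s) inferred from differential pressure dp_pa (Pa)."""
    a_throat = math.pi * (d_throat / 2) ** 2
    beta = d_throat / d_pipe                    # diameter ratio
    return cd * a_throat * math.sqrt(2 * rho * dp_pa / (1 - beta ** 4))

true_flow = venturi_flow(50e3)                  # clean meter
fouled_reading = venturi_flow(50e3 * 1.04)      # fouling adds ~4% to measured dp
overestimate = fouled_reading / true_flow - 1   # sqrt(1.04) - 1, about 2%
```

Because flow goes as the square root of the pressure drop, a 4% fouling-induced increase in differential pressure produces roughly a 2% flow (and hence thermal power) overestimate.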

  10. SHEAN (Simplified Human Error Analysis code) and automated THERP

    International Nuclear Information System (INIS)

    Wilson, J.R.

    1993-01-01

    One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP, but a valuable screening tool. ASEP involves answering simple questions about the scenario in question, and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required, and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. WINCO did just that, calling the code SHEAN, for Simplified Human Error ANalysis. The code was written in TURBO Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN.
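    The question-then-look-up flow that makes ASEP easy to computerize can be sketched as a small table-driven function. The screening questions and every probability below are invented placeholders to show the structure; they are NOT Swain's published ASEP values.

```python
# Toy sketch of an ASEP/SHEAN-style screening flow: answer simple questions,
# then read a human error probability (HEP) from a look-up table.
# Table values are invented placeholders, NOT Swain's actual ASEP numbers.

SCREENING_TABLE = {
    # (step is dynamic?, adequate time available?) -> screening HEP
    (True,  False): 1.0,    # dynamic task, inadequate time: assume failure
    (True,  True):  0.05,
    (False, False): 0.1,
    (False, True):  0.02,
}

def screening_hep(dynamic: bool, adequate_time: bool) -> float:
    """Return a conservative screening human error probability."""
    return SCREENING_TABLE[(dynamic, adequate_time)]

hep = screening_hep(dynamic=True, adequate_time=True)
```

A full implementation would chain many such questions and tables; the point is that no human factors expertise is needed to traverse them, which is what made the method attractive to automate.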

  11. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)
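    The effect of a systematic oracle phase error can be illustrated with a small statevector simulation of ordinary Grover search: ideally the oracle multiplies the marked amplitude by e^(i*pi) = -1, and a systematic error delta makes it e^(i*(pi+delta)). This sketch shows the kind of phase error analyzed above; it does not reproduce the optimized quantum random-walk algorithm itself.

```python
import numpy as np

# Statevector toy model of Grover search with an imperfect oracle phase.
# Illustrates systematic phase error; not the paper's random-walk algorithm.

def grover_success(n_qubits=3, delta=0.0, target=0):
    """Success probability after the usual ~(pi/4)*sqrt(N) Grover iterations."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N), dtype=complex)   # uniform superposition
    iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):
        state[target] *= np.exp(1j * (np.pi + delta))   # imperfect oracle
        mean = state.mean()
        state = 2 * mean - state                        # diffusion operator
    return abs(state[target]) ** 2

p_ideal = grover_success(delta=0.0)
p_noisy = grover_success(delta=0.4)
```

For 3 qubits the ideal success probability after two iterations is about 0.945; a sizable systematic phase error degrades it, which is the baseline sensitivity against which the optimized random-walk algorithm's robustness is compared.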

  12. Nasal Soft-Tissue Triangle Deformities.

    Science.gov (United States)

    Foda, Hossam M T

    2016-08-01

    The soft-tissue triangle is one of the least attended-to areas in rhinoplasty. Any postoperative retraction, notching, or asymmetry of the soft triangles can seriously affect the rhinoplasty outcome. A good understanding of the risk factors predisposing to soft triangle deformities is necessary to prevent such problems. The commonest risk factors in our study were a wide vertical domal angle between the lateral and intermediate crura, and an increased length of the intermediate crus. Two types of soft triangle grafts are described to prevent and treat soft triangle deformities. These grafts resulted in excellent long-term aesthetic and functional improvement. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  13. Dynamic deformation of soft soil media: Experimental studies and mathematical modeling

    Science.gov (United States)

    Balandin, V. V.; Bragov, A. M.; Igumnov, L. A.; Konstantinov, A. Yu.; Kotov, V. L.; Lomunov, A. K.

    2015-05-01

    A complex experimental-theoretical approach to the problem of high-rate strain of soft soil media is presented. This approach combines the following contemporary methods of dynamic testing: the modified Hopkinson-Kolsky method applied to medium specimens contained in holders, and the method of plane-wave shock experiments. The following dynamic characteristics of sandy soils are obtained: shock adiabatic curves, bulk compressibility curves, and shear resistance curves. The obtained experimental data are used to study the high-rate strain process in a split pressure bar system, and the constitutive relations of Grigoryan's mathematical model of soft soil media are verified by comparing the results of computations with those of physical impact and penetration experiments.

  14. Synergy of modeling processes in the area of soft and hard modeling

    Directory of Open Access Journals (Sweden)

    Sika Robert

    2017-01-01

    Full Text Available High complexity of production processes results in more frequent use of computer systems for their modeling and simulation. Process modeling helps to find an optimal solution, verify assumptions before implementation and eliminate errors. In practice, modeling of production processes concerns two areas: hard modeling (based on differential equations of mathematical physics) and soft modeling (based on existing data). In the paper the possibility of a synergistic connection of these two approaches is indicated: supporting hard modeling with the tools used in soft modeling. This aims at significantly reducing the time needed to obtain final results from hard modeling. Tests were carried out in the Calibrate module of the NovaFlow&Solid (NF&S) simulation system in the frame of thermal analysis (ATAS-cup). The authors tested output-value forecasting in the NF&S system (solidification time) on the basis of variable parameters of the thermal model (heat conduction, specific heat, density). The collected data were used as input to prepare a soft model with an MLP (Multi-Layer Perceptron) neural network regression model. The approach described above reduces the time of production process modeling with hard modeling and should encourage production companies to use it.
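    The soft-model side of this synergy, training an MLP regressor on a table of hard-model inputs and outputs so later predictions skip the expensive simulation, can be sketched with a tiny numpy network. The target function below merely stands in for simulator runs (e.g. solidification time versus thermal parameters) and is invented for illustration.

```python
import numpy as np

# Tiny MLP surrogate ("soft model") trained on synthetic hard-model output.
# The target function is an invented stand-in for simulator results.

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (200, 3))   # conductivity, specific heat, density
y = (2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2]).reshape(-1, 1)

# One hidden layer, tanh activation, trained with plain gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
mse_initial = float(np.mean((pred0 - y) ** 2))

lr = 0.05
for _ in range(3000):
    h, pred = forward(X)
    grad_out = 2 * (pred - y) / len(X)          # d(MSE)/d(pred)
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)     # backprop through tanh
    gW1 = X.T @ grad_h;   gb1 = grad_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
mse_final = float(np.mean((pred - y) ** 2))
```

Once trained, evaluating the surrogate costs microseconds per query, versus a full simulation run per query for the hard model, which is the time saving the paper exploits.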

  15. Failures without errors: quantification of context in HRA

    International Nuclear Information System (INIS)

    Fujita, Yushi; Hollnagel, Erik

    2004-01-01

    PSA-cum-human reliability analysis (HRA) has traditionally used individual human actions, hence individual 'human errors', as a meaningful unit of analysis. This is inconsistent with the current understanding of accidents, which points out that the notion of 'human error' is ill defined and that adverse events are more often due to the working conditions than to the people. Several HRA approaches, such as ATHEANA and CREAM, have recognised this conflict and proposed ways to deal with it. This paper describes an improvement of the basic screening method in CREAM, whereby a rating of the performance conditions can be used to calculate a Mean Failure Rate directly without invoking the notion of human error.
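    CREAM's basic screening idea, rating the common performance conditions (CPCs) and mapping the balance of improving versus reducing ratings to a control mode with an associated failure-probability band, can be sketched as follows. The bands mimic CREAM's general shape, but the thresholds and probabilities here are placeholders, not the published method's numbers.

```python
# Toy sketch of CREAM-style basic screening. Threshold scores and probability
# intervals are placeholders, NOT the published CREAM values.

CONTROL_MODES = [
    # (minimum CPC score, control mode, failure probability interval)
    (4,  "strategic",     (1e-5, 1e-2)),
    (0,  "tactical",      (1e-3, 1e-1)),
    (-4, "opportunistic", (1e-2, 0.5)),
    (-9, "scrambled",     (1e-1, 1.0)),
]

def screen(cpc_ratings):
    """cpc_ratings: CPC name -> +1 (improves), 0 (neutral), -1 (reduces)."""
    score = sum(cpc_ratings.values())
    for min_score, mode, interval in CONTROL_MODES:
        if score >= min_score:
            return mode, interval
    return "scrambled", (1e-1, 1.0)

mode, (p_lo, p_hi) = screen({"working conditions": 1, "time of day": 0,
                             "adequacy of training": 1, "available time": -1})
```

The point of the scheme is exactly what the abstract argues: the rate comes from the working conditions as a whole, not from counting individual 'human errors'.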

  16. Human errors, countermeasures for their prevention and evaluation

    International Nuclear Information System (INIS)

    Kohda, Takehisa; Inoue, Koichi

    1992-01-01

    Accidents originating in human error continue to occur, as in the recent large accidents at TMI and Chernobyl. The proportion of accidents originating in human error is unexpectedly high; while the reliability and safety of hardware will continue to improve, a comparable improvement in human reliability cannot be expected. Human errors arise from the difference between the function required of people and the function they actually accomplish, and their results exert adverse effects on systems. Human errors are classified into design errors, manufacturing errors, operation errors, maintenance errors, checkup errors and general handling errors. In terms of behavior, human errors are classified into forgetting to act, failing to act, doing what must not be done, acting in the wrong order, and acting at an improper time. The factors in human error occurrence are circumstantial factors, personal factors and stress factors. As methods of analyzing and evaluating human errors, system engineering methods such as probabilistic risk assessment are used. The technique for human error rate prediction, the method for human cognitive reliability, the confusion matrix and SLIM-MAUD are also used. (K.I.)

  17. Holiday fun with soft gluons

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Emissions of soft gluons from energetic particles play an important role in collider processes. While the basic physics of soft emissions is simple, it gives rise to a variety of interesting and intricate phenomena (non-global logs, Glauber phases, super-leading logs, factorization breaking). After an introduction, I will review progress in resummation methods such as Soft-Collinear Effective Theory driven by a better understanding of soft emissions. I will also show some new results for computations of soft-gluon effects in gap-between-jets and isolation-cone cross sections.

  18. Adaptive Channel Estimation based on Soft Information Processing in Broadband Spatial Multiplexing Receivers

    Directory of Open Access Journals (Sweden)

    P. Beinschob

    2010-11-01

    Full Text Available In this paper we present a novel approach to Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) channel estimation based on a Decision Directed Recursive Least Squares (RLS) algorithm, in which no pilot symbols need to be integrated in the data after a short initial preamble. The novelty and key concept of the proposed technique is the block-wise causal and anti-causal RLS processing that yields two independent RLS passes along with the associated decisions. Due to the usage of a low density parity check (LDPC) channel code, the receiver operates with soft information, which enables us to introduce a new modification of the Turbo principle as well as a simple information combining approach based on approximated a posteriori log-likelihood ratios (LLRs). Although the computational complexity is increased by both of our approaches, the latter is relatively less complex than the former. Simulation results show that these implementations outperform the simple RLS-DDCE algorithm and yield lower bit error rates (BER) and more accurate channel estimates.
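    The RLS core of such an estimator can be sketched for a single FIR channel with QPSK symbols. The sketch below uses known symbols throughout, as during the initial preamble; the paper's decision-directed operation, block-wise causal/anti-causal processing, and LDPC soft-information combining are not reproduced. All channel and noise values are illustrative.

```python
import numpy as np

# Minimal RLS channel estimator for a 3-tap FIR channel with QPSK symbols.
# Known symbols are used (preamble-style); illustrative parameters only.

rng = np.random.default_rng(2)
L, n_sym, lam = 3, 400, 0.99                   # taps, symbols, forgetting factor
h_true = np.array([0.8 + 0.2j, -0.4 + 0.1j, 0.2 - 0.3j])
sym = (rng.choice([-1, 1], n_sym) + 1j * rng.choice([-1, 1], n_sym)) / np.sqrt(2)
y = np.convolve(sym, h_true)[:n_sym] + 0.01 * (rng.standard_normal(n_sym)
                                               + 1j * rng.standard_normal(n_sym))

h_est = np.zeros(L, dtype=complex)
P = np.eye(L) * 100.0                          # inverse correlation matrix
for k in range(L, n_sym):
    x = sym[k - L + 1:k + 1][::-1]             # regressor: latest L symbols
    e = y[k] - h_est @ x                       # a priori estimation error
    g = P @ x.conj() / (lam + x @ P @ x.conj())  # RLS gain vector
    h_est = h_est + g * e
    P = (P - np.outer(g, x @ P)) / lam         # update inverse correlation

err = np.linalg.norm(h_est - h_true)
```

In a decision-directed receiver, `sym[k]` would be replaced by hard or soft symbol decisions once the preamble ends, which is exactly where the paper's soft-information processing improves robustness.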

  19. The Benefits of Soft Sensor and Multi-Rate Control for the Implementation of Wireless Networked Control Systems

    Directory of Open Access Journals (Sweden)

    Raul K. Mansano

    2014-12-01

    Full Text Available Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are reconfigurability, easy commissioning and the possibility of installation in places where cabling is impossible. Despite these advantages, there are two main problems which must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of wireless transmission: since the power supply is limited, it precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCSs is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is to generate virtual data, allowing actuation on the process faster than the maximum sampling period available for the wireless sensor. Experimental results have shown that the soft sensor solves the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor's potential for implementing energy-efficient WNCSs through the battery saving of industrial wireless sensors.

  20. Fixing soft margins

    NARCIS (Netherlands)

    P. Kofman (Paul); A. Vaal, de (Albert); C.G. de Vries (Casper)

    1993-01-01

    textabstractNon-parametric tolerance limits are employed to calculate soft margins such as advocated in Williamson's target zone proposal. In particular, the tradeoff between softness and zone width is quantified. This may be helpful in choosing appropriate margins. Furthermore, it offers
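    Two-sided non-parametric tolerance limits of the kind employed above can be computed from Wilks' formula: the confidence that the sample minimum and maximum bracket at least a fraction p of the population is 1 - n(1-p)p^(n-1) - p^n. The sketch below finds the minimum sample size for given coverage and confidence; how these limits map onto target-zone margin width is the paper's subject and is not reproduced here.

```python
# Two-sided distribution-free tolerance limits via Wilks' formula.

def coverage_confidence(n, p):
    """Confidence that (min, max) of n i.i.d. draws covers >= p of the population."""
    return 1.0 - n * (1.0 - p) * p ** (n - 1) - p ** n

def min_sample_size(p=0.95, confidence=0.95):
    """Smallest n whose (min, max) gives the requested coverage/confidence."""
    n = 2
    while coverage_confidence(n, p) < confidence:
        n += 1
    return n

n_needed = min_sample_size()   # classic two-sided 95%/95% answer: n = 93
```

Because the limits are order statistics, no distributional assumption is needed, which is what makes them suitable for quantifying "soft" margins.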