WorldWideScience

Sample records for error control coding

  1. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. The book treats convolutional, turbo, low-density parity-check (LDPC), and polar codes in a unified framework, covers advanced research-related developments such as spatial coupling, and focuses on the algorithmic and implementation aspects of error control coding.

  2. Deciphering the genetic regulatory code using an inverse error control coding framework.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie; Watson, Jean-Paul

    2005-03-01

    We have found that developing a computational framework for reconstructing error control codes for engineered data, and ultimately for deciphering genetic regulatory coding sequences, is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept for the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high-risk, high-payoff realm into the highly probable, high-payoff domain. Additionally, this work will impact biological sensor development and the ability to model and ultimately develop defense mechanisms against bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Towards this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes. Use these methods to determine error control (EC) code parameters for gene regulatory sequences. (2) Develop an evolutionary computing computational framework for near-optimal solutions to the algebraic code reconstruction problem. The method will be tested on engineered and biological sequences.

  3. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  4. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.

  5. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  6. Performance of an Error Control System with Turbo Codes in Powerline Communications

    Directory of Open Access Journals (Sweden)

    Balbuena-Campuzano Carlos Alberto

    2014-07-01

    This paper reports the performance of turbo codes as an error control technique in PLC (Powerline Communications) data transmissions. For this system, computer simulations are used for modeling data networks based on the model classified in the technical literature as indoor, using OFDM (Orthogonal Frequency Division Multiplexing) as the modulation technique. Taking into account the channel, modulation, and turbo codes, we propose a methodology to minimize the bit error rate (BER) as a function of the average received signal-to-noise ratio (SNR).
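
    As a rough illustration of the kind of BER-versus-SNR curve such a study produces, the sketch below simulates uncoded BPSK over an AWGN channel. This is a deliberate simplification: the paper's indoor powerline channel, OFDM modulation, and turbo coding are replaced by the simplest possible link, so only the general shape of the curve carries over.

```python
# Minimal Monte Carlo BER-vs-SNR sketch (uncoded BPSK over AWGN), a simplified
# stand-in for the paper's OFDM powerline channel with turbo coding.
import math
import random

def ber_bpsk_awgn(snr_db: float, num_bits: int = 200_000) -> float:
    """Estimate the bit error rate of BPSK over AWGN at the given Eb/N0 (dB)."""
    snr = 10 ** (snr_db / 10)
    noise_std = math.sqrt(1 / (2 * snr))     # unit-energy antipodal symbols
    errors = 0
    for _ in range(num_bits):
        symbol = random.choice((-1.0, 1.0))
        received = symbol + random.gauss(0.0, noise_std)
        errors += (received < 0) != (symbol < 0)   # sign decision errors
    return errors / num_bits

for snr_db in (0, 2, 4, 6, 8):
    print(f"Eb/N0 = {snr_db} dB: BER ~ {ber_bpsk_awgn(snr_db):.4f}")
```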

  7. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  8. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction by representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions.

  9. Error Correcting Codes

    Indian Academy of Sciences (India)

    information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training ...

  10. ERROR-CONTROL CODING OF ADS-B MESSAGES FOR IRIDIUM SATELLITES

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2013-12-01

    For modelling the transmission of ADS-B messages over the low-orbit satellite constellation Iridium, a model of the communication channel “Aircraft - Satellite - Ground Station” was built using MATLAB Simulink. This model made it possible to investigate the dependence of the Bit Error Rate on the type of signal coding/decoding, the Eb/N0 ratio, and the satellite repeater gain.

  11. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
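
    The core quantity in this abstract has a very compact computational form. The sketch below is a minimal Rescorla-Wagner-style update illustrating the definition of a reward prediction error; the learning rate and reward sequence are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of prediction-error learning (Rescorla-Wagner form),
# illustrating the abstract's definition: error = received - predicted reward.

def update_prediction(predicted, received, learning_rate=0.1):
    """Return the prediction error and the updated prediction."""
    error = received - predicted          # positive: more reward than predicted
    predicted += learning_rate * error    # negative: less reward than predicted
    return error, predicted

prediction = 0.0
for trial, reward in enumerate([1.0, 1.0, 1.0, 0.0, 1.0]):
    error, prediction = update_prediction(prediction, reward)
    print(f"trial {trial}: error={error:+.3f}, new prediction={prediction:.3f}")
```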

  12. Biometrics based key management of double random phase encoding scheme using error control codes

    Science.gov (United States)

    Saini, Nirmala; Sinha, Aloka

    2013-08-01

    In this paper, an optical security system has been proposed in which key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to the biometric variation is corrected by encoding the key using the BCH code. A user specific shuffling key is used to increase the separation between genuine and impostor Hamming distance distribution. This shuffling key is then further secured using the RSA public key encryption to enhance the security of the system. XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA encoded shuffling key and the data obtained from the XOR operation are stored into a token. The main advantage of the present technique is that the key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process by using the live biometrics of the user.
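
    The XOR-binding step described above follows the general pattern of a fuzzy-commitment construction. The sketch below illustrates it with a 5-bit repetition code standing in for the paper's BCH code; the feature vectors are invented, and the shuffling-key and RSA steps are omitted.

```python
# Sketch of XOR-binding a codeword to a biometric feature vector: a noisy but
# nearby live capture still recovers the key after error-correction decoding.
# A 5-bit repetition code is a stand-in for the paper's BCH code.
import random

R = 5  # repetition factor (stand-in for a BCH code's redundancy)

def encode(key_bits):                    # repetition encoding
    return [b for b in key_bits for _ in range(R)]

def decode(code_bits):                   # majority-vote decoding
    return [int(sum(code_bits[i:i + R]) > R // 2)
            for i in range(0, len(code_bits), R)]

key = [1, 0, 1, 1]
enrolled = [random.randint(0, 1) for _ in range(len(key) * R)]   # feature vector
token = [c ^ f for c, f in zip(encode(key), enrolled)]           # stored data

# Live capture differs from enrollment in a few positions (biometric variation).
live = enrolled.copy()
for i in random.sample(range(len(live)), 2):
    live[i] ^= 1

recovered = decode([t ^ f for t, f in zip(token, live)])
print(recovered == key)    # True: up to 2 flips per 5-bit group are corrected
```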

  13. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step.

  14. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for current and next-generation standards; • Provides coverage of industrial user needs for advanced error-correcting techniques.

  15. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  16. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  17. Hypoxia in the St. Lawrence Estuary: How a Coding Error Led to the Belief that "Physics Controls Spatial Patterns".

    Directory of Open Access Journals (Sweden)

    Daniel Bourgault

    Two fundamental sign errors were found in a computer code used for studying the oxygen minimum zone (OMZ) and hypoxia in the Estuary and Gulf of St. Lawrence. These errors invalidate the conclusions drawn from the model, and call into question a proposed mechanism for generating the OMZ that challenges classical understanding. The study in question is being cited frequently, leading the discipline in the wrong direction.

  18. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  19. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  20. Probability of undetected error after decoding for a concatenated coding scheme

    Science.gov (United States)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
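
    A minimal Monte Carlo sketch of this scheme is given below, with a 3-bit repetition code as the inner code and a single parity check as the outer code. Both codes are stand-ins chosen for brevity, not the codes analyzed in the paper; the simulation merely illustrates what "probability of undetected error" means in this setting.

```python
# Monte Carlo sketch of the concatenated idea: the inner code corrects, the
# outer code only detects, and a retransmission is requested on detection.
# An undetected error occurs when inner decoding is wrong yet the outer
# check still passes.  Codes and parameters here are illustrative stand-ins.
import random

def transmit(bits, p):
    """Binary symmetric channel: flip each bit independently with prob. p."""
    return [b ^ (random.random() < p) for b in bits]

def simulate(num_frames=200_000, k=8, p=0.05):
    undetected = 0
    for _ in range(num_frames):
        data = [random.randint(0, 1) for _ in range(k)]
        parity = sum(data) % 2                       # outer single-parity check
        decoded = []
        for b in data + [parity]:
            received = transmit([b] * 3, p)          # inner 3-bit repetition code
            decoded.append(1 if sum(received) >= 2 else 0)  # majority decoding
        if decoded[:k] != data and sum(decoded) % 2 == 0:
            undetected += 1                          # slipped past both codes
    return undetected / num_frames

print(f"estimated P(undetected error) ~ {simulate():.2e}")
```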

  1. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  2. EPIC: an Error Propagation/Inquiry Code

    International Nuclear Information System (INIS)

    Baker, A.L.

    1985-01-01

    The use of a computer program EPIC (Error Propagation/Inquiry Code) will be discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in the Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple straightforward calculations assuming a process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background. However, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated using a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs
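
    The central computation EPIC performs, a materials-balance variance formed as an algebraic sum of per-measurement variances, reduces to a few lines. The sketch below uses invented measurement points and uncertainty values; EPIC itself selects among six variance equations, a detail omitted here.

```python
# Sketch of the variance-propagation step the abstract describes: each
# measurement point contributes a variance, and the materials-balance-area
# (MBA) variance is the algebraic sum of those contributions.  The
# measurement points and uncertainty values below are invented.

measurement_variances = {
    "feed input":       (1000.0 * 0.01) ** 2,   # amount * relative std. dev.
    "product output":   (950.0 * 0.01) ** 2,
    "waste stream":     (40.0 * 0.05) ** 2,
    "ending inventory": (60.0 * 0.02) ** 2,
}

total_variance = sum(measurement_variances.values())
print(f"MBA variance = {total_variance:.2f}, "
      f"std. dev. = {total_variance ** 0.5:.2f}")
```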

  3. Correlated Errors in the Surface Code

    Science.gov (United States)

    Lopez, Daniel; Mucciolo, E. R.; Novais, E.

    2012-02-01

    A milestone step in the development of quantum information technology would be the ability to design and operate a reliable quantum memory. The greatest obstacle to creating such a device has been decoherence due to the unavoidable interaction between the quantum system and its environment. Quantum error correction is therefore an essential ingredient for any quantum information processing device. A great deal of attention has been given to surface codes, since they have very good scaling properties. In this seminar, we discuss the time evolution of a qubit encoded in the logical basis of a surface code. The system interacts with a bosonic environment at zero temperature. Our results show how detrimental spatial and temporal correlations can be to the efficiency of the code.

  4. Enhancement of Unequal Error Protection Properties of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Poulliat Charly

    2007-01-01

    It has been widely recognized in the literature that irregular low-density parity-check (LDPC) codes naturally exhibit an unequal error protection (UEP) behavior. In this paper, we propose a general method to emphasize and control the UEP properties of LDPC codes. The method is based on a hierarchical optimization of the bit-node irregularity profile for each sensitivity class within the codeword, maximizing the average bit-node degree while guaranteeing a minimum degree as high as possible. We show that this optimization strategy is efficient, since the codes that we optimize show better UEP capabilities than codes optimized for the additive white Gaussian noise channel.

  5. Error-Detecting Identification Codes for Algebra Students.

    Science.gov (United States)

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
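
    Two of the identification codes mentioned above have simple weighted-checksum implementations. The sketch below verifies ISBN-10 and UPC-A check digits; the specific examples are well-known valid numbers.

```python
# Sketches of two check-digit schemes the abstract mentions.  Both are
# single-error-detecting codes expressible as a weighted sum modulo a base.

def isbn10_valid(isbn: str) -> bool:
    """ISBN-10: weighted sum 10*d1 + 9*d2 + ... + 1*d10 must be 0 mod 11
    ('X' stands for the value 10 in the final position)."""
    digits = [10 if c == "X" else int(c) for c in isbn.replace("-", "")]
    return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

def upc_valid(upc: str) -> bool:
    """UPC-A: triple the digits in odd positions, add the rest, check mod 10."""
    digits = [int(c) for c in upc]
    return sum(d * (3 if i % 2 == 0 else 1)
               for i, d in enumerate(digits)) % 10 == 0

print(isbn10_valid("0-306-40615-2"))  # True: a valid ISBN-10
print(upc_valid("036000291452"))      # True: a valid UPC-A
print(isbn10_valid("0-306-40615-3"))  # False: single-digit error is detected
```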

  6. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.

  7. Critical lengths of error events in convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    1994-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  8. Critical Lengths of Error Events in Convolutional Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Andersen, Jakob Dahl

    1998-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  9. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
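
    The decoders above operate on interleaved codes, whose defining property is that a burst is spread across the subcodes. The sketch below illustrates only that basic property (not the paper's syndrome-correlation decoding): with interleave depth m, a burst of length m corrupts each subcode in at most one position. Block length and burst position are illustrative.

```python
# Sketch of the interleaving structure the abstract's decoders operate on:
# a burst of length up to m lands as at most one error in each of the m
# subcodes, so single-error-correcting subcodes suffice.

m, n = 4, 8                       # interleave depth, subcode length
codeword = list(range(m * n))     # stand-in symbols; row i = subcode i

# Transmit column by column (interleaving), then corrupt a burst of length m.
transmitted = [codeword[i * n + j] for j in range(n) for i in range(m)]
burst_start = 10
for k in range(burst_start, burst_start + m):
    transmitted[k] = "X"          # burst error hits m consecutive symbols

# De-interleave back into the m subcodes and count errors in each.
subcodes = [[transmitted[j * m + i] for j in range(n)] for i in range(m)]
print([row.count("X") for row in subcodes])   # [1, 1, 1, 1]: one error each
```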

  10. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  11. Hypoxia in the St. Lawrence Estuary: How a Coding Error Led to the Belief that “Physics Controls Spatial Patterns”

    NARCIS (Netherlands)

    Bourgault, D.; Cyr, F.

    2015-01-01

    Two fundamental sign errors were found in a computer code used for studying the oxygen minimum zone (OMZ) and hypoxia in the Estuary and Gulf of St. Lawrence. These errors invalidate the conclusions drawn from the model, and call into question a proposed mechanism for generating the OMZ that challenges classical understanding.

  12. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  13. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined, and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e., fault-tolerant) implementations of certain operations compatible with the error basis.
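
    For a concrete picture of the "natural generalization of the bit-flip/sign-change basis," the sketch below constructs the shift and clock matrices in dimension n and checks the properties that make {X^a Z^b} a unitary error basis. The dimension and the checks are illustrative; the construction itself is the standard generalized-Pauli (Weyl) basis.

```python
# Shift matrix X and clock matrix Z in dimension n generate a unitary error
# basis {X^a Z^b}, reducing to the Pauli X and Z for n = 2.
import numpy as np

def shift_clock(n: int):
    X = np.roll(np.eye(n), 1, axis=0)                  # X|j> = |j+1 mod n>
    omega = np.exp(2j * np.pi / n)
    Z = np.diag(omega ** np.arange(n))                 # Z|j> = omega^j |j>
    return X, Z

n = 3
X, Z = shift_clock(n)
omega = np.exp(2j * np.pi / n)
# Weyl commutation relation: Z X = omega * X Z
print(np.allclose(Z @ X, omega * (X @ Z)))             # True
# The n^2 matrices X^a Z^b form an orthogonal error basis under Tr(A^† B).
basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
         for a in range(n) for b in range(n)]
gram = np.array([[np.trace(A.conj().T @ B) for B in basis] for A in basis])
print(np.allclose(gram, n * np.eye(n * n)))            # True: orthogonal basis
```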

  14. Error floor behavior study of LDPC codes for concatenated codes design

    Science.gov (United States)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  15. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    Science.gov (United States)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.

  16. Combined Coding And Modulation Using Runlength Limited Error ...

    African Journals Online (AJOL)

    In this paper we propose a Combined Coding and Modulation (CCM) scheme employing RLL/ECCs and MPSK modulation, as well as RLL/ECC codes and BFSK/MPSK modulation, with a view to optimizing channel bandwidth. The CCM codes and their trellises are designed and their error performances simulated in AWGN ...

  17. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity.

  18. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts show 0.9% medication errors (264), versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors are mainly caused when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Unidosis carts need to be revised, and a computerized prescription system is needed to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  19. Efficient decoding of random errors for quantum expander codes

    OpenAIRE

    Fawzi, Omar; Grospellier, Antoine; Leverrier, Anthony

    2017-01-01

    We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor, can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottes...

  20. Improved Design of Unequal Error Protection LDPC Codes

    Directory of Open Access Journals (Sweden)

    Sandberg Sara

    2010-01-01

    We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.

  1. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  2. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  3. Thresholds of surface codes on the general lattice structures suffering biased error and loss

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Fujii, Keisuke

    2014-01-01

    A family of surface codes with general lattice structures is proposed. We can control the error tolerances against bit and phase errors asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilbert-Varshamov bound. We find that the error tolerance of the surface codes depends on the connectivity of the underlying lattices; the error chains on a lattice of lower connectivity are easier to correct. On the other hand, the loss tolerance of the surface codes exhibits an opposite behavior; the logical information on a lattice of higher connectivity has more robustness against qubit loss. As a result, we come upon a fundamental trade-off between error and loss tolerances in the family of surface codes with different lattice geometries

  4. Thresholds of surface codes on the general lattice structures suffering biased error and loss

    Energy Technology Data Exchange (ETDEWEB)

    Tokunaga, Yuuki [NTT Secure Platform Laboratories, NTT Corporation, 3-9-11 Midori-cho, Musashino, Tokyo 180-8585, Japan and Japan Science and Technology Agency, CREST, 5 Sanban-cho, Chiyoda-ku, Tokyo 102-0075 (Japan); Fujii, Keisuke [Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan)

    2014-12-04

    A family of surface codes with general lattice structures is proposed. We can control the error tolerances against bit and phase errors asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilbert-Varshamov bound. We find that the error tolerance of the surface codes depends on the connectivity of the underlying lattices; the error chains on a lattice of lower connectivity are easier to correct. On the other hand, the loss tolerance of the surface codes exhibits an opposite behavior; the logical information on a lattice of higher connectivity has more robustness against qubit loss. As a result, we come upon a fundamental trade-off between error and loss tolerances in the family of surface codes with different lattice geometries.

  5. Quantum error-correcting code for ternary logic

    Science.gov (United States)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as a qutrit. A qutrit has three basis states, so a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.

  6. Unitary Application of the Quantum Error Correction Codes

    International Nuclear Information System (INIS)

    You Bo; Xu Ke; Wu Xiaohua

    2012-01-01

    For applying a perfect code to transmit quantum information over a noisy channel, the standard protocol contains four steps: the encoding, the noise channel, the error-correction operation, and the decoding. In the present work, we show that this protocol can be simplified. The error-correction operation is not necessary if the decoding is realized by the so-called complete unitary transformation. We also offer a quantum circuit, which can correct arbitrary single-qubit errors.

  7. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions, which occur at high error rates, for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern. This is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumptions, performance evaluation and estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods, leading to efficient detection algorithms for burst errors using cyclic redundancy check (CRC) codes. The definition of burst error is introduced using three different models. Among the three burst error models, the mathematical model is used in this study. The probability density function f(b) of a burst error of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through the use of a computer simulation model within CRC block burst errors. The simulation result shows that the mean block burst error tends to approach the pattern of the burst error which random bit errors generate.
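
    Since the paper's analysis is built around CRC codes, a concrete bitwise implementation may help. The sketch below computes CRC-16/ARC, a common member of the CRC-n family chosen here for illustration (the paper's specific parameters are not given), and shows that a burst-corrupted message yields a different checksum.

```python
# Bitwise sketch of a cyclic redundancy check.  CRC-16/ARC (polynomial
# 0x8005, reflected form 0xA001, zero initial value) is used as a concrete,
# common choice from the CRC-n family the abstract analyzes.

def crc16(data: bytes, poly: int = 0xA001, crc: int = 0x0000) -> int:
    """Reflected (LSB-first) CRC-16 computed bit by bit."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # If the low bit is set, shift and fold in the polynomial.
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc

message = b"123456789"
print(hex(crc16(message)))         # 0xbb3d, the standard CRC-16/ARC check value

# A burst error flips a run of adjacent bits; any burst of length <= 16
# changes the CRC-16 value, so it is detected.
corrupted = bytes([message[0] ^ 0b0111_1000]) + message[1:]
print(crc16(corrupted) != crc16(message))   # True
```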

  8. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  9. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  10. Recognition Errors Control in Biometric Identification Cryptosystems

    Directory of Open Access Journals (Sweden)

    Vladimir Ivanovich Vasilyev

    2015-06-01

    A biometric cryptosystem designed on the basis of a fuzzy extractor, in which the main disadvantages of biometric and cryptographic systems are absent, is considered. The main idea of this work is the control of identity recognition errors with the use of a fuzzy extractor which operates with a Reed-Solomon correcting code. The fingerprint features vector is considered as the biometric user identifier.

  11. SimCommSys: taking the errors out of error-correcting code simulations

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate, and Reed-Solomon codes. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator, and a system benchmark. Finally, SimCommSys also provides a number of shell and python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use it, including the implementation of a simple codec, the specification of communication systems, and their simulation.

  12. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
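
    The probability of undetectable error referred to above follows directly from the weight distribution: an error pattern goes undetected exactly when it equals a nonzero codeword. The sketch below evaluates that sum for the small (7,4) Hamming code, whose weight distribution is known, as a stand-in for the longer shortened Hamming codes treated in the paper.

```python
# Probability of undetectable error on a binary symmetric channel with
# bit-error rate p, computed from a code's weight distribution {A_i}:
#     P_ud(p) = sum over nonzero weights i of A_i * p^i * (1-p)^(n-i)

def prob_undetected(weights: dict, n: int, p: float) -> float:
    return sum(a * p**i * (1 - p) ** (n - i)
               for i, a in weights.items() if i > 0)

hamming74 = {0: 1, 3: 7, 4: 7, 7: 1}   # weight distribution of the (7,4) code
for p in (1e-5, 1e-3, 0.5):
    print(f"p={p:g}:  P_ud = {prob_undetected(hamming74, 7, p):.3e}")
# At p = 1/2 this gives 15/128, the fraction of nonzero words that are codewords.
```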

  13. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into the nanoscale regime, it is impossible to guarantee a perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes a detailed description of key issues in NoC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...

  14. Constrained motion estimation-based error resilient coding for HEVC

    Science.gov (United States)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.

  15. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
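
    The mechanism described above is easy to make concrete with a small code. In the sketch below, a 7-bit source block is treated as an error pattern and compressed to its 3-bit syndrome under the (7,4) Hamming parity-check matrix; blocks of weight at most one are the coset leaders and are recovered exactly. The matrix is one standard form, not necessarily the paper's choice.

```python
# Syndrome-source-coding sketch: the source block is treated as an error
# pattern e, and its syndrome s = H e (mod 2) is the compressed data.
# With the (7,4) Hamming code, 7 bits compress to 3.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix of Hamming(7,4):
              [0, 1, 1, 0, 0, 1, 1],     # column j is the binary expansion
              [0, 0, 0, 1, 1, 1, 1]])    # of the position j+1

def compress(block):                      # 7 source bits -> 3-bit syndrome
    return H @ block % 2

def decompress(syndrome):                 # 3 bits -> minimum-weight source block
    block = np.zeros(7, dtype=int)
    position = int("".join(map(str, syndrome[::-1])), 2)  # column index + 1
    if position:
        block[position - 1] = 1
    return block

source = np.array([0, 0, 0, 0, 1, 0, 0])  # sparse source block (weight <= 1)
s = compress(source)
print(s, np.array_equal(decompress(s), source))   # [1 0 1] True
```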

  16. Runtime Detection of C-Style Errors in UPC Code

    Energy Technology Data Exchange (ETDEWEB)

    Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions for each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  17. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)

  18. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  19. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  20. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    Science.gov (United States)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  1. Energy Efficient Error-Correcting Coding for Wireless Systems

    NARCIS (Netherlands)

    Shao, X.

    2010-01-01

    The wireless channel is a hostile environment. The transmitted signal not only suffers multi-path fading but also noise and interference from other users of the wireless channel. That causes unreliable communications. To achieve high-quality communications, error correcting coding is required.

  2. SHEAN (Simplified Human Error Analysis code) and automated THERP

    International Nuclear Information System (INIS)

    Wilson, J.R.

    1993-01-01

    One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP, but a valuable screening tool. ASEP involves answering simple questions about the scenario in question, and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required, and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. That WINCO did, calling the code SHEAN, for Simplified Human Error ANalysis. The code was done in TURBO Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN

  3. Verification of unfold error estimates in the unfold operator code

    International Nuclear Information System (INIS)

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. copyright 1997 American Institute of Physics
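
    The comparison described above is easy to reproduce in miniature. In the sketch below, a regularized least-squares unfold stands in for the UFO algorithm, the response functions and blackbody spectrum are toy stand-ins, and the two uncertainty estimates (error-matrix propagation versus 100 Monte Carlo refoldings with 5% Gaussian deviates) are compared directly.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy inputs: 5 overlapping Gaussian response functions on a 40-bin
        # energy grid, and a 10 keV blackbody-like test spectrum.
        E = np.linspace(0.5, 40.0, 40)
        R = np.array([np.exp(-0.5 * ((E - c) / 6.0) ** 2) for c in (5, 10, 15, 20, 25)])
        s_true = E ** 2 / (np.exp(E / 10.0) - 1.0)
        d_true = R @ s_true
        sigma = 0.05 * d_true            # 5% (standard deviation) data imprecision

        # "Built-in" estimate: propagate the data covariance through the linear
        # unfold s = M d (regularized, since the problem is underdetermined).
        W = np.diag(1.0 / sigma ** 2)
        M = np.linalg.solve(R.T @ W @ R + 1e-3 * np.eye(E.size), R.T @ W)
        std_builtin = np.sqrt(np.diag(M @ np.diag(sigma ** 2) @ M.T))

        # Monte Carlo estimate: rerun the unfold on 100 perturbed data sets.
        unfolds = np.array([M @ (d_true + rng.normal(0.0, sigma)) for _ in range(100)])
        std_mc = unfolds.std(axis=0)

        # The two estimates agree to within the sampling resolution of 100 trials.
        print(np.c_[std_builtin, std_mc][:5])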

  4. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    Full Text Available In distributed video coding the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of the transmission errors for both the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, in both cases of no error protection and simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  5. New construction of quantum error-avoiding codes via group representation of quantum stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Hailin [Wenzhou University, College of Physics and Electronic Information Engineering, Wenzhou (China); Southeast University, National Mobile Communications Research Laboratory, Nanjing (China); Guilin University of Electronic Technology, Ministry of Education, Key Laboratory of Cognitive Radio and Information Processing, Guilin (China); Zhang, Zhongshan [University of Science and Technology Beijing, Beijing Engineering and Technology Research Center for Convergence Networks and Ubiquitous Services, Beijing (China); Chronopoulos, Anthony Theodore [University of Texas at San Antonio, Department of Computer Science, San Antonio, TX (United States)

    2017-10-15

    In quantum computing, nice error bases as generalization of the Pauli basis were introduced by Knill. These bases are known to be projective representations of finite groups. In this paper, we propose a group representation approach to the study of quantum stabilizer codes. We utilize this approach to define decoherence-free subspaces (DFSs). Unlike previous studies of DFSs, this type of DFSs does not involve any spatial symmetry assumptions on the system-environment interaction. Thus, it can be used to construct quantum error-avoiding codes (QEACs) that are automatically fault tolerant. We also propose a new simple construction of QEACs and subsequently develop several classes of QEACs. Finally, we present numerical simulation results for the logical error rate versus the physical error rate, characterizing the fidelity performance of these QEACs. Our study demonstrates that DFSs-based QEACs are capable of providing a generalized and unified framework for error-avoiding methods. (orig.)

  6. Fractal image coding by an approximation of the collage error

    Science.gov (United States)

    Salih, Ismail; Smith, Stanley H.

    1998-12-01

    In fractal image compression an image is coded as a set of contractive transformations, and is guaranteed to generate an approximation to the original image when iteratively applied to any initial image. In this paper we present a method for mapping similar regions within an image by an approximation of the collage error; that is, range blocks can be approximated by a linear combination of domain blocks.
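
    The core fitting step can be stated in a few lines: approximate each vectorized range block by a least-squares linear combination of candidate domain blocks and record the residual as the collage error. The block sizes and random data below are illustrative only.

        import numpy as np

        def collage_fit(range_block, domain_blocks):
            # Least-squares coefficients and collage error for one range block.
            D = np.stack([b.ravel() for b in domain_blocks], axis=1)
            r = range_block.ravel()
            coeffs, *_ = np.linalg.lstsq(D, r, rcond=None)
            return coeffs, np.linalg.norm(D @ coeffs - r)

        rng = np.random.default_rng(1)
        domains = [rng.random((8, 8)) for _ in range(4)]
        target = 0.6 * domains[0] + 0.3 * domains[2] + 0.01 * rng.random((8, 8))
        c, e = collage_fit(target, domains)
        print(c.round(2), e)    # coefficients near (0.6, 0, 0.3, 0), small error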

  7. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension...

  8. Quantum secret sharing based on quantum error-correcting codes

    International Nuclear Information System (INIS)

    Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu

    2011-01-01

    Quantum secret sharing (QSS) is a procedure of sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into the QECC, these schemes can prevent intercept-resend attacks and be implemented over some noisy channels. (general)

  9. Upper bounds on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2004-01-01

    We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error...

  10. An approach to improving the structure of error-handling code in the linux kernel

    DEFF Research Database (Denmark)

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function. We present an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes.

  11. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m − 1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  12. An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code

    Directory of Open Access Journals (Sweden)

    Hendy Briantoro

    2016-04-01

    Full Text Available This paper presents error minimization in an OFDM system. A conventional system usually uses channel coding such as a BCH code or a convolutional code, but the performance of these codes is not good in an OFDM implementation. The bit error rate of the OFDM system without channel coding is 5.77%, and a convolutional code with code rate 1/2 reduces the error bits only to 3.85%. We therefore propose an OFDM system with a Modified Convolutional Code. In this implementation, we used Software Defined Radio (SDR), namely the Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the Modified Convolutional Code is able to recover all received characters, decreasing the error bits to 0%. The performance gain of the Modified Convolutional Code is about 1 dB at a BER of 10⁻⁴ over the BCH code and the convolutional code, so the Modified Convolutional Code performs better than the BCH code or the convolutional code. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP
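
    Since the record does not specify the modification itself, the sketch below shows only the conventional baseline being compared against: a standard rate-1/2, constraint-length-3 convolutional encoder with the common (7, 5) octal generators.

        def conv_encode(bits, generators=(0b111, 0b101)):
            # Rate-1/2 convolutional encoder: two output bits per input bit,
            # each a parity over the 3-bit shift-register state.
            state, out = 0, []
            for b in bits:
                state = ((state << 1) | b) & 0b111
                for g in generators:
                    out.append(bin(state & g).count("1") % 2)
            return out

        print(conv_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]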

  13. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also, the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  14. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  15. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Science.gov (United States)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step-size hn+1 = h(tn; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property under a discontinuous step-size policy that does not allow a change in the size of the step if the step-size ratio between two consecutive steps is close to unity is carried out. This theory is applied to obtain global error estimations in a few problems that have been solved with
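
    The step-size policy under discussion can be made concrete with a minimal embedded pair. The sketch below uses an Euler (order 1)/Heun (order 2) pair with the classical update h_new = 0.9 h (tol/err)^(1/2); running it at several tolerances exhibits the proportionality of the global error to a power of the tolerance. The pair, safety factor, and step-size bounds are illustrative choices, not those of any particular production code.

        import math

        def adaptive_heun(f, t0, y0, t_end, tol):
            t, y, h = t0, y0, 1e-2
            while t < t_end:
                h = min(h, t_end - t)
                k1 = f(t, y)
                k2 = f(t + h, y + h * k1)
                y_low, y_high = y + h * k1, y + 0.5 * h * (k1 + k2)
                err = abs(y_high - y_low)          # local error estimate
                if err <= tol:                     # accept the step, advancing
                    t, y = t + h, y_high           # with the higher-order value
                h *= 0.9 * min(5.0, max(0.2, (tol / max(err, 1e-16)) ** 0.5))
            return y

        f = lambda t, y: -y                        # y' = -y, exact solution e^(-t)
        for tol in (1e-4, 1e-6, 1e-8):
            y = adaptive_heun(f, 0.0, 1.0, 5.0, tol)
            print(tol, abs(y - math.exp(-5.0)))    # global error shrinks with tol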

  16. Error quantification of the axial nodal diffusion kernel of the DeCART code

    International Nuclear Information System (INIS)

    Cho, J. Y.; Kim, K. S.; Lee, C. C.

    2006-01-01

    This paper quantifies the transport effects involved in the axial nodal diffusion kernel of the DeCART code. The transport effects are itemized into three effects: the homogenization, the diffusion, and the nodal effects. A five pin model consisting of four fuel pins and one non-fuel pin is demonstrated to quantify the transport effects. The transport effects are analyzed for three problems, the single pin (SP), guide tube (GT) and control rod (CR) problems, by replacing the non-fuel pin with a fuel pin, a guide tube and a control rod pin, respectively. The homogenization and diffusion effects are estimated to be about -4 and -50 pcm for the eigenvalue, and less than 2 % for the node power. The nodal effect on the eigenvalue is evaluated to be about -50 pcm in the SP and GT problems, and +350 pcm in the CR problem. Regarding the node power, this effect induces about a 3 % error in the SP and GT problems, and about a 20 % error in the CR problem. The large power error in the CR problem is due to the plane thickness, and it can be decreased by using the adaptive plane size. From the error quantification, it is concluded that the homogenization and the diffusion effects are not controllable if DeCART maintains the diffusion kernel for the axial solution, but the nodal effect is controllable by introducing the adaptive plane size scheme. (authors)

  17. Codon Distribution in Error-Detecting Circular Codes

    Directory of Open Access Journals (Sweden)

    Elena Fimmel

    2016-03-01

    Full Text Available In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C3 and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C3 codes to maximal self-complementary circular codes.

  18. Codon Distribution in Error-Detecting Circular Codes.

    Science.gov (United States)

    Fimmel, Elena; Strüngmann, Lutz

    2016-03-15

    In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick's hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C³ and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C³ codes to maximal self-complementary circular codes.

  19. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
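
    The extrapolation step mentioned above amounts to weighting the measured fraction of uncorrectable patterns at each error weight by the binomial probability of that weight. A sketch follows; the fractions f_w are made-up placeholders for simulation output, and n = 255 is an arbitrary block length.

        from math import comb

        def block_error_rate(p, n, frac_uncorrectable):
            # BER(p) = sum_w C(n, w) p^w (1-p)^(n-w) f_w, where f_w is the
            # measured fraction of weight-w error patterns the decoder misses.
            return sum(comb(n, w) * p ** w * (1 - p) ** (n - w) * f
                       for w, f in frac_uncorrectable.items())

        f_w = {1: 0.0, 2: 0.0, 3: 0.01, 4: 0.08, 5: 0.30}   # hypothetical values
        for p in (1e-2, 1e-3, 1e-4):
            print(p, block_error_rate(p, n=255, frac_uncorrectable=f_w))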

  20. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  1. Remote one-qubit information concentration and decoding of operator quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsu Liyi

    2007-01-01

    We propose the general scheme of remote one-qubit information concentration. To achieve the task, the Bell-correlated mixed states are exploited. In addition, the nonremote one-qubit information concentration is equivalent to the decoding of the quantum error-correction code. Here we propose how to decode the stabilizer codes. In particular, the proposed scheme can be used for the operator quantum error-correction codes. The encoded state can be recreated on the errorless qubit, regardless of how many bit-flip errors and phase-flip errors have occurred.

  2. Methodology for bus layout for topological quantum error correcting codes

    Energy Technology Data Exchange (ETDEWEB)

    Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)

    2016-12-15

    Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers; we use them as basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that require solving the linear program for a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)

  3. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of the TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness in retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  4. Unequal error control scheme for dimmable visible light communication systems

    Science.gov (United States)

    Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan

    2017-01-01

    Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic-interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues of such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of the simulations that are performed using additive white Gaussian noises (AWGNs) with different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base layer recovery compared to the use of the equal error control (EEC) scheme for different dimming target values and therefore afford robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.

  5. JPEG2000-coded image error concealment exploiting convex sets projections.

    Science.gov (United States)

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of errors are the most annoying but can be concealed by exploiting the signal spatial correlation, as in a number of techniques proposed in the past; the second are less annoying but more difficult to address; the latter are often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that a uniform LP filtering brought some undesired side effects that negatively compensated the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
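
    The two alternating projections lend themselves to a compact sketch. Below, a single-level orthogonal Haar transform stands in for JPEG2000's wavelet, and a uniform 3x3 mean filter stands in for the paper's edge-adaptive filter; the corrupted-subband mask is likewise illustrative.

        import numpy as np

        def haar2(x):                    # one-level orthogonal 2-D Haar analysis
            def step(a):                 # transform along axis 0
                return np.concatenate((a[0::2] + a[1::2], a[0::2] - a[1::2])) / np.sqrt(2)
            return step(step(x).T).T

        def ihaar2(c):                   # exact inverse of haar2
            def istep(a):
                n = a.shape[0] // 2
                out = np.empty_like(a)
                out[0::2] = (a[:n] + a[n:]) / np.sqrt(2)
                out[1::2] = (a[:n] - a[n:]) / np.sqrt(2)
                return out
            return istep(istep(c.T).T)

        rng = np.random.default_rng(0)
        img = rng.random((16, 16))
        c_true = haar2(img)
        mask = np.ones_like(c_true, dtype=bool)
        mask[8:, 8:] = False             # pretend the HH subband was corrupted

        x = ihaar2(np.where(mask, c_true, 0.0))     # damaged reconstruction
        for _ in range(20):
            # Projection 1: spatial smoothness (uniform 3x3 mean filter).
            pad = np.pad(x, 1, mode="edge")
            x = sum(pad[i:i + 16, j:j + 16] for i in range(3) for j in range(3)) / 9.0
            # Projection 2: restore the known, uncorrupted wavelet coefficients.
            c = haar2(x)
            c[mask] = c_true[mask]
            x = ihaar2(c)
        print(np.abs(x - img).mean())    # residual error of the concealed image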

  6. The statistical significance of error probability as determined from decoding simulations for long codes

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
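
    The natural extension referred to above is, in modern terms, an exact (Clopper-Pearson) confidence interval, which remains meaningful even for one or two observed errors. A sketch follows, assuming SciPy is available; the trial count is illustrative.

        from scipy.stats import beta

        def clopper_pearson(k, n, conf=0.95):
            # Exact confidence interval for an error probability estimated
            # from k observed errors in n decoding trials.
            a = (1.0 - conf) / 2.0
            lo = 0.0 if k == 0 else beta.ppf(a, k, n - k + 1)
            hi = 1.0 if k == n else beta.ppf(1.0 - a, k + 1, n - k)
            return lo, hi

        # Two errors in a million trials: roughly (2.4e-7, 7.2e-6), i.e. even
        # two observed errors pin the error probability down usefully.
        print(clopper_pearson(2, 10 ** 6))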

  7. Is a genome a codeword of an error-correcting code?

    Directory of Open Access Journals (Sweden)

    Luzinete C B Faria

    Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
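
    The codeword test itself is elementary: a word belongs to a linear code exactly when its syndrome vanishes. The sketch below does this for the binary [7,4] Hamming code; the cited work actually maps the four bases to a quaternary alphabet, which this binary illustration does not reproduce.

        import numpy as np

        # Parity-check matrix of the binary [7,4] Hamming code.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def is_codeword(word):
            # A word is a codeword iff its syndrome H w^T is zero mod 2.
            return not np.any(H @ np.asarray(word) % 2)

        print(is_codeword([0, 0, 0, 0, 0, 0, 0]))   # True
        print(is_codeword([1, 1, 0, 0, 0, 0, 0]))   # False (weight-2 error)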

  8. Equidistant Linear Network Codes with maximal Error-protection from Veronese Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    2012-01-01

    Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang, in Coding for errors and erasures in random network coding (IEEE Transactions on Information Theory), introduced a metric on the set of vector spaces. We construct explicit families of vector spaces of constant dimension where any pair of distinct vector spaces are equidistant in this metric. The parameters of the resulting linear network codes, which have maximal error-protection, are determined.

  9. Gallager error-correcting codes for binary asymmetric channels

    International Nuclear Information System (INIS)

    Neri, I; Skantzos, N S; Bollé, D

    2008-01-01

    We derive critical noise levels for Gallager codes on asymmetric channels as a function of the input bias and the temperature. Using a statistical mechanics approach we study the space of codewords and the entropy in the various decoding regimes. We further discuss the relation of the convergence of the message passing algorithm with the endogenous property and complexity, characterizing solutions of recursive equations of distributions for cavity fields

  10. Relaxed error control in shape optimization that utilizes remeshing

    CSIR Research Space (South Africa)

    Wilke, DN

    2013-02-01

    Full Text Available Shape optimization strategies based on error indicators usually require strict error control for every computed design during the optimization run. The strict error control serves two purposes. Firstly, it allows for the accurate computation...

  11. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    Full Text Available A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.

  12. Iterative Phase Optimization of Elementary Quantum Error Correcting Codes (Open Access, Publisher’s Version)

    Science.gov (United States)

    2016-08-24

    The scheme is applied to the seven-qubit Steane code, which also represents the smallest instance of a 2D topological color code.

  13. Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor

    2005-01-01

    We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.

  14. Biometrics encryption combining palmprint with two-layer error correction codes

    Science.gov (United States)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint with two-layer error correction codes is proposed. Firstly, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors. The second layer uses a cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, they are fused together by an XOR operation. The information is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that it can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
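
    The XOR binding described above is the classical fuzzy-commitment construction. In the sketch below a simple repetition code stands in for the paper's convolutional-plus-cyclic pair, and random bit vectors stand in for palmprint features; the flipped positions model the fuzziness between enrollment and verification.

        import numpy as np

        rng = np.random.default_rng(0)

        def encode(key_bits, rep=5):         # repetition code as a stand-in
            return np.repeat(key_bits, rep)  # for the two-layer ECC

        def decode(bits, rep=5):             # majority vote per key bit
            return (bits.reshape(-1, rep).sum(axis=1) > rep // 2).astype(int)

        key = rng.integers(0, 2, 16)
        enroll = rng.integers(0, 2, 16 * 5)          # enrollment features
        locked = encode(key) ^ enroll                # stored on the smart card

        query = enroll.copy()                        # fresh, slightly different
        query[[0, 1, 7, 12, 30, 55]] ^= 1            # template (6 flipped bits)

        recovered = decode(locked ^ query)           # XOR, then error-correct
        print(np.array_equal(recovered, key))        # True: key fully recovered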

  15. Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.

    2014-01-01

    We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.

  16. The Impact of Bar Code Medication Administration Technology on Reported Medication Errors

    Science.gov (United States)

    Holecek, Andrea

    2011-01-01

    The use of bar-code medication administration technology is on the rise in acute care facilities in the United States. The technology is purported to decrease medication errors that occur at the point of administration. How significantly this technology affects actual rate and severity of error is unknown. This descriptive, longitudinal research…

  17. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  18. Confidentiality of 2D Code using Infrared with Cell-level Error Correction

    Directory of Open Access Journals (Sweden)

    Nobuyuki Teraura

    2013-03-01

    Full Text Available Optical information media printed on paper use printing materials that absorb visible light. A conventional 2D code may be encrypted, but it can still be copied. Hence, we envisage an information medium that cannot be copied and thereby offers high security. At the surface, the normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, which absorb certain distinct wavelengths, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at cell level can be introduced.

  19. A Novel Error Resilient Scheme for Wavelet-based Image Coding Over Packet Networks

    OpenAIRE

    WenZhu Sun; HongYu Wang; DaXing Qian

    2012-01-01

    This paper presents a robust transmission strategy for wavelet-based scalable bit streams over packet erasure channels. By taking advantage of bit plane coding and multiple description coding, the proposed strategy adopts layered multiple description coding (LMDC) for embedded wavelet coders to improve the error resistance of the important bit planes in the sense of the D(R) function. Then, the post-compression rate-distortion (PCRD) optimization process is used to impro...

  20. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

    Full Text Available In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements for channel coding, is proposed. It conceals errors in block-based image coding systems by using a neural network. In the proposed algorithm, only the intra-frame information is used for reconstruction of an image with isolated damaged blocks. The information of the pixels surrounding a damaged block is used to recover the errors using neural network models. Computer simulation results show that the visual quality and the MSE evaluation of a reconstructed image are significantly improved using the proposed EC algorithm. We also propose a simple non-neural approach for comparison.

  1. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes

    Science.gov (United States)

    Elhelw, Amr M.; Badran, Ehab F.

    2015-01-01

    High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate. PMID:26018504

  2. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes.

    Directory of Open Access Journals (Sweden)

    Amr M Elhelw

    Full Text Available High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate.

  3. Error Recovery Properties and Soft Decoding of Quasi-Arithmetic Codes

    Directory of Open Access Journals (Sweden)

    Christine Guillemot

    2007-08-01

    Full Text Available This paper first introduces a new set of aggregated state models for soft-input decoding of quasi-arithmetic (QA) codes with a termination constraint. The decoding complexity with these models is linear with the sequence length. The aggregation parameter controls the tradeoff between decoding performance and complexity. It is shown that close-to-optimal decoding performance can be obtained with low values of the aggregation parameter, that is, with a complexity which is significantly reduced with respect to optimal QA bit/symbol models. The choice of the aggregation parameter depends on the synchronization recovery properties of the QA codes. This paper thus describes a method to estimate the probability mass function (PMF) of the gain/loss of symbols following a single bit error (i.e., of the difference between the number of encoded and decoded symbols). The entropy of the gain/loss turns out to be the average amount of information conveyed by a length constraint on both the optimal and aggregated state models. This quantity allows us to choose the value of the aggregation parameter that will lead to close-to-optimal decoding performance. It is shown that the optimum position for the length constraint is not the last time instant of the decoding process. This observation leads to the introduction of a new technique for robust decoding of QA codes with redundancy which turns out to outperform techniques based on the concept of forbidden symbol.

  4. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also, the lower encoding rate for LDPC code offers better error characteristics.

  5. Multichannel Filtered-X Error Coded Affine Projection-Like Algorithm with Evolving Order

    Directory of Open Access Journals (Sweden)

    J. G. Avalos

    2017-01-01

    Full Text Available Affine projection (AP) algorithms are commonly used to implement active noise control (ANC) systems because they provide fast convergence. However, their high computational complexity can restrict their use in certain practical applications. The Error Coded Affine Projection-Like (ECAP-L) algorithm has been proposed to reduce the computational burden while maintaining the speed of AP, but no version of this algorithm has been derived for active noise control, for which the adaptive structures are very different from those of other configurations. In this paper, we introduce a version of the ECAP-L for single-channel and multichannel ANC systems. The proposed algorithm is implemented using the conventional filtered-x scheme, which incurs a lower computational cost than the modified filtered-x structure, especially for multichannel systems. Furthermore, we present an evolutionary method that dynamically decreases the projection order in order to reduce the dimensions of the matrix used in the algorithm’s computations. Experimental results demonstrate that the proposed algorithm yields a convergence speed and a final residual error similar to those of AP algorithms. Moreover, it achieves meaningful computational savings, leading to simpler hardware implementation of real-time ANC applications.

  6. Comparison of Bit Error Rate of Line Codes in NG-PON2

    Directory of Open Access Journals (Sweden)

    Tomas Horvath

    2016-05-01

    Full Text Available This article focuses on the simulation and comparison of the line codes NRZ (Non Return to Zero), RZ (Return to Zero) and Miller's code for NG-PON2 (Next-Generation Passive Optical Network Stage 2). Our article provides solutions with Q-factor, BER (Bit Error Rate) and bandwidth comparison. Line codes are the most important part of communication over optical fibre; their main role is digital signal representation. NG-PON2 networks use optical fibres for communication, which is why OptSim v5.2 is used for the simulation.
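
    For reference, the three encodings can be generated in a few lines. The conventions below (two samples per bit, the start-of-bit transition rule for consecutive zeros in Miller's code, and the initial signal level) are common textbook choices, not taken from the article.

        import numpy as np

        def nrz(bits):                   # high for 1, low for 0, full bit slot
            return np.repeat([1 if b else -1 for b in bits], 2)

        def rz(bits):                    # 1 -> high then return to zero; 0 -> zero
            return np.array([v for b in bits for v in ((1, 0) if b else (0, 0))])

        def miller(bits):                # delay modulation, two half-slots per bit
            level, prev, out = 1, 1, []
            for b in bits:
                if b:                    # '1': transition in mid-bit
                    out += [level, -level]
                    level = -level
                else:
                    if prev == 0:        # '00': transition at the bit boundary
                        level = -level
                    out += [level, level]
                prev = b
            return np.array(out)

        bits = [1, 0, 1, 1, 0, 0, 1]
        print(nrz(bits)); print(rz(bits)); print(miller(bits))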

  7. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Science.gov (United States)

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  8. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available The minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) related to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing an RF link and On-Off Keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, via the bit error rate and the SNR, for various numbers of users (receivers).

  9. Fast Erasure and Error decoding of Algebraic Geometry Codes up to the Feng-Rao Bound

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Sakata, S.; Leonard, D.

    1996-01-01

    This paper gives an errata (that is, erasure-and-error) decoding algorithm for one-point algebraic geometry codes up to the Feng-Rao designed minimum distance, using Sakata's multidimensional generalization of the Berlekamp-Massey algorithm and the voting procedure of Feng and Rao.

  10. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and three flows are combined in one solution. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and the handling of large motion. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB respectively. For a GOP size of 2

  11. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes, implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  12. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    Science.gov (United States)

    Liebovitch, Larry

    1998-03-01

    The longest term correlations in living systems are the information stored in DNA which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated the complementary base is inserted in the new strand. Sometimes the wrong base is inserted that sticks out disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, that slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises the interesting mathematical problem: How does one determine whether some symbols in a sequence of symbols are a function of other symbols. It also bears on the issue of determining algorithmic complexity: What is the function that generates the shortest algorithm for reproducing the symbol sequence. The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test if some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C. We did not find

  13. Error-controlled adaptive finite elements in solid mechanics

    National Research Council Canada - National Science Library

    Stein, Erwin; Ramm, E

    2003-01-01

Contents include: error-controlled adaptive finite-element methods; missing features and properties of today's general purpose FE programs for structural ...

  14. Measuring and detecting errors in occupational coding: an analysis of SHARE data

    NARCIS (Netherlands)

    Belloni, M.; Brugiavini, A.; Meschi, E.; Tijdens, K.

    2016-01-01

    This article studies coding errors in occupational data, as the quality of this data is important but often neglected. In particular, we recoded open-ended questions on occupation for last and current job in the Dutch sample of the “Survey of Health, Ageing and Retirement in Europe” (SHARE) using a

  15. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    NARCIS (Netherlands)

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

    textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  16. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    Czech Academy of Sciences Publication Activity Database

    Gál, A.; Hansen, A. K.; Koucký, Michal; Pudlák, Pavel; Viola, E.

    2013-01-01

    Roč. 59, č. 10 (2013), s. 6611-6627 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : bounded-depth circuits * error-correcting codes * hashing Subject RIV: BA - General Mathematics Impact factor: 2.650, year: 2013 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6578188

  17. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    Czech Academy of Sciences Publication Activity Database

    Gál, A.; Hansen, A. K.; Koucký, Michal; Pudlák, Pavel; Viola, E.

    2013-01-01

Roč. 59, č. 10 (2013), s. 6611-6627 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords: bounded-depth circuits * error-correcting codes * hashing Subject RIV: BA - General Mathematics Impact factor: 2.650, year: 2013 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6578188

  18. Links between N-modular redundancy and the theory of error-correcting codes

    Science.gov (United States)

    Bobin, V.; Whitaker, S.; Maki, G.

    1992-01-01

N-Modular Redundancy (NMR) is one of the best known fault tolerance techniques. Replication of a module to achieve fault tolerance is in some ways analogous to the use of a repetition code, where an information symbol is replicated as parity symbols in a codeword. Linear Error-Correcting Codes (ECC) use linear combinations of information symbols as parity symbols, which are used to generate syndromes for error patterns. These observations indicate links between the theory of ECC and the use of hardware redundancy for fault tolerance. In this paper, we explore some of these links and show examples of NMR systems where identification of good and failed elements is accomplished in a manner similar to error correction using linear ECCs.
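
    The analogy can be made concrete in a few lines of Python. The hypothetical triple-modular-redundancy voter below decodes exactly like a rate-1/3 repetition code, and, as in the linear-ECC setting, the pattern of disagreement identifies the failed element:

    ```python
    def tmr_vote(outputs):
        """Majority-vote three module outputs (bit lists), the NMR analogue of
        decoding a rate-1/3 repetition code, and flag any disagreeing module."""
        decoded = [1 if sum(bits) >= 2 else 0 for bits in zip(*outputs)]
        failed = {i for i, out in enumerate(outputs) if out != decoded}
        return decoded, failed

    # Module 0 suffers a single-bit fault; the vote masks it and identifies it.
    print(tmr_vote([[1, 0, 1, 1], [1, 1, 1, 1], [1, 0, 1, 1]]))
    ```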

  19. Coded ultrasonic remote control without batteries

    International Nuclear Information System (INIS)

    Gerhardy, C; Burlage, K; Schomburg, W K

    2009-01-01

A concept for battery-less remote controls has been developed based on mechanically actuated beams and micro whistles generating ultrasound signals. These signals need to be frequency or time coded to increase the number of signals which can be distinguished from each other and from environmental ultrasound. Several designs for generating coded ultrasonic signals have been investigated.

  20. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver due to synchronisation errors. They determine an appropriate choice of state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance achieved when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
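
    The drift distribution that drives the choice of state space limits can be illustrated with a Monte Carlo stand-in; the paper derives it analytically, and the insertion/deletion probabilities below are illustrative, not the paper's channel parameters:

    ```python
    import random
    from collections import Counter

    def drift_distribution(n_bits, p_ins, p_del, trials=20000, seed=0):
        """Monte Carlo estimate of the transmitter/receiver drift (insertions
        minus deletions) after n_bits, of the kind used to choose MAP decoder
        state-space limits."""
        rng = random.Random(seed)
        counts = Counter()
        for _ in range(trials):
            drift = 0
            for _ in range(n_bits):
                r = rng.random()
                if r < p_ins:
                    drift += 1
                elif r < p_ins + p_del:
                    drift -= 1
            counts[drift] += 1
        return {d: c / trials for d, c in sorted(counts.items())}

    # Symmetric state-space limits can then be chosen to cover, say,
    # 99.99% of the estimated probability mass.
    print(drift_distribution(n_bits=100, p_ins=0.01, p_del=0.01))
    ```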

  1. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal

    2013-01-01

We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: 1) if d = 2, then w = Θ(n (lg n / lg lg n)^2); 2) if d = 3, then w...

  2. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal

    2012-01-01

We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) if d = 2, then w = Θ(n (log n / log log n)^2). (2) If d...

  3. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

Model-based image coding has received extensive attention due to its high subjective image quality and low bit rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives error criteria for the motion parameters. The facial motion model comprises three parts: the global 3-D rigid motion of the head; non-rigid translation motion in the jaw area; and local non-rigid expression motion in the eye and mouth areas. The feature points are selected automatically by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth, and the number of feature points is adjusted adaptively. The jaw translation motion is tracked through the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and an error function of the contour transition-turn rate used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  4. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Science.gov (United States)

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level, and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless links. These results enable the use of this technique in a wireless link control loop.

  5. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level, and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless links. These results enable the use of this technique in a wireless link control loop.

  6. Internal quality control of RIA with Tonks error calculation method

    International Nuclear Information System (INIS)

    Chen Xiaodong

    1996-01-01

According to the methodological features of RIA, an internal quality control chart based on the Tonks error calculation method, suitable for RIA, is designed. The quality control chart defines the allowable error in terms of the normal reference range. The method is simple to perform and easy to grasp at a glance. Taking the determination of T3 and T4 as an example, the calculation of the allowable error, the drawing of the quality control chart and the analysis of the results are introduced.
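
    Tonks' rule is commonly stated as an allowable error of ±1/4 of the reference range, expressed as a percentage of the mean of that range and often capped at ±10%. A sketch under that assumption follows; the reference range shown is illustrative, not the paper's data:

    ```python
    def tonks_allowable_error(ref_low, ref_high, cap=10.0):
        """Tonks' rule: allowable error (%) = (reference range / 4) relative
        to the mean of the range, commonly capped at +/-10%."""
        mean = (ref_low + ref_high) / 2.0
        return min(100.0 * (ref_high - ref_low) / (4.0 * mean), cap)

    # Illustrative reference range for serum T4 (nmol/L), not the paper's values.
    print(f"{tonks_allowable_error(60.0, 160.0):.1f}%")
    ```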

  7. Electrical, instrumentation, and control codes and standards

    International Nuclear Information System (INIS)

    Kranning, A.N.

    1978-01-01

    During recent years numerous documents in the form of codes and standards have been developed and published to provide design, fabrication and construction rules and criteria applicable to instrumentation, control and power distribution facilities for nuclear power plants. The contents of this LTR were prepared by NUS Corporation under Subcontract K5108 and provide a consolidated index and listing of the documents selected for their application to procurement of materials and design of modifications and new construction at the LOFT facility. These codes and standards should be applied together with the National Electrical Code, the ID Engineering Standards and LOFT Specifications to all LOFT instrument and electrical design activities

  8. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill the users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  9. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Directory of Open Access Journals (Sweden)

    Mohammad Abdur Razzaque

    2014-12-01

Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill the users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.

  10. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill the users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
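
    The core idea of network-coding-based recovery can be shown in a few lines: a relay transmits an XOR combination of two packets, so a receiver holding any two of the three transmissions can recover the third. The packet contents below are arbitrary:

    ```python
    def xor_packets(a, b):
        """Bitwise XOR of two equal-length packets."""
        return bytes(x ^ y for x, y in zip(a, b))

    p1, p2 = b"\x10\x20\x30", b"\x0a\x0b\x0c"
    coded = xor_packets(p1, p2)          # relay transmits p1 XOR p2

    # Receiver missed p2 but has p1 and the coded packet: recover p2.
    assert xor_packets(coded, p1) == p2
    ```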

  11. RODMOD: a code for control rod positioning

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.

    1978-11-01

The report documents a computer code which has been implemented to position control rods according to a prescribed schedule during the calculation of a reactor history. Control rods may be represented explicitly, with or without internal black absorber conditions in selected energy groups, or by fractional insertion, or both, in a problem. There is provision for control rod followers, for movement of materials through a series of zones in a closed loop, and for shutdown rod insertion and subsequent removal to allow the reactor history calculation to be continued. This code is incorporated into the system containing the VENTURE diffusion theory neutronics code and the BURNER exposure code for routine use. The implemented automated procedures cause the prescribed control rod insertion schedule to be applied without requiring additional user input data during the calculation of a reactor operating history.

  12. An audit of the nature and impact of clinical coding subjectivity variability and error in otolaryngology.

    Science.gov (United States)

    Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S

    2013-12-01

To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multidisciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multidisciplinary team (MDT) coding in respect of primary and secondary diagnoses and procedures, health resource groupings (HRGs) and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 instances of change to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding, and 514 HRGs (16%) changed. There was an income variance of £343,169, or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery (in particular skull-base surgery), laryngology (within that, tracheostomy) and emergency admissions (especially epistaxis management). A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor MDT. There were 12 further HRG changes (5%), and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes to activity without further quality assurance resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG 3.5 and compared with a previous audit of 1250 patients performed between 2007 and 2008. This comparison showed a reduction in the baseline rate of health resource

  13. Error analysis of supercritical water correlations using ATHLET system code under DHT conditions

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, J., E-mail: jeffrey.samuel@uoit.ca [Univ. of Ontario Inst. of Tech., Oshawa, ON (Canada)

    2014-07-01

The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is used for the analysis of anticipated and abnormal plant transients, including safety analysis of Light Water Reactors (LWRs) and Russian Graphite-Moderated High Power Channel-type Reactors (RBMKs). The range of applicability of ATHLET has been extended to supercritical water by updating the fluid- and transport-properties packages, thus enabling the code to be used in the analysis of SuperCritical Water-cooled Reactors (SCWRs). Several well-known heat-transfer correlations for supercritical fluids were added to the ATHLET code, and a numerical model was created to represent an experimental test section. In this work, the error in the Heat Transfer Coefficient (HTC) calculated by the ATHLET model is studied, along with the ability of the various correlations to predict different heat transfer regimes. (author)

  14. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    Science.gov (United States)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤd Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).

  15. Generalized perturbation theory error control within PWR core-loading pattern optimization

    International Nuclear Information System (INIS)

    Imbriani, J.S.; Turinsky, P.J.; Kropaczek, D.J.

    1995-01-01

The fuel management optimization code FORMOSA-P has been developed to determine the family of near-optimum loading patterns for PWR reactors. The code couples the optimization technique of simulated annealing (SA) with a generalized perturbation theory (GPT) model for evaluating core physics characteristics. To ensure the accuracy of the GPT predictions, as well as to maximize the efficiency of the SA search, a GPT error control method has been developed.
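
    A generic sketch of the SA search that such codes couple with a fast physics evaluator; the toy objective below merely stands in for the GPT-based core physics model, which this sketch does not attempt to reproduce:

    ```python
    import math
    import random

    def simulated_annealing(cost, state, temp=1.0, cooling=0.995, steps=5000):
        """Generic SA loop of the kind FORMOSA-P couples with its GPT model:
        propose a swap, accept downhill moves always and uphill moves with
        Boltzmann probability, and cool the temperature."""
        best = current = state[:]
        for _ in range(steps):
            cand = current[:]
            i, j = random.sample(range(len(cand)), 2)
            cand[i], cand[j] = cand[j], cand[i]      # swap two assemblies
            delta = cost(cand) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current = cand
                if cost(current) < cost(best):
                    best = current[:]
            temp *= cooling
        return best

    # Toy stand-in for a core-physics objective: sort a shuffled pattern.
    pattern = random.sample(range(12), 12)
    print(simulated_annealing(lambda s: sum(abs(v - k) for k, v in enumerate(s)),
                              pattern))
    ```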

  16. Error Control in Distributed Node Self-Localization

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2008-03-01

Location information of nodes in an ad hoc sensor network is essential to many tasks such as routing, cooperative sensing, and service delivery. Distributed node self-localization is lightweight and requires little communication overhead, but often suffers from the adverse effects of error propagation. Unlike other localization papers, which focus on designing elaborate localization algorithms, this paper takes a different perspective, focusing on the error propagation problem and addressing questions such as where localization error comes from and how it propagates from node to node. To prevent error from propagating and accumulating, we develop an error-control mechanism based on characterization of node uncertainties and discrimination between neighboring nodes. The error-control mechanism uses only local knowledge and is fully decentralized. Simulation results have shown that the active selection strategy significantly mitigates the effect of error propagation for both range and directional sensors. It greatly improves localization accuracy and robustness.

  17. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitations of sensor nodes. Network coding increases the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristics inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of social characteristics, coordinate with each other and can correct propagated errors whose fraction is exactly 100% in WSNs where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
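
    The L1-optimization step alone can be sketched as a linear program: recover a sparse error vector e from a syndrome s = Be by minimizing ||e||_1. This is a real-valued toy of that one step, not the paper's full scheme with trust values and the secret channel; the matrix and error below are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def l1_recover_error(B, s):
        """Recover a sparse error vector e from the syndrome s = B @ e by
        minimizing ||e||_1, written as an LP over e = e_pos - e_neg with
        e_pos, e_neg >= 0."""
        m, n = B.shape
        c = np.ones(2 * n)                   # minimize sum(e_pos + e_neg)
        A_eq = np.hstack([B, -B])
        res = linprog(c, A_eq=A_eq, b_eq=s, bounds=[(0, None)] * (2 * n))
        return np.round(res.x[:n] - res.x[n:], 6)

    rng = np.random.default_rng(0)
    B = rng.standard_normal((8, 16))         # wide parity-check-like matrix
    e_true = np.zeros(16); e_true[3] = 1.5   # single corrupted link
    print(l1_recover_error(B, B @ e_true))   # recovers the sparse error
    ```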

  18. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  19. Instanton-based techniques for analysis and reduction of error floors of LDPC codes

    International Nuclear Information System (INIS)

    Chertkov, Michael; Chilappagari, Shashi K.; Stepanov, Mikhail G.; Vasic, Bane

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  20. Instanton-based techniques for analysis and reduction of error floor of LDPC codes

    Energy Technology Data Exchange (ETDEWEB)

Chertkov, Michael [Los Alamos National Laboratory]; Chilappagari, Shashi K [Los Alamos National Laboratory]; Stepanov, Mikhail G [Los Alamos National Laboratory]; Vasic, Bane [Senior Member, IEEE]

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  1. Designing an efficient LT-code with unequal error protection for image transmission

    Science.gov (United States)

    S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.

    2015-10-01

The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bits. A channel code can be used to reduce the effect of such failures. In 2002, the Luby Transform code (LT-code) was introduced, and it was shown to be very efficient when the binary erasure channel model is used. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
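
    A sketch of the LT encoding step under a simplified (ideal-soliton-style) degree distribution; a UEP design such as the ones discussed would additionally bias the block selection toward the perceptually important (e.g. early SPIHT) blocks instead of choosing uniformly. Block sizes and contents are illustrative:

    ```python
    import random

    def lt_encode_symbol(blocks, rng):
        """Produce one LT-coded symbol: draw a degree, pick that many
        distinct source blocks uniformly, and XOR them together."""
        k = len(blocks)
        # Ideal-soliton-style degree distribution for k source blocks.
        weights = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
        degree = rng.choices(range(1, k + 1), weights=weights)[0]
        chosen = rng.sample(range(k), degree)
        symbol = bytes(len(blocks[0]))
        for i in chosen:
            symbol = bytes(a ^ b for a, b in zip(symbol, blocks[i]))
        return chosen, symbol

    rng = random.Random(1)
    source = [bytes([b] * 4) for b in range(8)]   # eight 4-byte source blocks
    print(lt_encode_symbol(source, rng))
    ```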

  2. Inherent Error in Asynchronous Digital Flight Controls.

    Science.gov (United States)

    1980-02-01

operation will be eliminated. If T* is close to T, the inherent error (eA) is a small value. Then the deficiency of the basic model, which is de... [...] ...indicate the channel failure. To reduce this deficiency, the new model computes a tolerance value equal to the maximum steady-state sample covariance of the

  3. Forward Error Correcting Codes for 100 Gbit/s Optical Communication Systems

    DEFF Research Database (Denmark)

    Li, Bomin

, a denser WDM grid changes the shape of the BER curve, based on the analysis of the experimental results, and therefore requires a stronger FEC code. Furthermore, a proof-of-concept hardware implementation is presented. The tradeoff between the code length, the CG and the complexity requires more consideration... low-complexity, low-power-consumption FEC hardware implementation plays an important role in the next generation of energy-efficient networks. Thirdly, joint research is required for FEC-integrated applications, as the error distribution in channels relies on many factors such as non-linearity in long-distance optical... and their associated experimental demonstration and hardware implementation. The demonstrated high CG, flexibility, robustness and scalability reveal the important role of FEC techniques in the next generation of high-speed, high-capacity, high-performance and energy-efficient fiber-optic data transmission networks.

  4. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis contains a crosstalk calculation of the mutual field variance for different numbers of users. It is shown that the crosstalk noise depends strongly on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results for the power fluctuation at the received channel due to the data modulation at the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  5. Design of power controller in CDMA system with power and SIR error minimization

    Institute of Scientific and Technical Information of China (English)

    Shulan KONG; Huanshui ZHANG; Zhaosheng ZHANG; Hongxia WANG

    2007-01-01

In this paper, an uplink power control problem is considered for code division multiple access (CDMA) systems. A distributed algorithm is proposed based on linear quadratic optimal control theory. The proposed scheme minimizes the sum of the power and the error in the signal-to-interference ratio (SIR). A power controller is designed by constructing an optimization problem of the stochastic linear quadratic type in Krein space and solving a Kalman filter problem.
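
    For comparison with the LQ/Kalman design, the classic distributed power-control update (a Foschini-Miljanic-style iteration, not the paper's controller) drives each user's SIR toward a target using only locally measured quantities. The link gains and targets below are illustrative:

    ```python
    def power_control_step(p, gains, noise, target_sir):
        """One distributed update p_i <- (target / SIR_i) * p_i; each user
        needs only its own measured SIR."""
        n = len(p)
        new_p = []
        for i in range(n):
            interference = sum(gains[i][j] * p[j] for j in range(n) if j != i) + noise
            sir = gains[i][i] * p[i] / interference
            new_p.append(min(target_sir / sir * p[i], 1.0))  # clamp to max power
        return new_p

    p = [0.1, 0.1, 0.1]
    gains = [[1.0, 0.1, 0.1], [0.1, 1.0, 0.1], [0.1, 0.1, 1.0]]
    for _ in range(30):
        p = power_control_step(p, gains, noise=0.01, target_sir=3.0)
    print([round(x, 4) for x in p])   # converges when the target is feasible
    ```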

  6. A neural fuzzy controller learning by fuzzy error propagation

    Science.gov (United States)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

In this paper, we describe a procedure for integrating techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles; this is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task knowing only the global state and the fuzzy error.

  7. Impact of documentation errors on accuracy of cause of death coding in an educational hospital in Southern Iran.

    Science.gov (United States)

    Haghighi, Mohammad Hosein Hayavi; Dehghani, Mohammad; Teshnizi, Saeid Hoseini; Mahmoodi, Hamid

    2014-01-01

Accurate cause of death coding leads to organised and usable death information, but there are factors that influence documentation on death certificates and therefore affect the coding. We reviewed the role of documentation errors in the accuracy of death coding at Shahid Mohammadi Hospital (SMH), Bandar Abbas, Iran. We studied the death certificates of all deceased patients in SMH from October 2010 to March 2011. Researchers determined and coded the underlying cause of death on the death certificates according to the guidelines issued by the World Health Organization in Volume 2 of the International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10). The necessary ICD coding rules (such as the General Principle, Rules 1-3, the modification rules and other instructions about death coding) were applied to select the underlying cause of death on each certificate. Demographic details and documentation errors were then extracted. Data were analysed with descriptive statistics and chi-square tests. The accuracy rate of cause of death coding was 51.7%, demonstrating a statistically significant relationship (p=.001) with major errors but no such relationship with minor errors. Factors that result in poor quality of cause of death coding in SMH are lack of coder training, documentation errors and the undesirable structure of death certificates.

  8. Quantum states and their marginals. From multipartite entanglement to quantum error-correcting codes

    International Nuclear Information System (INIS)

    Huber, Felix Michael

    2017-01-01

    At the heart of the curious phenomenon of quantum entanglement lies the relation between the whole and its parts. In my thesis, I explore different aspects of this theme in the multipartite setting by drawing connections to concepts from statistics, graph theory, and quantum error-correcting codes: first, I address the case when joint quantum states are determined by their few-body parts and by Jaynes' maximum entropy principle. This can be seen as an extension of the notion of entanglement, with less complex states already being determined by their few-body marginals. Second, I address the conditions for certain highly entangled multipartite states to exist. In particular, I present the solution of a long-standing open problem concerning the existence of an absolutely maximally entangled state on seven qubits. This sheds light on the algebraic properties of pure quantum states, and on the conditions that constrain the sharing of entanglement amongst multiple particles. Third, I investigate Ulam's graph reconstruction problems in the quantum setting, and obtain legitimacy conditions of a set of states to be the reductions of a joint graph state. Lastly, I apply and extend the weight enumerator machinery from quantum error correction to investigate the existence of codes and highly entangled states in higher dimensions. This clarifies the physical interpretation of the weight enumerators and of the quantum MacWilliams identity, leading to novel applications in multipartite entanglement.

  9. Reduced phase error through optimized control of a superconducting qubit

    International Nuclear Information System (INIS)

    Lucero, Erik; Kelly, Julian; Bialczak, Radoslaw C.; Lenander, Mike; Mariantoni, Matteo; Neeley, Matthew; O'Connell, A. D.; Sank, Daniel; Wang, H.; Weides, Martin; Wenner, James; Cleland, A. N.; Martinis, John M.; Yamamoto, Tsuyoshi

    2010-01-01

Minimizing phase and other errors in experimental quantum gates allows higher-fidelity quantum processing. To quantify and correct for phase errors in particular, we have developed an experimental metrology - amplified phase error (APE) pulses - that amplifies and helps identify phase errors in general multilevel qubit architectures. In order to correct for both phase and amplitude errors specific to virtual transitions and leakage outside of the qubit manifold, we implement 'half derivative', an experimental simplification of derivative reduction by adiabatic gate (DRAG) control theory. The phase errors are lowered by about a factor of five using this method, to ∼1.6 deg. per gate, and can be tuned to zero. Leakage outside the qubit manifold, to the qubit |2> state, is also reduced to ∼10^-4 for 20% faster gates.

  10. Composite Gauss-Legendre Quadrature with Error Control

    Science.gov (United States)

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
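
    A minimal version of the idea, assuming the error on each subinterval is estimated by comparing an n-point and a 2n-point rule and bisecting until the tolerance is met; the article's particular error estimator may differ:

    ```python
    import numpy as np

    def gauss_legendre(f, a, b, n):
        """n-point Gauss-Legendre approximation of the integral of f over [a, b]."""
        x, w = np.polynomial.legendre.leggauss(n)
        mid, half = (a + b) / 2.0, (b - a) / 2.0
        return half * np.sum(w * f(mid + half * x))

    def adaptive_gl(f, a, b, tol=1e-10, n=5):
        """Composite rule with error control: accept a subinterval when the
        n-point and 2n-point estimates agree to within tol, else bisect."""
        coarse, fine = gauss_legendre(f, a, b, n), gauss_legendre(f, a, b, 2 * n)
        if abs(fine - coarse) < tol:
            return fine
        m = (a + b) / 2.0
        return adaptive_gl(f, a, m, tol / 2, n) + adaptive_gl(f, m, b, tol / 2, n)

    print(adaptive_gl(np.sin, 0.0, np.pi))   # exact value is 2
    ```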

  11. Data mining of air traffic control operational errors

    Science.gov (United States)

    2006-01-01

In this paper we present the results of applying data mining techniques to identify patterns and anomalies in air traffic control operational errors (OEs). Reducing the OE rate is of high importance and remains a challenge in the aviation saf...

  12. The impact of transmission errors on progressive 720 lines HDTV coded with H.264

    Science.gov (United States)

    Brunnström, Kjell; Stålenbring, Daniel; Pettersson, Martin; Gustafsson, Jörgen

    2010-02-01

TV delivered over networks based on the Internet Protocol, i.e. IPTV, is moving towards high definition (HDTV). There has been quite a lot of work on how HDTV is affected by different codecs and bitrates, but the impact of transmission errors over IP networks has been less studied. This study focused on the H.264-encoded 1280x720 progressive HDTV format and compared three different concealment methods under different packet loss rates: one included in a proprietary decoder, one that is part of FFMPEG, and freezing of different durations. One aim was to simulate what typical IPTV set-top boxes do when encountering packet loss. Another aim was to study whether presentation upscaled to the full HDTV screen, or pixel-mapped in a smaller area at the centre of the screen, has an effect on quality. The results show that there were differences between the packet loss concealment methods in FFMPEG and in the proprietary decoder. Freezing seemed to have an effect similar to what has been reported before. For low rates of transmission errors the coding impairments have an impact on quality, but for higher rates of transmission errors they do not, since they become overshadowed by the transmission errors. An interesting effect was discovered whereby the higher-bitrate videos go from having higher quality at lower packet loss rates to having lower quality than the lower-bitrate video at higher packet loss rates. The difference between the ways of presenting the video, i.e. upscaled or not upscaled, was significant at the 95% level, but only marginally.

  13. HyDEn: A Hybrid Steganocryptographic Approach for Data Encryption Using Randomized Error-Correcting DNA Codes

    Directory of Open Access Journals (Sweden)

    Dan Tulpan

    2013-01-01

This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and in the cyclic permutations applied to the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
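
    A much-simplified sketch of the underlying encoding idea only: one extended-ASCII character becomes four quaternary digits mapped to bases, plus a mod-4 parity base that detects (but, unlike HyDEn's Hamming code words, cannot locate or correct) a single substitution. This is not the paper's actual code or key scheme:

    ```python
    BASES = "ACGT"

    def encode_char(ch):
        """Map one extended-ASCII character to 4 quaternary digits (bases)
        plus a mod-4 parity base."""
        v = ord(ch)
        digits = [(v >> shift) & 0b11 for shift in (6, 4, 2, 0)]
        digits.append(sum(digits) % 4)                # parity base
        return "".join(BASES[d] for d in digits)

    def check(strand):
        digits = [BASES.index(b) for b in strand]
        return sum(digits[:-1]) % 4 == digits[-1]

    s = encode_char("K")
    print(s, check(s), check("A" + s[1:]))  # intact passes, corrupted fails
    ```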

  14. Controlling qubit drift by recycling error correction syndromes

    Science.gov (United States)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE

  15. Error control coding schemes in digital systems | Kwaha | Global ...

    African Journals Online (AJOL)

Global Journal of Pure and Applied Sciences, Vol 8, No 3 (2002).

  17. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1986-01-01

We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of the temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both the evaluated data and the codes are identified and eliminated in the 1985 versions of these codes. This paper helps to (1) inform code users to use only the 1985 versions of LINEAR, RECENT and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate them. (author)

  18. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1985-01-01

The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of the temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both the evaluated data and the codes are identified and eliminated in the 1985 versions of these codes. This paper helps to (1) inform code users to use only the 1985 versions of LINEAR, RECENT and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate them.

  19. On the decoding process in ternary error-correcting output codes.

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework for dealing with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there have been no proper studies analyzing the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require a redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies on a set of UCI Machine Learning Repository data sets and on a real traffic-sign categorization problem. The experimental results show that, with the new decoding strategies, the performance of the ECOC design is significantly improved.
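
    A minimal sketch of ternary ECOC decoding that ignores the zero positions and normalizes by the number of positions actually compared, one simple way to counter the bias the zero symbol introduces (this is an illustration, not the paper's proposed measures):

    ```python
    import numpy as np

    def ternary_ecoc_decode(code_matrix, predictions):
        """Assign the class whose ternary code word (+1/-1/0) is closest to
        the binary classifiers' outputs, ignoring 'do not care' (0) positions
        and normalizing by the number of positions actually compared."""
        scores = []
        for row in code_matrix:
            active = row != 0
            disagreements = np.sum(row[active] != predictions[active])
            scores.append(disagreements / max(active.sum(), 1))
        return int(np.argmin(scores))

    # 3 classes, 4 binary problems; 0 marks classes a classifier ignores.
    M = np.array([[+1, +1,  0, -1],
                  [-1,  0, +1, +1],
                  [ 0, -1, -1, +1]])
    print(ternary_ecoc_decode(M, np.array([+1, +1, -1, -1])))  # -> class 0
    ```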

  20. When soft controls get slippery: User interfaces and human error

    International Nuclear Information System (INIS)

    Stubler, W.F.; O'Hara, J.M.

    1998-01-01

    Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety

  1. Servo control booster system for minimizing following error

    Science.gov (United States)

    Wise, W.L.

    1979-07-26

A closed-loop feedback-controlled servo system is disclosed which reduces the command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop, on a by-exception basis when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to the conventional servo control means. By operating the unique second position feedback control loop used herein at the appropriate clocking rate, the command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.

  2. Experimental Evaluation of a Mixed Controller That Amplifies Spatial Errors and Reduces Timing Errors

    Directory of Open Access Journals (Sweden)

    Laura Marchal-Crespo

    2017-06-01

Research on motor learning suggests that training with haptic guidance enhances learning of the timing components of motor tasks, whereas error amplification is better for learning the spatial components. We present a novel mixed guidance controller that combines haptic guidance and error amplification to simultaneously promote learning of the timing and spatial components of complex motor tasks. The controller is realized using a force field around the desired position. This force field has a stable manifold tangential to the trajectory that guides subjects in velocity-related aspects, and an unstable manifold perpendicular to the trajectory, which amplifies the perpendicular (spatial) error. We also designed a controller that applies randomly varying, unpredictable disturbing forces to enhance the subjects' active participation by pushing them away from their "comfort zone." We conducted an experiment with thirty-two healthy subjects to evaluate the impact of four different training strategies on motor skill learning and self-reported motivation: (i) no haptics, (ii) mixed guidance, (iii) perpendicular error amplification and tangential haptic guidance provided in sequential order, and (iv) randomly varying disturbing forces. Subjects trained two motor tasks using ARMin IV, a robotic exoskeleton for upper limb rehabilitation: following circles with an ellipsoidal speed profile, and moving along a 3D line following a complex speed profile. Mixed guidance showed no detectable learning advantages over the other groups. The results suggest that the effectiveness of the training strategies depends on the subjects' initial skill level. Mixed guidance seemed to benefit subjects who performed the circle task with smaller errors during baseline (i.e., initially more skilled subjects), while training with no haptics was more beneficial for subjects who made larger errors (i.e., less skilled subjects). Therefore, perhaps the high functional

  3. Smart photodetector arrays for error control in page-oriented optical memory

    Science.gov (United States)

    Schaffer, Maureen Elizabeth

    1998-12-01

    Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data

  4. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    J. Lang (Jens); J.G. Verwer (Jan)

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  6. Analysis of error floor of LDPC codes under LP decoding over the BSC

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi [UNIV OF AZ; Vasic, Bane [UNIV OF AZ; Stepanov, Mikhail [UNIV OF AZ

    2009-01-01

    We consider linear programming (LP) decoding of a fixed low-density parity-check (LDPC) code over the binary symmetric channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We propose an efficient algorithm termed the instanton search algorithm (ISA) which, given a random input, generates a set of flips called the BSC-instanton, and prove that: (a) the LP decoder fails for any set of flips with support vector including an instanton; (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. We obtain the number of unique instantons of different sizes by running the ISA a sufficient number of times. We then use the instanton statistics to predict the performance of LP decoding over the BSC in the error floor region. We also propose an efficient semi-analytical method to predict the performance of LP decoding over a large range of transition probabilities of the BSC.
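
    The decoder that the ISA repeatedly invokes can be written down compactly. The following is a minimal sketch of Feldman-style LP decoding over the BSC, using SciPy's linprog and a (7,4) Hamming code as a toy stand-in for an LDPC code; the cost vector, the odd-subset parity inequalities, and the fractional-output failure test follow the standard LP relaxation, not code from the paper.

        import itertools
        import numpy as np
        from scipy.optimize import linprog

        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])  # (7,4) Hamming code, toy stand-in for LDPC

        def lp_decode(y, p=0.05):
            """Feldman LP relaxation of ML decoding on a BSC with flip probability p."""
            n = H.shape[1]
            gamma = np.where(y == 0, 1.0, -1.0) * np.log((1 - p) / p)  # per-bit cost
            A, b = [], []
            for row in H:
                nbrs = list(np.flatnonzero(row))
                for k in range(1, len(nbrs) + 1, 2):          # odd-sized subsets S
                    for S in itertools.combinations(nbrs, k):
                        a = np.zeros(n)
                        a[list(S)] = 1.0
                        a[[i for i in nbrs if i not in S]] = -1.0
                        A.append(a)
                        b.append(len(S) - 1)
            res = linprog(gamma, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, 1)] * n)
            return res.x  # integral vertex: ML codeword; fractional: pseudo-codeword (LP failure)

        received = np.array([1, 0, 0, 0, 0, 0, 0])  # all-zero codeword with a single flip
        print(lp_decode(received))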

  7. Base data for looking-up tables of calculation errors in JACS code system

    International Nuclear Information System (INIS)

    Murazaki, Minoru; Okuno, Hiroshi

    1999-03-01

    The report intends to clarify the base data for the looking-up tables of calculation errors cited in 'Nuclear Criticality Safety Handbook'. The tables were obtained by classifying the benchmarks made by the JACS code system, and there are two kinds: one kind is for fuel systems in general geometry with a reflector, and another kind is for fuel systems specific to simple geometry with a reflector. Benchmark systems were further categorized into eight groups according to the fuel configuration: homogeneous or heterogeneous; and fuel kind: uranium, plutonium and their mixtures, etc. The base data for fuel systems in general geometry with a reflector are summarized in this report for the first time. The base data for fuel systems in simple geometry with a reflector were summarized in a technical report published in 1987. However, the data in a group named homogeneous low-enriched uranium were further selected out later by the working group for making the Nuclear Criticality Safety Handbook. This report includes the selection. As a project has been organized by OECD/NEA for evaluation of criticality safety benchmark experiments, the results are also described. (author)

  8. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
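
    For (0,1)-vectors this comparison is a component-wise XOR, and nearest-codeword decoding picks the minimum-weight error vector. A toy illustration (the 6-bit code below is invented for the example):

        import numpy as np

        codewords = np.array([[0, 0, 0, 0, 0, 0],
                              [1, 0, 1, 0, 1, 0],
                              [0, 1, 0, 1, 0, 1],
                              [1, 1, 1, 1, 1, 1]])
        received = np.array([1, 0, 1, 0, 0, 0])

        error_vectors = codewords ^ received      # one candidate error vector per codeword
        weights = error_vectors.sum(axis=1)       # Hamming weight of each candidate
        decoded = codewords[weights.argmin()]     # decide on the lightest error pattern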

  9. A new controller for the JET error field correction coils

    International Nuclear Information System (INIS)

    Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.

    2005-01-01

    This paper describes the hardware and the software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework, recently designed for real-time applications at JET, and replaces the old disruption feedback controller, increasing the flexibility and optimization of the system. The use of conventional hardware has required a particular effort in designing the software part in order to meet the specifications. The peculiarities of the new controller are highlighted, such as its very useful trigger logic interface, which allows, in principle, exploring various error field experiment scenarios.

  10. Differing Air Traffic Controller Responses to Similar Trajectory Prediction Errors

    Science.gov (United States)

    Mercer, Joey; Hunt-Espinosa, Sarah; Bienert, Nancy; Laraway, Sean

    2016-01-01

    A Human-In-The-Loop simulation was conducted in January of 2013 in the Airspace Operations Laboratory at NASA's Ames Research Center. The simulation airspace included two en route sectors feeding the northwest corner of Atlanta's Terminal Radar Approach Control. The focus of this paper is on how uncertainties in the study's trajectory predictions impacted the controllers' ability to perform their duties. Of particular interest is how the controllers interacted with the delay information displayed in the meter list and data block while managing the arrival flows. Due to wind forecasts with 30-knot over-predictions and 30-knot under-predictions, delay value computations included errors of similar magnitude, albeit in opposite directions. However, when performing their duties in the presence of these errors, did the controllers issue clearances of similar magnitude, albeit in opposite directions?

  11. Lithographically encoded polymer microtaggant using high-capacity and error-correctable QR code for anti-counterfeiting of drugs.

    Science.gov (United States)

    Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook

    2012-11-20

    A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system, which has been designed for improving the safety and the effectiveness of the vision-based commands. It simulates these commands in a virtual environment. It attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors or operator errors as well.

  13. Recent advances in coding theory for near error-free communications

    Science.gov (United States)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  14. Fast Erasure-and error decoding of algebraic geometry codes up to the Feng-Rao bound

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Sakata, Shojiro

    1998-01-01

    This correspondence gives an errata (that is, erasure-and-error) decoding algorithm for one-point algebraic-geometry codes up to the Feng-Rao designed minimum distance, using Sakata's multidimensional generalization of the Berlekamp-Massey algorithm and the voting procedure of Feng and Rao.

  15. Decoding linear error-correcting codes up to half the minimum distance with Gröbner bases

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.; Sala, M.; Mora, T.; Perret, L.; Sakata, S.; Traverso, C.

    2009-01-01

    In this short note we show how one can decode linear error-correcting codes up to half the minimum distance via solving a system of polynomial equations over a finite field. We also explicitly present the reduced Gröbner basis for the system considered.
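
    A toy instance of this idea, posed in SymPy for single-error decoding of the (7,4) Hamming code; the particular system below (syndrome equations, Boolean field equations, and weight-one constraints) is our own illustrative construction, not the note's:

        from itertools import combinations
        from sympy import symbols, groebner

        e = symbols('e1:8')                                 # unknown error bits over GF(2)
        checks = [(0, 2, 4, 6), (1, 2, 5, 6), (3, 4, 5, 6)] # Hamming parity checks
        syndrome = (1, 1, 0)                                # consistent with an error at bit 3

        polys  = [sum(e[i] for i in c) + s for c, s in zip(checks, syndrome)]
        polys += [x**2 + x for x in e]                      # field equations x^2 = x over GF(2)
        polys += [x * y for x, y in combinations(e, 2)]     # at most one error bit set

        G = groebner(polys, *e, modulus=2, order='lex')
        print(G)  # reduces to e3 + 1 and ei for i != 3, locating the single error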

  16. Association Between Workarounds and Medication Administration Errors in Bar Code-Assisted Medication Administration : Protocol of a Multicenter Study

    NARCIS (Netherlands)

    van der Veen, Willem; van den Bemt, Patricia Mla; Bijlsma, Maarten; de Gier, Han J; Taxis, Katja

    2017-01-01

    BACKGROUND: Information technology-based methods such as bar code-assisted medication administration (BCMA) systems have the potential to reduce medication administration errors (MAEs) in hospitalized patients. In practice, however, systems are often not used as intended, leading to workarounds.

  17. Code Samples Used for Complexity and Control

    Science.gov (United States)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  18. Association between workarounds and medication administration errors in bar-code-assisted medication administration in hospitals

    NARCIS (Netherlands)

    van der Veen, Willem; van den Bemt, Patricia M L A; Wouters, Hans; Bates, David W; Twisk, Jos W R; de Gier, Johan J; Taxis, Katja

    Objective: To study the association of workarounds with medication administration errors using barcode-assisted medication administration (BCMA), and to determine the frequency and types of workarounds and medication administration errors. Materials and Methods: A prospective observational study in

  19. Quaternion error-based optimal control applied to pinpoint landing

    Science.gov (United States)

    Ghiglino, Pablo

    Accurate control for pinpoint planetary landing - i.e., achieving landing errors in the order of 100 m for unmanned missions - is a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is also affected by the well-known trade-off in UAV control that for complex underlying models the control is sub-optimal, while optimal control is applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the results are two novel optimal control algorithms, namely OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with other industry-standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was developed as part of this research in order to tune in an online manner the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrices tuning, reduced computation time required, and no prior knowledge of the navigation or guidance strategies needed. Further simulations on real planetary landing trajectories have shown that these controllers have the capacity of achieving landing errors in the order of pinpoint landing requirements, making them not only very precise UAV controllers, but also potential candidates for pinpoint landing unmanned missions.

  20. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong

    2011-11-25

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.

  1. MODELS OF AIR TRAFFIC CONTROLLERS ERRORS PREVENTION IN TERMINAL CONTROL AREAS UNDER UNCERTAINTY CONDITIONS

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2017-03-01

    Full Text Available Purpose: the aim of this study is to research applied models of air traffic controllers' error prevention in terminal control areas (TMA) under uncertainty conditions. In this work a theoretical framework describing safety events and errors of air traffic controllers connected with operations in the TMA is proposed. Methods: optimisation of the formal description of the terminal control area based on the Threat and Error Management model and the TMA network model of air traffic flows. Results: the human factors variables associated with safety events in the work of air traffic controllers under uncertainty conditions were obtained. Principles for applying the Threat and Error Management model to air traffic controller operations and the TMA network model of air traffic flows were proposed. Discussion: an information processing context for preventing air traffic controller errors and examples of threats in the work of air traffic controllers relevant to TMA operations under uncertainty conditions are discussed.

  2. Error message recording and reporting in the SLC control system

    International Nuclear Information System (INIS)

    Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.

    1985-01-01

    Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed

  4. Nonlinear control of ships minimizing the position tracking errors

    Directory of Open Access Journals (Sweden)

    Svein P. Berge

    1999-07-01

    Full Text Available In this paper, a nonlinear tracking controller with integral action for ships is presented. The controller is based on state feedback linearization. Exponential convergence of the vessel-fixed position and velocity errors is proven by using Lyapunov stability theory. Since we only have two control devices, a rudder and a propeller, we choose to control the longship and the sideship position errors to zero while the heading is stabilized indirectly. A Virtual Reference Point (VRP) is defined at the bow or ahead of the ship. The VRP is used for tracking control. It is shown that the distance from the center of rotation to the VRP will influence the stability of the zero dynamics. By selecting the VRP at the bow or even ahead of the bow, the damping in yaw can be increased and the zero dynamics is stabilized. Hence, the heading angle will be less sensitive to wind, currents and waves. The control law is simulated by using a nonlinear model of the Japanese training ship Shiojimaru with excellent results. Wind forces are added to demonstrate the robustness and performance of the integral controller.

  5. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  6. Minimizing human error in radiopharmaceutical preparation and administration via a bar code-enhanced nuclear pharmacy management system.

    Science.gov (United States)

    Hakala, John L; Hung, Joseph C; Mosman, Elton A

    2012-09-01

    The objective of this project was to ensure correct radiopharmaceutical administration through the use of a bar code system that links patient and drug profiles with on-site information management systems. This new combined system would minimize the amount of manual human manipulation, which has proven to be a primary source of error. The most common reason for dosing errors is improper patient identification when a dose is obtained from the nuclear pharmacy or when a dose is administered. A standardized electronic transfer of information from radiopharmaceutical preparation to injection will further reduce the risk of misadministration. Value stream maps showing the flow of the patient dose information, as well as potential points of human error, were developed. Next, a future-state map was created that included proposed corrections for the most common critical sites of error. Transitioning the current process to the future state will require solutions that address these sites. To optimize the future-state process, a bar code system that links the on-site radiology management system with the nuclear pharmacy management system was proposed. A bar-coded wristband connects the patient directly to the electronic information systems. The bar code-enhanced process linking the patient dose with the electronic information reduces the number of crucial points for human error and provides a framework to ensure that the prepared dose reaches the correct patient. Although the proposed flowchart is designed for a site with an in-house central nuclear pharmacy, much of the framework could be applied by nuclear medicine facilities using unit doses. An electronic connection between information management systems to allow the tracking of a radiopharmaceutical from preparation to administration can be a useful tool in preventing the mistakes that are an unfortunate reality for any facility.

  7. Human error mode identification for NPP main control room operations using soft controls

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol

    2011-01-01

    The operation environment of main control rooms (MCRs) in modern nuclear power plants (NPPs) has considerably changed over the years. Advanced MCRs, which have been designed by adapting digital and computer technologies, have simpler interfaces using large display panels, computerized displays, soft controls, computerized procedure systems, and so on. The actions for the NPP operations are performed using soft controls in advanced MCRs. Soft controls have different features from conventional controls. Operators need to navigate the screens to find indicators and controls and manipulate controls using a mouse, touch screens, and so on. Due to these different interfaces, different human errors should be considered in the human reliability analysis (HRA) for advanced MCRs. In this work, human errors that could occur during operation executions using soft controls were analyzed. This work classified the human errors in soft controls into six types, and the reasons that affect the occurrence of the human errors were also analyzed. (author)

  8. Errors in patient specimen collection: application of statistical process control.

    Science.gov (United States)

    Dzik, Walter Sunny; Beckman, Neil; Selleng, Kathleen; Heddle, Nancy; Szczepiorkowski, Zbigniew; Wendel, Silvano; Murphy, Michael

    2008-10-01

    Errors in the collection and labeling of blood samples for pretransfusion testing increase the risk of transfusion-associated patient morbidity and mortality. Statistical process control (SPC) is a recognized method to monitor the performance of a critical process. An easy-to-use SPC method was tested to determine its feasibility as a tool for monitoring quality in transfusion medicine. SPC control charts were adapted to a spreadsheet presentation. Data tabulating the frequency of mislabeled and miscollected blood samples from 10 hospitals in five countries from 2004 to 2006 were used to demonstrate the method. Control charts were produced to monitor process stability. The participating hospitals found the SPC spreadsheet very suitable to monitor the performance of the sample labeling and collection and applied SPC charts to suit their specific needs. One hospital monitored subcategories of sample error in detail. A large hospital monitored the number of wrong-blood-in-tube (WBIT) events. Four smaller-sized facilities, each following the same policy for sample collection, combined their data on WBIT samples into a single control chart. One hospital used the control chart to monitor the effect of an educational intervention. A simple SPC method is described that can monitor the process of sample collection and labeling in any hospital. SPC could be applied to other critical steps in the transfusion processes as a tool for biovigilance and could be used to develop regional or national performance standards for pretransfusion sample collection. A link is provided to download the spreadsheet for free.
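
    The 3-sigma limits behind such charts take only a few lines to compute. A minimal p-chart sketch with invented monthly counts (not the study's data):

        import numpy as np

        errors  = np.array([12, 9, 15, 11, 30, 10])              # mislabeled samples per month (invented)
        samples = np.array([4000, 4100, 3900, 4000, 4050, 3950]) # samples drawn per month

        p_bar = errors.sum() / samples.sum()                     # center line: overall proportion
        se    = np.sqrt(p_bar * (1 - p_bar) / samples)           # per-month standard error
        ucl   = p_bar + 3 * se                                   # upper control limit
        lcl   = np.clip(p_bar - 3 * se, 0, None)                 # lower control limit, floored at 0

        signals = np.flatnonzero(errors / samples > ucl)         # months with special-cause variation
        print(p_bar, signals)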

  9. The using of the control room automation against human errors

    International Nuclear Information System (INIS)

    Kautto, A.

    1993-01-01

    The control room automation has developed very strongly during the 80's at IVO (Imatran Voima Oy). This work expanded strongly with the building of the full-scope training simulator for the Loviisa plant. Important milestones have been, for example, the testing of the Critical Function Monitoring System, a concept developed by Combustion Eng. Inc., in the Loviisa training simulator in 1982; the replacement of the process and simulator computers in Loviisa in 1989 and 1990; and the introduction of computer-based procedures in operator training in 1993. With the development of automation and procedures it is possible to minimize the probability of human error. However, it is not possible to totally eliminate the risks caused by human errors. (orig.)

  10. Quantum quasi-cyclic low-density parity-check error-correcting codes

    International Nuclear Information System (INIS)

    Yuan, Li; Gui-Hua, Zeng; Lee, Moon Ho

    2009-01-01

    In this paper, we propose the approach of employing circulant permutation matrices to construct quantum quasi-cyclic (QC) low-density parity-check (LDPC) codes. Using the proposed approach one may construct some new quantum codes with various lengths and rates and with no cycles of length 4 in their Tanner graphs. In addition, these constructed codes have the advantages of simple implementation and low-complexity encoding. Finally, the decoding approach for the proposed quantum QC LDPC codes is investigated. (general)
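
    The building block of such constructions is the circulant permutation matrix: an identity matrix with its columns cyclically shifted. A small sketch of assembling a quasi-cyclic parity-check matrix from a grid of shift exponents (a generic QC construction with arbitrary shifts, not the paper's specific quantum code):

        import numpy as np

        def cpm(L, shift):
            """L x L circulant permutation matrix: identity shifted cyclically by `shift`."""
            return np.roll(np.eye(L, dtype=int), shift, axis=1)

        L = 5
        shifts = [[0, 1, 3],
                  [2, 4, 1]]          # shift exponents chosen only for illustration
        H = np.block([[cpm(L, s) for s in row] for row in shifts])
        print(H.shape)                # (10, 15) quasi-cyclic parity-check matrix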

  11. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...

  12. [Transposition errors during learning to reproduce a sequence by the right- and the left-hand movements: simulation of positional and movement coding].

    Science.gov (United States)

    Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N

    2012-01-01

    Transposition errors during the reproduction of a hand movement sequence make it possible to receive important information on the internal representation of this sequence in the motor working memory. Analysis of such errors showed that learning to reproduce sequences of the left-hand movements improves the system of positional coding (coding of positions), while learning of the right-hand movements improves the system of vector coding (coding of movements). Learning of the right-hand movements after the left-hand performance involved the system of positional coding "imposed" by the left hand. Learning of the left-hand movements after the right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding or both vector and positional coding.

  13. Implementation of Energy Code Controls Requirements in New Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hart, Philip R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hatten, Mike [Solarc Energy Group, LLC, Seattle, WA (United States); Jones, Dennis [Group 14 Engineering, Inc., Denver, CO (United States); Cooper, Matthew [Group 14 Engineering, Inc., Denver, CO (United States)

    2017-03-24

    Most state energy codes in the United States are based on one of two national model codes; ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy code required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gather during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.

  14. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    International Nuclear Information System (INIS)

    Kim, Isaac H.

    2011-01-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interactions of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  16. Next generation Zero-Code control system UI

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Developing ergonomic user interfaces for control systems is challenging, especially during machine upgrade and commissioning where several small changes may suddenly be required. Zero-code systems, such as *Inspector*, provide agile features for creating and maintaining control system interfaces. Moreover, these next generation Zero-code systems bring simplicity and uniformity and break the boundaries between Users and Developers. In this talk we present *Inspector*, a CERN made Zero-code application development system, and we introduce the major differences and advantages of using Zero-code control systems to develop operational UI.

  17. Five-way Smoking Status Classification Using Text Hot-Spot Identification and Error-correcting Output Codes

    OpenAIRE

    Cohen, Aaron M.

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2...
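
    Error-correcting output codes turn one multi-class problem into several binary ones, so that the codeword distance absorbs mistakes of individual classifiers. A minimal sketch with scikit-learn's OutputCodeClassifier, using the iris dataset as a stand-in for the i2b2 document features:

        from sklearn.datasets import load_iris
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OutputCodeClassifier

        X, y = load_iris(return_X_y=True)                 # stand-in for extracted text features
        ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                                    code_size=2.0,        # code length = 2 x number of classes
                                    random_state=0)
        ecoc.fit(X, y)
        print(ecoc.predict(X[:5]))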

  18. Decoding error-correcting codes with Gröbner bases

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.; Veldhuis, R.; Cronie, H.; Hoeksema, H.

    2007-01-01

    The decoding of arbitrary linear block codes is accomplished by solving a system of quadratic equations by means of Buchberger's algorithm for finding a Gröbner basis. This generalizes the algorithm of Berlekamp-Massey for decoding Reed-Solomon, Goppa and cyclic codes up to half the true minimum distance.

  19. Procedure and code for calculating black control rods taking into account epithermal absorption, code CAS-1

    International Nuclear Information System (INIS)

    Martinc, R.; Trivunac, N.; Zivkovic, Z.

    1964-12-01

    This report describes the computer code CAS-1 and the calculation method and procedure applied for calculating black control rods taking into account epithermal neutron absorption. Results obtained with the supercell method applied to a regular lattice reflected in the multiplying medium are part of this report, in addition to the computer code manual.

  20. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie

  1. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    Science.gov (United States)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, applicable ECC strength of SCM and NAND flash is evaluated independently by fixing ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with small correctable bit is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong and long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.
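
    The strength/latency trade-off can be made concrete with the standard block-failure formula: a t-error-correcting code over an n-bit block fails when more than t bits flip. A sketch in log space to avoid overflow (block size and BER values are illustrative, not the paper's):

        from math import exp, lgamma, log

        def log_binom(n, k):
            return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

        def p_block_fail(n, t, ber, k_max=None):
            """P(more than t of n bits flip), truncating the negligible far tail."""
            k_max = k_max or min(n, t + 200)
            return sum(exp(log_binom(n, k) + k * log(ber) + (n - k) * log(1 - ber))
                       for k in range(t + 1, k_max + 1))

        n = 8 * (1024 + 128)          # a 1 KB sector plus spare area, in bits (illustrative)
        for t in (8, 40):             # weak BCH-like vs. strong LDPC-like correction
            print(t, p_block_fail(n, t, ber=1e-4))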

  2. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify

  3. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An error control algorithm is employed for both the generation of round keys and the diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. The probability of retrieving the original key from any other round keys is reduced by diffusing nonlinear selective bit inversions on the round keys. Randomized selective bit inversions are done on equal lengths of key bits by a Round Constant Feedback Shift Register within the error correction limits of the chosen code. The complexity of retrieving the original key from any other round keys is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan3E FPGA and results are shown. A comparative analysis is done between 128-bit Advanced Encryption Standard round keys and the proposed round keys to show the security strength of the proposed algorithm. This paper concludes that chip-based multi-layer key distribution using the proposed algorithm is an enhanced solution to the existing threats on cryptography algorithms.

  4. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
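
    The Viterbi baseline referred to above is compact to state. A minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 code with octal generators (7, 5); this is a generic illustration with full traceback, not the paper's fixed-delay variant:

        def encode(bits):
            """Rate-1/2 convolutional encoder, generators g0 = 111 and g1 = 101 (octal 7, 5)."""
            s1 = s2 = 0
            out = []
            for b in bits:
                out += [b ^ s1 ^ s2, b ^ s2]
                s1, s2 = b, s1
            return out

        def viterbi(rx):
            """Hard-decision Viterbi decoding; state = (s1, s2), metric = Hamming distance."""
            INF = 10**9
            metric = [0, INF, INF, INF]          # start in the all-zero state
            paths = [[] for _ in range(4)]
            for i in range(0, len(rx), 2):
                r0, r1 = rx[i], rx[i + 1]
                new_metric, new_paths = [INF] * 4, [None] * 4
                for state in range(4):
                    if metric[state] >= INF:
                        continue
                    s1, s2 = state >> 1, state & 1
                    for b in (0, 1):
                        m = metric[state] + ((b ^ s1 ^ s2) != r0) + ((b ^ s2) != r1)
                        nxt = (b << 1) | s1
                        if m < new_metric[nxt]:
                            new_metric[nxt] = m
                            new_paths[nxt] = paths[state] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(4), key=metric.__getitem__)]

        msg = [1, 0, 1, 1, 0, 0]
        assert viterbi(encode(msg)) == msg       # an error-free channel decodes exactly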

  6. Tri-code inductance control rod position indicator with several multi-coding-bars

    International Nuclear Information System (INIS)

    Shi Jibin; Jiang Yueyuan; Wang Wenran

    2004-01-01

    A control rod position indicator named the tri-code inductance control rod position indicator with multi-coding-bars, which possesses simple structure, reliable operation and high precision, is developed. The detector of the indicator is composed of K coils, a compensatory coil and K coding bars. Each coding bar consists of several sections of strong magnetic cores, several sections of weak magnetic cores and several sections of non-magnetic portions. As the control rod is withdrawn, the coding bars move in the centers of the coils respectively; a constant alternating current passes through the coils and makes them create inductance alternating-voltage signals. The outputs of the coils are picked up and processed, and the tri-codes indicating the rod position are obtained. Moreover, the coding principle of the detector and its related structure are introduced. The analysis shows that the indicator has advantages over the coil-coding rod position indicator, so it can meet the demands of rod position indication in the nuclear heating reactor (NHR). (authors)

  7. Statistical method for quality control in presence of measurement errors

    International Nuclear Information System (INIS)

    Lauer-Peccoud, M.R.

    1998-01-01

    In a quality inspection of a set of items where the measurements of values of a quality characteristic of the item are contaminated by random errors, one can take wrong decisions which are damageable to the quality. So it is important to control the risks in such a way that a final quality level is ensured. We consider that an item is defective or not according to whether the value G of its quality characteristic is larger or smaller than a given level g. We assume that, due to the lack of precision of the measurement instrument, the measurement M of this characteristic is expressed by f(G) + ξ, where f is an increasing function such that the value f(g0) is known and ξ is a random error with mean zero and given variance. First we study the problem of the determination of a critical measure m such that a specified quality target is reached after the classification of a lot of items where each item is accepted or rejected depending on whether its measurement is smaller or greater than m. Then we analyse the problem of testing the global quality of a lot from the measurements for a sample of items taken from the lot. For these two kinds of problems and for different quality targets, we propose solutions emphasizing the case where the function f is linear and the error ξ and the variable G are Gaussian. Simulation results allow one to appreciate the efficiency of the different considered control procedures and their robustness with respect to deviations from the assumptions used in the theoretical derivations. (author)
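
    In the Gaussian, linear-f case the two risks can be estimated directly by simulation. A toy sketch (all numbers invented) of the misclassification rates induced by accepting an item whenever M is below a critical measure m:

        import numpy as np

        rng = np.random.default_rng(0)

        mu, sigma_G = 10.0, 1.0        # G ~ N(mu, sigma_G^2), invented parameters
        a, c, sigma_e = 2.0, 0.5, 0.4  # linear f(G) = a*G + c, error xi ~ N(0, sigma_e^2)
        g = 11.0                       # defect threshold on the true characteristic
        m = a * g + c                  # naive critical measure on the measurement scale

        G = rng.normal(mu, sigma_G, 1_000_000)
        M = a * G + c + rng.normal(0.0, sigma_e, G.size)   # M = f(G) + xi
        defective = G > g
        p_false_accept = np.mean(M[defective] <= m)        # defective items passed as good
        p_false_reject = np.mean(M[~defective] > m)        # good items wrongly rejected
        print(p_false_accept, p_false_reject)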

  8. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    International Nuclear Information System (INIS)

    Aljneibi, Hanan Salah Ali; Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun

    2015-01-01

    To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, which constitute the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in ACRs. Operations using soft controls require operators to perform new tasks which were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These kinds of computerized displays and soft controls may make operations more convenient, but they might cause new types of human error. In this study the human error mechanism during the use of soft controls is studied and modeled to be used for analysis and enhancement of human performance (or human errors) during NPP operation. The developed model would contribute to many applications to improve human performance (or reduce human errors), HMI designs, and operators' training programs in ACRs. The developed model of the human error mechanism for soft control is based on the assumptions that a human operator has a certain amount of capacity in cognitive resources and, if the resources required by operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur (especially a 'slip'); good HMI (human-machine interface) design decreases the required resources; an operator's skillfulness decreases the required resources; and high vigilance increases the invested resources. In this study the human error mechanism during the use of soft controls is studied and modeled to be used for analysis and enhancement of human performance (or reduction of human errors) during NPP operation

  10. Error analysis of acceleration control loops of a synchrotron

    International Nuclear Information System (INIS)

    Zhang, S.Y.; Weng, W.T.

    1991-01-01

    For beam control during acceleration, it is conventional to derive the frequency from an external reference, be it a field marker or an external oscillator, to provide phase and radius feedback loops to ensure the phase stability, radial position and emittance integrity of the beam. The open and closed loop behaviors of both feedback control and their response under the possible frequency, phase and radius errors are derived from fundamental principles and equations. The stability of the loops is investigated under a wide range of variations of the gain and time delays. Actual system performance of the AGS Booster is analyzed and compared to commissioning experiences. Such analysis is useful for setting design criteria and tolerances for new proton synchrotrons. 4 refs., 13 figs

  11. Error correcting code with chip kill capability and power saving enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY

    2011-08-30

    A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
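
    The all-zero-syndrome test at the heart of this scheme can be seen in miniature with a single-error-correcting Hamming(7,4) code; this bit-level toy is only an analogue, since the patent operates on multi-bit symbols and adds discriminator expressions for double-symbol errors:

        import numpy as np

        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])  # column j is the binary expansion of j+1

        def check_and_correct(word):
            s = H @ word % 2
            if not s.any():                    # all syndromes zero: data accepted as-is
                return word, None
            pos = int(s[0] + 2 * s[1] + 4 * s[2]) - 1   # syndrome spells the error position
            fixed = word.copy()
            fixed[pos] ^= 1
            return fixed, pos

        codeword = np.array([0, 1, 1, 0, 0, 1, 1])      # a valid Hamming(7,4) codeword
        received = codeword.copy()
        received[4] ^= 1                                # inject a single-bit error
        fixed, pos = check_and_correct(received)
        assert (fixed == codeword).all() and pos == 4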

  12. Control of error and convergence in ODE solvers

    International Nuclear Information System (INIS)

    Gustafsson, K.

    1992-03-01

    Feedback is a general principle that can be used in many different contexts. In this thesis it is applied to numerical integration of ordinary differential equations. An advanced integration method includes parameters and variables that should be adjusted during the execution. In addition, the integration method should be able to automatically handle situations such as initialization, restart after failures, etc. In this thesis we regard the algorithms for parameter adjustment and supervision as a controller. The controller measures different variables that tell the current status of the integration, and based on this information it decides how to continue. The design of the controller is vital in order to accurately and efficiently solve a large class of ordinary differential equations. The application of feedback control may appear far-fetched, but numerical integration methods are in fact dynamical systems. This is often overlooked in traditional numerical analysis. We derive dynamic models that describe the behavior of the integration method as well as the standard control algorithms in use today. Using these models it is possible to analyze properties of current algorithms, and also explain some generally observed misbehaviors. Further, we use the acquired insight to derive new and improved control algorithms, both for explicit and implicit Runge-Kutta methods. In the explicit case, the new controller gives good overall performance. In particular it overcomes the problem with an oscillating stepsize sequence that is often experienced when the stepsize is restricted by numerical stability. The controller for implicit methods is designed so that it tracks changes in the differential equation better than current algorithms. In addition, it includes a new strategy for the equation solver, which allows the stepsize to vary more freely. This leads to smoother error control without excessive operations on the iteration matrix. (87 refs.) (au)
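
    The controller viewpoint leads directly to the PI step-size rule associated with this line of work: the error estimate is the measured signal, the step size the control variable. A sketch using the commonly quoted gains (roughly 0.3/k integral and 0.4/k proportional for an order-k error estimator):

        def pi_step(h, err, err_prev, tol, k, fac_min=0.1, fac_max=5.0):
            """PI step-size controller: integral action tracks tol, proportional action damps oscillations."""
            fac = (tol / err) ** (0.3 / k) * (err_prev / err) ** (0.4 / k)
            return h * min(fac_max, max(fac_min, fac))

        # schematic use inside an integrator loop:
        #   err = norm(local_error_estimate)        # e.g. from an embedded Runge-Kutta pair
        #   accept the step if err <= tol
        #   h = pi_step(h, err, err_prev, tol, k=5)
        #   err_prev = err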

  13. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Slgorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
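
    For reference, the tau-leap discretization itself fits in a few lines: per step, each reaction channel fires a Poisson number of times with mean equal to its propensity times tau. A sketch on an invented birth-death process (the forward simulator the error expansion analyzes, not the expansion itself):

        import numpy as np

        rng = np.random.default_rng(1)

        def tau_leap(x0, stoich, propensities, tau, T):
            """Fixed-step tau-leap simulation of a pure jump process."""
            x, t = np.asarray(x0, dtype=float), 0.0
            while t < T:
                a = propensities(x)           # propensities a_j(x)
                k = rng.poisson(a * tau)      # firings of each reaction in (t, t + tau]
                x = np.maximum(x + stoich.T @ k, 0.0)
                t += tau
            return x

        # birth-death: {} -> X at rate 5, X -> {} at rate 0.2 * X (invented rates)
        stoich = np.array([[1.0], [-1.0]])    # rows: reactions, columns: species
        prop = lambda x: np.array([5.0, 0.2 * x[0]])
        print(tau_leap([0.0], stoich, prop, tau=0.05, T=100.0))  # fluctuates near 5 / 0.2 = 25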

  14. Pupil dilation indicates the coding of past prediction errors: Evidence for attentional learning theory.

    Science.gov (United States)

    Koenig, Stephan; Uengoer, Metin; Lachnit, Harald

    2018-04-01

    The attentional learning theory of Pearce and Hall (1980) predicts more attention to uncertain cues that have caused a high prediction error in the past. We examined how the cue-elicited pupil dilation during associative learning was linked to such error-driven attentional processes. In three experiments, participants were trained to acquire associations between different cues and their appetitive (Experiment 1), motor (Experiment 2), or aversive (Experiment 3) outcomes. All experiments were designed to examine differences in the processing of continuously reinforced cues (consistently followed by the outcome) versus partially reinforced, uncertain cues (randomly followed by the outcome). We measured the pupil dilation elicited by the cues in anticipation of the outcome and analyzed how this conditioned pupil response changed over the course of learning. In all experiments, changes in pupil size conformed to the same basic pattern: during early learning, consistently reinforced cues elicited greater pupil dilation than uncertain, randomly reinforced cues, but this effect gradually reversed to yield a greater pupil dilation for uncertain cues toward the end of learning. The pattern of data accords with the changes in prediction error and error-driven attention formalized by the Pearce-Hall theory. © 2017 The Authors. Psychophysiology published by Wiley Periodicals, Inc. on behalf of Society for Psychophysiological Research.
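
    The key quantity is the Pearce-Hall associability alpha, which tracks the absolute prediction error of recent trials. A toy simulation of the continuous-versus-partial reinforcement contrast in a common hybrid formalization (the learning rate S and decay gamma below are invented values):

        import numpy as np

        def pearce_hall(outcomes, S=0.5, gamma=0.3):
            """Track cue value V and associability alpha across trials."""
            V, alpha = 0.0, 1.0
            alphas = []
            for lam in outcomes:                  # lam = 1 reinforced, 0 omitted
                V += S * alpha * (lam - V)        # value update gated by attention
                alpha = gamma * abs(lam - V) + (1 - gamma) * alpha  # error-driven attention
                alphas.append(alpha)
            return np.array(alphas)

        rng = np.random.default_rng(0)
        a_certain = pearce_hall(np.ones(100))                            # continuously reinforced cue
        a_uncertain = pearce_hall(rng.integers(0, 2, 100).astype(float)) # 50% partial reinforcement
        print(a_certain[-1], a_uncertain[-1])  # alpha decays for the certain cue, stays high for the uncertain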

  15. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. The basic model (02 version), theoretical definitions and computation methods are described in this report. [fr]

  16. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response

    Directory of Open Access Journals (Sweden)

    Takahiro eSoshi

    2015-01-01

    Full Text Available Post-error slowing is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimulus disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but post-error slowing did not take place immediately. Delayed post-error slowing was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to post-error slowing. Stimulus-locked N2 was negatively correlated with post-error slowing and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater post-error slowing and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and post-error slowing did not occur quickly. Furthermore, post-error slowing and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke

  17. What Do Letter Migration Errors Reveal About Letter Position Coding in Visual Word Recognition?

    Science.gov (United States)

    Davis, Colin J.; Bowers, Jeffrey S.

    2004-01-01

    Dividing attention across multiple words occasionally results in misidentifications whereby letters apparently migrate between words. Previous studies have found that letter migrations preserve within-word letter position, which has been interpreted as support for position-specific letter coding. To investigate this issue, the authors used word…

  18. Bounded distance decoding of linear error-correcting codes with Gröbner bases

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.

    2009-01-01

    The problem of bounded distance decoding of arbitrary linear codes using Gröbner bases is addressed. A new method is proposed, which is based on reducing an initial decoding problem to solving a certain system of polynomial equations over a finite field. The peculiarity of this system is that, when…

  19. An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol

    2011-01-01

    In this work, a method was proposed for quantifying human errors that may occur during operation executions using soft controls. Soft controls of advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests a modified K-HRA method for quantifying error probability.

  20. Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials.

    Science.gov (United States)

    Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin; Cui, Tie Jun

    2017-09-01

    Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of a metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiation with a fixed spatial coding pattern as the frequency changes. In this case, not only the different phase responses of the unit cells but also their different phase sensitivities must be considered. Owing to the different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, a digitalized frequency sensitivity is proposed, in which the units are encoded with digits "0" and "1" to represent low and high phase sensitivities, respectively. In this way, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiation by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments.
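    As a rough numerical sketch of the frequency-coding idea above (all values below are assumed for illustration, not taken from the paper), each unit cell carries a spatial bit (0°/180° phase at a base frequency) and a frequency bit (low or high phase sensitivity); one fixed spatial pattern then yields different phase profiles at different frequencies:

```python
import numpy as np

# Hypothetical illustration: each unit cell carries a spatial bit
# (phase 0 or pi at the base frequency f0) and a frequency bit
# (low or high phase sensitivity, in rad/GHz). All numbers assumed.
f0, f1 = 8.0, 10.0                      # GHz, assumed band edges
sens = {0: 0.1, 1: 1.2}                 # assumed sensitivities, rad/GHz

spatial_bits = np.array([0, 0, 1, 1, 0, 1, 0, 1])
freq_bits    = np.array([0, 1, 0, 1, 0, 1, 1, 0])

def phase_profile(f):
    """Phase of each unit cell at frequency f (rad)."""
    base = spatial_bits * np.pi                    # spatial coding at f0
    drift = np.array([sens[b] for b in freq_bits]) * (f - f0)
    return (base + drift) % (2 * np.pi)

print(np.round(phase_profile(f0), 2))   # pattern encoded at f0
print(np.round(phase_profile(f1), 2))   # a different pattern emerges at f1
```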

  1. Measurement of Compression Factor and Error Sensitivity Factor of the Modified READ Facsimile Coding Technique.

    Science.gov (United States)

    1980-08-01

    Compression factor and error sensitivity together with statistical data have also been tabulated. This TIB is a companion document to NCS TIB's 79-7…

  2. A study of redundancy management strategy for tetrad strap-down inertial systems. [error detection codes

    Science.gov (United States)

    Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.

    1979-01-01

    Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.

  3. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    Science.gov (United States)

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
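    A minimal sketch of the error-correcting output codes technique on synthetic data, using scikit-learn's OutputCodeClassifier (the random features here are stand-ins, not the i2b2 discharge summaries):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

# Stand-in features for five classes (e.g. five smoking statuses)
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each class gets a binary code word; one binary classifier per code bit
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
pred = ecoc.predict(X_te)
print("micro-F1:", f1_score(y_te, pred, average="micro"))
print("macro-F1:", f1_score(y_te, pred, average="macro"))
```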

  4. Medical error disclosure: from the therapeutic alliance to risk management: the vision of the new Italian code of medical ethics.

    Science.gov (United States)

    Turillazzi, Emanuela; Neri, Margherita

    2014-07-15

    The Italian code of medical deontology recently approved stipulates that physicians have the duty to inform the patient of each unwanted event and its causes, and to identify, report and evaluate adverse events and errors. Thus the obligation to supply information continues to widen, in some way extending beyond the doctor-patient relationship to become an essential tool for improving the quality of professional services. The new deontological precepts intersect two areas in which the figure of the physician is paramount. On the one hand is the need for maximum integrity towards the patient, in the name of the doctor's own, and the other's (the patient's) dignity and liberty; on the other is the physician's developing role in the strategies of the health system to achieve efficacy, quality, reliability and efficiency, to reduce errors and adverse events and to manage clinical risk. In Italy, due to guidelines issued by the Ministry of Health and to the new code of medical deontology, the role of physicians becomes a part of a complex strategy of risk management based on a system focused approach in which increasing transparency regarding adverse outcomes and full disclosure of health-related negative events represent a key factor.

  5. Medical error disclosure: from the therapeutic alliance to risk management: the vision of the new Italian code of medical ethics

    Science.gov (United States)

    2014-01-01

    Background The Italian code of medical deontology recently approved stipulates that physicians have the duty to inform the patient of each unwanted event and its causes, and to identify, report and evaluate adverse events and errors. Thus the obligation to supply information continues to widen, in some way extending beyond the doctor-patient relationship to become an essential tool for improving the quality of professional services. Discussion The new deontological precepts intersect two areas in which the figure of the physician is paramount. On the one hand is the need for maximum integrity towards the patient, in the name of the doctor’s own, and the other’s (the patient’s) dignity and liberty; on the other is the physician’s developing role in the strategies of the health system to achieve efficacy, quality, reliability and efficiency, to reduce errors and adverse events and to manage clinical risk. Summary In Italy, due to guidelines issued by the Ministry of Health and to the new code of medical deontology, the role of physicians becomes a part of a complex strategy of risk management based on a system focused approach in which increasing transparency regarding adverse outcomes and full disclosure of health-related negative events represent a key factor. PMID:25023339

  6. Error Correcting Codes I. Applications of Elementary Algebra to Information Theory. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 346.

    Science.gov (United States)

    Rice, Bart F.; Wilde, Carroll O.

    It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…

  7. Effect of ancilla's structure on quantum error correction using the seven-qubit Calderbank-Shor-Steane code

    International Nuclear Information System (INIS)

    Salas, P.J.; Sanz, A.L.

    2004-01-01

    In this work we discuss the ability of different types of ancillas to control the decoherence of a qubit interacting with an environment. The error is introduced into the numerical simulation via a depolarizing isotropic channel. The ranges of values considered are 10^-4 ≤ ε ≤ 10^-2 for memory errors and 3×10^-5 ≤ γ/7 ≤ 10^-2 for gate errors. After the correction we calculate the fidelity as a quality criterion for the qubit recovered. We observe that a recovery method with a three-qubit ancilla provides reasonably good results bearing in mind its economy. If we want to go further, we have to use fault-tolerant ancillas with a high degree of parallelism, even if this condition implies introducing additional ancilla verification qubits.
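    The depolarizing isotropic channel used in such simulations can be written as ρ → (1-ε)ρ + εI/2. A minimal single-qubit sketch (illustrative only, not the seven-qubit CSS simulation of the paper) shows how fidelity degrades over the quoted memory-error range:

```python
import numpy as np

def depolarize(rho, eps):
    """Isotropic depolarizing channel: rho -> (1-eps)*rho + eps*I/2."""
    return (1 - eps) * rho + eps * np.eye(2) / 2

# |psi> = cos(t)|0> + sin(t)|1>, an arbitrary pure qubit state
t = 0.3
psi = np.array([np.cos(t), np.sin(t)])
rho = np.outer(psi, psi.conj())

for eps in (1e-4, 1e-3, 1e-2):          # spans the quoted memory-error range
    f = np.real(psi.conj() @ depolarize(rho, eps) @ psi)
    print(f"eps={eps:.0e}  fidelity={f:.6f}")   # F = 1 - eps/2 for pure states
```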

  8. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
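    The mode-selection idea can be caricatured as budget-driven tier selection. The sketch below uses invented mode tiers and relative costs (the paper's offline statistics are not reproduced) to show how a complexity target constrains the candidate prediction modes per frame:

```python
# Hypothetical mode tiers with assumed relative costs and an assumed
# offline-derived preference order (cheapest sufficient set first).
MODE_SETS = [                       # (relative cost, candidate modes)
    (0.10, ["MERGE/SKIP"]),
    (0.40, ["MERGE/SKIP", "2Nx2N"]),
    (1.00, ["MERGE/SKIP", "2Nx2N", "NxN", "AMP", "intra"]),
]

def pick_mode_set(budget_left, frames_left):
    """Choose the richest mode set whose cost fits the per-frame share."""
    share = budget_left / max(frames_left, 1)
    chosen = MODE_SETS[0]
    for cost, modes in MODE_SETS:
        if cost <= share:
            chosen = (cost, modes)
    return chosen

budget, n_frames = 0.4 * 100, 100      # e.g. a 40% complexity target
for i in range(3):
    cost, modes = pick_mode_set(budget, n_frames - i)
    budget -= cost
    print(f"frame {i}: modes={modes} (cost {cost})")
```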

  9. Sensitivity Analysis and Error Control for Computational Aeroelasticity, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposal is the development of a next-generation computational aeroelasticity code, suitable for real-world complex geometries, and...

  10. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response.

    Science.gov (United States)

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2014-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.
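    PES itself is a simple quantity: the reaction-time difference between trials following errors and trials following correct responses, optionally computed at longer lags to capture the delayed effect reported here. A small sketch with simulated data (the RT values are assumed, not the study's):

```python
import numpy as np

def post_error_slowing(rts, correct, lag=1):
    """Mean RT difference between trials `lag` after an error and
    trials `lag` after a correct response (positive = slowing)."""
    rts, correct = np.asarray(rts, float), np.asarray(correct, bool)
    idx = np.arange(len(rts) - lag)
    after_err = rts[idx + lag][~correct[idx]]
    after_cor = rts[idx + lag][correct[idx]]
    return after_err.mean() - after_cor.mean()

rng = np.random.default_rng(0)
correct = rng.random(2000) > 0.15            # ~15% commission errors
rts = rng.normal(350, 40, 2000)              # assumed RT distribution (ms)
rts[1:][~correct[:-1]] += 25                 # inject 25 ms slowing at lag 1
print(f"PES at lag 1: {post_error_slowing(rts, correct, 1):.1f} ms")
print(f"PES at lag 2: {post_error_slowing(rts, correct, 2):.1f} ms")
```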

  11. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to the impulse noise.
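    The FEC side of such a scheme can be illustrated with the simplest possible code: a single XOR parity packet that lets a receiver (or a repair peer) rebuild one lost packet without retransmission. This toy sketch is not the protocol from the paper:

```python
from functools import reduce

def xor_parity(packets):
    """Single XOR parity over equal-length packets (simplest FEC)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                        packets))

def repair(received, parity):
    """Recover one lost packet (marked None) from the rest plus parity."""
    lost = received.index(None)
    rest = [p for p in received if p is not None] + [parity]
    return lost, xor_parity(rest)

packets = [b"IPTV", b"over", b"DSL!", b"FEC."]
parity = xor_parity(packets)
got = [packets[0], None, packets[2], packets[3]]   # impulse noise drops #1
i, fixed = repair(got, parity)
print(f"repaired packet {i}: {fixed}")             # b'over'
```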

  12. Internal Corrosion Control of Water Supply Systems Code of Practice

    Science.gov (United States)

    This Code of Practice is part of a series of publications by the IWA Specialist Group on Metals and Related Substances in Drinking Water. It complements the following IWA Specialist Group publications: 1. Best Practice Guide on the Control of Lead in Drinking Water 2. Best Prac...

  13. Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments

    Science.gov (United States)

    Forster, Sarah E.; Cho, Raymond Y.

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900

  14. Code Development for Control Design Applications: Phase I: Structural Modeling

    International Nuclear Information System (INIS)

    Bir, G. S.; Robinson, M.

    1998-01-01

    The design of integrated controls for a complex system like a wind turbine relies on a system model in an explicit format, e.g., state-space format. Current wind turbine codes focus on turbine simulation and not on system characterization, which is desired for controls design as well as applications like operating turbine model analysis, optimal design, and aeroelastic stability analysis. This paper reviews structural modeling that comprises three major steps: formation of component equations, assembly into system equations, and linearization.

  15. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design…

  16. Measuring and test equipment control through bar-code technology

    International Nuclear Information System (INIS)

    Crockett, J.D.; Carr, C.C.

    1993-01-01

    Over the past several years, the use, tracking, and documentation of measuring and test equipment (M&TE) has become a major issue. New regulations are forcing companies to develop new policies for providing use history, traceability, and accountability of M&TE. This paper discusses how the Fast Flux Test Facility (FFTF), operated by Westinghouse Hanford Company and located at the Hanford site in Richland, Washington, overcame these obstacles by using a computerized system exercising bar-code technology. A data base was developed to identify M&TE containing 33 separate fields, such as manufacturer, model, range, bar-code number, and other pertinent information. A bar-code label was attached to each piece of M&TE. A second data base was created to identify the employee using the M&TE. The fields contained pertinent user information such as name, location, and payroll number. Each employee's payroll number was bar coded and attached to the back of their identification badge. A computer program was developed to automate certain tasks previously performed and tracked by hand. Bar-code technology was combined with this computer program to control the input and distribution of information, eliminate common mistakes, electronically store information, and reduce the time required to check out the M&TE for use.
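    The two-database design described above maps naturally onto a pair of keyed tables plus an append-only history log. The sketch below (hypothetical field names, codes, and records) shows the scan-to-checkout flow:

```python
from datetime import datetime

# Hypothetical stand-ins for the two databases described above.
equipment = {"MTE-0042": {"manufacturer": "Fluke", "model": "87V",
                          "range": "1000 V", "checked_out_to": None}}
employees = {"EMP-7731": {"name": "J. Doe", "location": "Bldg 403"}}
history = []                                   # use-history / traceability log

def scan_checkout(equip_code, badge_code):
    """Simulate scanning an M&TE label followed by an employee badge."""
    item, user = equipment[equip_code], employees[badge_code]
    item["checked_out_to"] = badge_code
    history.append((datetime.now().isoformat(), equip_code, badge_code, "out"))
    return f"{item['model']} -> {user['name']}"

print(scan_checkout("MTE-0042", "EMP-7731"))
print(history)
```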

  17. Factors controlling volume errors through 2D gully erosion assessment: guidelines for optimal survey design

    Science.gov (United States)

    Castillo, Carlos; Pérez, Rafael

    2017-04-01

    The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, methods based on 2D approaches can be the most cost-effective option in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles in order to 1) contribute to a better understanding of the drivers and magnitude of the uncertainty of 2D gully erosion surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability); simulation of field measurements characterised by a survey intensity and the precision of the measurement method; and quantification of the volume error uncertainty as a function of the key factors. In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey…
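    A stripped-down version of such a stochastic experiment (synthetic area profile and all parameters assumed here, not the authors' Matlab code) estimates how the dispersion of the volume error shrinks as survey intensity grows:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 100.0                                   # reach length, m (assumed)
x = np.linspace(0, L, 2001)
# Synthetic cross-section area profile (m^2) with along-reach variability
area = 2.0 + 0.8 * np.sin(2 * np.pi * x / 35) + 0.3 * np.sin(2 * np.pi * x / 7)
true_vol = area.mean() * L                  # reference volume on a fine grid

def survey_error(n_sections, n_trials=2000):
    """SD of relative volume error when estimating from n random sections."""
    errs = []
    for _ in range(n_trials):
        xs = np.sort(rng.uniform(0, L, n_sections))
        est = np.interp(xs, x, area).mean() * L     # mean-area estimator
        errs.append((est - true_vol) / true_vol)
    return np.std(errs)

for n in (3, 5, 10, 20):
    print(f"{n:2d} sections: sd of relative error = {survey_error(n):.3f}")
```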

  18. Error Correcting Codes

    Indian Academy of Sciences (India)

    successful consumer products of all time - the Compact Disc (CD) digital audio… We can make… only 2t additional parity check symbols are required, to be able to correct t… display information (containing music-related data and a table…

  19. Quantitative estimation of the human error probability during soft control operations

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jung, Wondea

    2013-01-01

    Highlights: ► An HRA method to evaluate execution HEP for soft control operations was proposed. ► The soft control tasks were analyzed and design-related influencing factors were identified. ► An application to evaluate the effects of soft controls was performed. - Abstract: In this work, a method was proposed for quantifying human errors that can occur during operation executions using soft controls. Soft controls of advanced main control rooms have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to identify the human error modes and quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests an evaluation framework for quantifying the execution error probability using soft controls. In the application result, it was observed that the human error probabilities of soft controls showed both positive and negative results compared to the conventional controls according to the design quality of advanced main control rooms

  20. Residual-based Methods for Controlling Discretization Error in CFD

    Science.gov (United States)

    2015-08-24

    [Equation (25), a quadrature formula garbled in extraction] where J is the Jacobian of the coordinate transformation and the weights can be found from… Cited works include Layton, W., Lee, H.K., and Peterson, J. (2002), "A Defect-Correction Method for the Incompressible Navier-Stokes Equations," Applied Mathematics and Computation, Vol. 129, pp. 1-19, and Lee, D. and Tsuei, Y.M. (1992), "A Formula for Estimation of Truncation Errors of Convective Terms in a…

  1. Computer codes for evaluation of control room habitability (HABIT)

    International Nuclear Information System (INIS)

    Stage, S.A.

    1996-06-01

    This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs

  2. Coupling of system thermal–hydraulics and Monte-Carlo code: Convergence criteria and quantification of correlation between statistical uncertainty and coupled error

    International Nuclear Information System (INIS)

    Wu, Xu; Kozlowski, Tomasz

    2015-01-01

    Highlights: • Coupling of Monte Carlo code Serpent and thermal–hydraulics code RELAP5. • A convergence criterion is developed based on the statistical uncertainty of power. • Correlation between MC statistical uncertainty and coupled error is quantified. • Both UO2 and MOX single assembly models are used in the coupled simulation. • Validation of coupling results with a multi-group transport code DeCART. - Abstract: Coupled multi-physics approach plays an important role in improving computational accuracy. Compared with deterministic neutronics codes, Monte Carlo codes have the advantage of a higher resolution level. In the present paper, a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, Serpent, is coupled with a thermal–hydraulics safety analysis code, RELAP5. The coupled Serpent/RELAP5 code capability is demonstrated by the improved axial power distribution of UO2 and MOX single assembly models, based on the OECD-NEA/NRC PWR MOX/UO2 Core Transient Benchmark. Comparisons of calculation results using the coupled code with those from the deterministic methods, specifically heterogeneous multi-group transport code DeCART, show that the coupling produces more precise results. A new convergence criterion for the coupled simulation is developed based on the statistical uncertainty in power distribution in the Monte Carlo code, rather than ad-hoc criteria used in previous research. The new convergence criterion is shown to be more rigorous, equally convenient to use but requiring a few more coupling steps to converge. Finally, the influence of Monte Carlo statistical uncertainty on the coupled error of power and thermal–hydraulics parameters is quantified. The results are presented such that they can be used to find the statistical uncertainty to use in Monte Carlo in order to achieve a desired precision in coupled simulation.
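    The essence of a statistics-based convergence criterion is to stop iterating once the change between successive coupled solutions is indistinguishable from the Monte Carlo noise. A schematic sketch (a stand-in update rule and an assumed 1% statistical uncertainty, not the actual Serpent/RELAP5 coupling):

```python
import numpy as np

def converged(p_new, p_old, sigma, k=3.0):
    """Stop when the iteration-to-iteration change in the power
    distribution is within k standard deviations of the MC noise."""
    return np.all(np.abs(p_new - p_old) <= k * sigma)

rng = np.random.default_rng(2)
p_old = np.ones(20)                       # normalized axial power, 20 nodes
sigma = 0.01 * p_old                      # assumed 1% statistical uncertainty
for it in range(1, 50):
    # stand-in for one coupled exchange: damped update plus MC noise
    p_new = p_old + 0.5 ** it * 0.1 + rng.normal(0, sigma)
    if converged(p_new, p_old, sigma):
        print(f"converged after {it} coupled iterations")
        break
    p_old = p_new
```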

  3. Disturbance Error Reduction in Multivariable Optimal Control Systems

    Directory of Open Access Journals (Sweden)

    Ole A. Solheim

    1983-01-01

    Full Text Available The paper deals with the design of optimal multivariable controllers, using a modified LQR approach. All controllers discussed contain proportional feedback and, in addition, there may be feedforward, integral action or state estimation.

  4. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice.

  5. Human error recovery failure probability when using soft controls in computerized control rooms

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Wondea

    2014-01-01

    Many literatures have categorized the recovery process into three phases: detection of the problem situation, explanation of problem causes or countermeasures against the problem, and end of recovery. Although the focus of recovery promotion has been on categorizing recovery phases and modeling the recovery process, research related to human recovery failure probabilities has not been performed actively. On the other hand, a few studies regarding recovery failure probabilities have been implemented empirically. In summary, the research performed so far has several problems in terms of use in human reliability analysis (HRA). By adopting new human-system interfaces that are based on computer-based technologies, the operation environment of MCRs in NPPs has changed from conventional MCRs to advanced MCRs. Because of the different interfaces between conventional and advanced MCRs, different recovery failure probabilities should be considered in the HRA for advanced MCRs. Therefore, this study carries out an empirical analysis of human error recovery probabilities under an advanced MCR mockup called the compact nuclear simulator (CNS). The aim of this work is not only to compile a recovery failure probability database using the simulator for advanced MCRs but also to collect recovery failure probabilities according to defined human error modes, to compare which human error mode has the highest recovery failure probability. The results show that the recovery failure probability for wrong screen selection was the lowest among human error modes, which means that most human errors related to wrong screen selection can be recovered. On the other hand, the recovery failure probabilities of operation selection omission and delayed operation were 1.0. These results imply that once subjects omitted one task in the procedure, they had difficulty finding and recovering from their errors without a supervisor's assistance. Also, wrong screen selection had an effect on delayed operation. That is, wrong screen…
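    Turning simulator observations into recovery failure probabilities is, at its core, interval estimation from counts. The sketch below uses invented counts chosen only to mirror the qualitative findings (wrong screen selection lowest; omission and delayed operation at 1.0):

```python
import math

def recovery_failure_prob(unrecovered, total, z=1.645):
    """Point estimate and ~90% Wilson interval for a recovery
    failure probability from observation counts."""
    p = unrecovered / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, max(centre - half, 0.0), min(centre + half, 1.0)

# Assumed illustrative counts per error mode (not the study's data)
for mode, (fail, n) in {"wrong screen selection": (2, 96),
                        "operation selection omission": (30, 30),
                        "delayed operation": (14, 14)}.items():
    p, lo, hi = recovery_failure_prob(fail, n)
    print(f"{mode:30s} p={p:.2f}  90% CI [{lo:.2f}, {hi:.2f}]")
```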

  6. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.

  7. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
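    The practical use of such an error representation is adaptive time stepping: refine the mesh where the computable error density is large. A generic bisection sketch, in which the density function is an assumed stand-in for the quantity built from the primal and dual Symplectic Euler solutions:

```python
import numpy as np

def adapt_mesh(t0, t1, density, tol, n0=8, max_iter=20):
    """Bisect intervals until each local contribution of the
    (computable) error density falls below an equidistributed share."""
    t = np.linspace(t0, t1, n0 + 1)
    for _ in range(max_iter):
        dt = np.diff(t)
        mid = 0.5 * (t[:-1] + t[1:])
        local = np.abs(density(mid)) * dt**2        # leading-order indicator
        bad = local > tol / len(dt)
        if not bad.any():
            return t
        t = np.sort(np.concatenate([t, mid[bad]]))  # bisect offenders
    return t

# Assumed error density peaking near t = 0.8 (purely illustrative)
rho = lambda s: 1.0 + 50.0 * np.exp(-200 * (s - 0.8) ** 2)
mesh = adapt_mesh(0.0, 1.0, rho, tol=1e-3)
print(f"{len(mesh) - 1} steps; smallest dt = {np.diff(mesh).min():.5f}")
```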

  8. Continuous fractional-order Zero Phase Error Tracking Control.

    Science.gov (United States)

    Liu, Lu; Tian, Siyuan; Xue, Dingyu; Zhang, Tao; Chen, YangQuan

    2018-04-01

    A continuous time fractional-order feedforward control algorithm for tracking desired time varying input signals is proposed in this paper. The presented controller cancels the phase shift caused by the zeros and poles of the controlled closed-loop fractional-order system, so it is called the Fractional-Order Zero Phase Error Tracking Controller (FZPETC). The controlled systems are divided into two categories, i.e., with and without non-cancellable (non-minimum-phase) zeros, which lie in the unstable region or on the stability boundary. Each kind of system has a targeted FZPETC design strategy. The improved tracking performance has been evaluated successfully by applying the proposed controller to three different kinds of fractional-order controlled systems. Besides, a modified quasi-perfect tracking scheme is presented for those systems which may not have future tracking trajectory information available or which have problems with high-frequency disturbance rejection if the perfect tracking algorithm is applied. A simulation comparison and a hardware-in-the-loop thermal peltier platform are shown to validate the practicality of the proposed quasi-perfect control algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  9. A posteriori error analysis for hydro-mechanical couplings and implementation in Code-Aster; Analyse d'erreur a posteriori pour les couplages hydro-mecaniques et mise en oeuvre dans Code-Aster

    Energy Technology Data Exchange (ETDEWEB)

    Meunier, S

    2007-11-15

    We analyse approximations by finite elements in space and finite differences in time of coupled hydro-mechanical (HM) problems related to the quasi-static linear poro-elasticity theory. The physical bases of this theory are briefly restated and an abstract setting is proposed to perform the mathematical study of the stationary and non-stationary versions of the HM problem. For the stationary version, the well-posedness of the continuous and discrete problems is established and the a priori error analysis is performed. Then, we propose the a posteriori error analysis by using two different techniques suited to estimate the displacement error and the pressure error, respectively, both in the H^1_x norm. The classical properties of reliability and optimality are proved for the associated error estimators. Some numerical experiments using Code-Aster illustrate the theoretical results. For the non-stationary version, we first establish a stability result for the continuous problem. Then, we present an optimal a priori error analysis using elliptic projection techniques. Finally, the a posteriori error analysis is performed by using two different approaches: a direct approach and an elliptic reconstruction approach. The first is suited to estimate the pressure error in the L^2_t(H^1_x) norm and the second is suited to estimate the displacement error in the L^∞_t(H^1_x) norm and the pressure error in the L^∞_t(H^1_x) norm. Numerical experiments using Code-Aster complete the theoretical results. (author)

  10. Method of laser beam coding for control systems

    Science.gov (United States)

    Pałys, Tomasz; Arciuch, Artur; Walczak, Andrzej; Murawski, Krzysztof

    2017-08-01

    The article presents a method of encoding a laser beam for control systems. The experiments were performed using a red laser source emitting at a wavelength of λ = 650 nm with a power of P ≈ 3 mW. The aim of the study was to develop methods of modulation and demodulation of the laser beam. Results of research in which we determined the effect of selected camera parameters, such as image resolution and number of frames per second, on the result of demodulation of the optical signal are also shown in the paper. The experiments showed that the adopted coding method provides sufficient information encoded in a single laser beam (36 codes with a decoding effectiveness of 99.9%).

  11. Analysis technique for controlling system wavefront error with active/adaptive optics

    Science.gov (United States)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
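    With a linear optics model in hand, computing actuator commands reduces to a least-squares fit of the actuator influence functions to the measured surface error. A numpy sketch with a random stand-in influence matrix (a tool such as SigFit would supply the real one from finite-element and optical analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_act = 500, 12
A = rng.normal(size=(n_nodes, n_act))         # stand-in influence functions
# Disturbance: mostly correctable (in the span of A) plus a residual part
e = A @ rng.normal(size=n_act) + 0.1 * rng.normal(size=n_nodes)

# Least-squares actuator commands minimizing the corrected WFE ||e + A x||
x, *_ = np.linalg.lstsq(A, -e, rcond=None)
resid = e + A @ x
print(f"RMS WFE: {e.std():.3f} -> {resid.std():.3f} waves")
```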

  12. Subdivision Error Analysis and Compensation for Photoelectric Angle Encoder in a Telescope Control System

    Directory of Open Access Journals (Sweden)

    Yanrui Su

    2015-01-01

    Full Text Available As the position sensor, the photoelectric angle encoder affects the accuracy and stability of the telescope control system (TCS). A TCS-based subdivision error compensation method for the encoder is proposed. First, six types of subdivision error sources are extracted through mathematical expressions of the subdivision signals. Then the period length relationships between subdivision signals and subdivision errors are deduced. An error compensation algorithm utilizing only the shaft position of the TCS is put forward, along with two control models: in Model I the algorithm is applied only to the speed loop of the TCS, and in Model II it is applied to both the speed loop and the position loop. In the actual project, the elevation jitter phenomenon of the telescope is discussed to decide the necessity of DC-type subdivision error compensation. Low-speed elevation performance before and after error compensation is compared, supporting the choice of Model II. In contrast to the original performance, the maximum position error of the elevation with DC subdivision error compensation is reduced by approximately 47.9%, from 1.42″ to 0.74″. Elevation jitter decreases substantially. This method can compensate the encoder subdivision errors effectively and improve the stability of the TCS.

  13. Quantum control using genetic algorithms in quantum communication: superdense coding

    International Nuclear Information System (INIS)

    Domínguez-Serna, Francisco; Rojas, Fernando

    2015-01-01

    We present a physical example model of how quantum control with genetic algorithms is applied to implement the quantum superdense coding protocol. We studied a model consisting of two quantum dots with an electron with spin, including spin-orbit interaction. The electron and the spin become hybridized with the site, acquiring two degrees of freedom: spin and charge. The system has tunneling and site energies as time-dependent control parameters that are optimized by means of genetic algorithms to prepare a hybrid Bell-like state used as a transmission channel. This state is transformed to obtain any state of the four-state Bell basis as required by the superdense protocol to transmit two bits of classical information. The control process protocol is equivalent to implementing one of the quantum gates in the charge subsystem. Fidelities larger than 99.5% are achieved for the hybrid entangled state preparation and the superdense operations. (paper)
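    The superdense coding target of the optimized control can be checked directly in the ideal limit (ignoring the dot/charge physics of the paper): each Pauli operation applied to one half of a Bell pair maps it to a distinct Bell state, carrying two classical bits. A minimal state-vector sketch:

```python
import numpy as np

# Pauli operations the sender applies to her half of the Bell pair;
# each choice encodes two classical bits.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
ops = {"00": I, "01": X, "10": Z, "11": X @ Z}

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
# Bell measurement basis used by the receiver
basis = {"00": np.array([1, 0, 0, 1]) / np.sqrt(2),
         "01": np.array([0, 1, 1, 0]) / np.sqrt(2),
         "10": np.array([1, 0, 0, -1]) / np.sqrt(2),
         "11": np.array([0, 1, -1, 0]) / np.sqrt(2)}

for bits, U in ops.items():
    state = np.kron(U, I) @ bell                     # encode on qubit 1
    decoded = max(basis, key=lambda b: abs(basis[b] @ state))
    print(bits, "->", decoded)                       # each pair round-trips
```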

  14. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    Science.gov (United States)

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. It is the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
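    The convergence-to-a-neighborhood behavior can be seen in miniature with value iteration on a toy MDP in which a bounded error is injected at every update (all dynamics and costs below are invented); the iterates settle within roughly ε/(1−γ) of the optimum:

```python
import numpy as np

# Tiny MDP: 2 states, 2 actions; assumed transition kernel and costs.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.1, 0.9]]])      # P[a, s, s']
c = np.array([[1.0, 2.0], [0.5, 3.0]])        # c[a, s]
gamma, eps = 0.95, 0.01                       # discount, approximation error

rng = np.random.default_rng(6)
V = np.zeros(2)
for _ in range(300):
    Q = c + gamma * (P @ V)                   # Q[a, s]
    V = Q.min(axis=0) + rng.uniform(-eps, eps, 2)   # inexact evaluation
# the iterates hover in a neighborhood of V* of radius ~ eps / (1 - gamma)
print("V ≈", np.round(V, 3), " neighborhood radius ≈", eps / (1 - gamma))
```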

  15. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system

  16. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2015-01-01

    HRA methods such as the Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H) have been used in relation to NPP maintenance and operation. Most of these methods were developed considering the conventional type of main control rooms (MCRs). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been considerably changed by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features in advanced MCRs, soft controls are an important feature because the operation actions in NPP advanced MCRs are performed by soft controls. Consequently, those conventional methods may not sufficiently consider the features of soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and literature reviews regarding widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors which an HRA method in advanced MCRs should encompass are derived based on the literature review and soft control task analysis. Based on the derived factors, an execution HRA framework in advanced MCRs is developed, mainly focusing on the features of soft control. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an HRA database is developed under lab-scale simulation…

  17. Artificial neural network implementation of a near-ideal error prediction controller

    Science.gov (United States)

    Mcvey, Eugene S.; Taylor, Lynore Denise

    1992-01-01

    A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controllers include pattern recognition developments and fast-time simulation which are applicable to the analysis of linear sampled data type systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using neural networks. Neural networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error…

  18. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  19. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation methods of acquiring the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory track error of the AFC scheme. The knowledge is developed from the trajectory track error characteristic based on the previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, in which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory track error significantly, even in the presence of the introduced disturbances. Key Words: Active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  20. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    Science.gov (United States)

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. SPC protocol focuses on controlling the variability due to assignable cause instead of focusing on patient-to-patient variability which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set-up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier España. All rights reserved.
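    The subgroup scheme described (three patients, three times per shift) is a textbook X-bar/R chart. A sketch with simulated set-up errors shows how control limits are derived and a drifted process is flagged; the Shewhart constants A2 and D4 for n = 3 are standard, while the data are invented:

```python
import numpy as np

A2, D4 = 1.023, 2.574          # Shewhart constants for subgroups of n = 3

def xbar_r_limits(subgroups):
    """X-bar and R chart limits from subgroups of 3 set-up errors (mm)."""
    sg = np.asarray(subgroups, float)
    xbar, r = sg.mean(axis=1), sg.max(axis=1) - sg.min(axis=1)
    xbb, rbar = xbar.mean(), r.mean()
    return (xbb - A2 * rbar, xbb + A2 * rbar), (0.0, D4 * rbar)

rng = np.random.default_rng(4)
baseline = rng.normal(0.0, 1.5, size=(25, 3))      # assumed ml set-up errors
(lcl, ucl), (_, rucl) = xbar_r_limits(baseline)
print(f"X-bar chart: LCL={lcl:.2f} mm  UCL={ucl:.2f} mm")
print(f"R chart:     UCL={rucl:.2f} mm")

new = rng.normal(0.0, 1.5, 3) + 4.0                # drifted set-up process
if not lcl <= new.mean() <= ucl:
    print("out of control: interrupt treatment and find the root cause")
```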

  1. UNICOS CPC6: automated code generation for process control applications

    International Nuclear Information System (INIS)

    Fernandez Adiego, B.; Blanco Vinuela, E.; Prieto Barreiro, I.

    2012-01-01

    The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS). As a part of this framework, UNICOS-CPC provides a well-defined library of device types, a methodology and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) to develop CPC applications. The CPC component is composed of several platform-oriented plug-ins (PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package where both the library of device types and the generated file syntax are defined. The UAB core is the generic part of this software; it discovers and calls dynamically the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is introduced. It is composed of several plug-ins: the Instance generator and the Logic generator for both Siemens and Schneider PLCs, the SCADA generator (based on PVSS) and the CPC wizard, a dedicated plug-in created to provide the user a friendly GUI (Graphical User Interface). A tool called UAB Bootstrap will manage the different UAB components, like CPC, and their dependencies with the resource packages. This tool guides the control system developer during the installation, update and execution of the UAB components. (authors)

  2. The accuracy of webcams in 2D motion analysis: sources of error and their control

    International Nuclear Information System (INIS)

    Page, A; Candelas, P; Belmar, F; Moreno, R

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost

  3. The accuracy of webcams in 2D motion analysis: sources of error and their control

    Energy Technology Data Exchange (ETDEWEB)

    Page, A; Candelas, P; Belmar, F [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, Valencia (Spain); Moreno, R [Instituto de Biomecanica de Valencia, Valencia (Spain)], E-mail: alvaro.page@ibv.upv.es

    2008-07-15

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost.
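    One of the simplest error sources, reference-scale calibration from marker detection noise, can be propagated by simulation. With the assumed values below (±0.5 px detection noise over an ~800 px baseline), the relative error indeed lands near the 0.1% figure quoted:

```python
import numpy as np

# Two reference marks a known 500.0 mm apart, located in the image with
# an assumed +/-0.5 px detection uncertainty (illustrative values only).
rng = np.random.default_rng(5)
true_mm, px_sigma = 500.0, 0.5
p1, p2 = np.array([102.0, 240.0]), np.array([922.0, 244.0])

scales = []
for _ in range(10000):
    q1 = p1 + rng.normal(0, px_sigma, 2)
    q2 = p2 + rng.normal(0, px_sigma, 2)
    scales.append(true_mm / np.linalg.norm(q2 - q1))   # mm per pixel
rel_err = np.std(scales) / np.mean(scales)
print(f"relative scale error ≈ {100 * rel_err:.3f}%")  # on the order of 0.1%
```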

  4. Analysis of Human Error Types and Performance Shaping Factors in the Next Generation Main Control Room

    International Nuclear Information System (INIS)

    Sin, Y. C.; Jung, Y. S.; Kim, K. H.; Kim, J. H.

    2008-04-01

    Main control rooms of nuclear power plants have been computerized and digitalized in new and modernized plants, as information and digital technologies make great progress and become mature. A survey on human factors engineering issues in advanced MCRs was conducted using two approaches: a model-based approach and a literature-survey-based approach. Three representative human error types and their performance shaping factors were then analyzed. The results of the project can be used for task analysis, evaluation of human error probabilities, and analysis of performance shaping factors in HRA analysis.

  5. Falls and Postural Control in Older Adults With Eye Refractive Errors

    Directory of Open Access Journals (Sweden)

    Afsun Nodehi-Moghadam

    2016-04-01

    Conclusion: Vision impairment of older adults due to refractive error is not associated with an increase in falls. Furthermore, TUG test results did not show balance disorders in these groups. Further research, such as assessment of postural control with advanced devices and consideration of other fall risk factors, is needed to identify the predictors of falls in older adults with eye refractive errors.

  6. An empirical study on the basic human error probabilities for NPP advanced main control room operation using soft control

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Harbi, Mohamed Ali Salem Al; Lee, Seung Jun; Kang, Hyun Gook; Seong, Poong Hyun

    2013-01-01

    Highlights: ► The operation environment of MCRs in NPPs has changed by adopting new HSIs. ► The operation action in NPP Advanced MCRs is performed by soft control. ► Different basic human error probabilities (BHEPs) should be considered. ► BHEPs in a soft control operation environment are investigated empirically. ► This work will be helpful to verify if soft control has positive or negative effects. -- Abstract: By adopting new human–system interfaces that are based on computer-based technologies, the operation environment of main control rooms (MCRs) in nuclear power plants (NPPs) has changed. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, soft controls, and so on, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are an important feature because the operation actions in NPP Advanced MCRs are performed by soft control. Using soft controls such as mouse control, touch screens, and so on, operators can select a specific screen, then choose the controller, and finally manipulate the devices. However, because of the different interfaces between soft control and hardwired conventional-type control, different basic human error probabilities (BHEPs) should be considered in the Human Reliability Analysis (HRA) for Advanced MCRs. Although there are many HRA methods to assess human reliabilities, such as the Technique for Human Error Rate Prediction (THERP), the Accident Sequence Evaluation Program (ASEP), the Human Error Assessment and Reduction Technique (HEART), the Human Event Repository and Analysis (HERA), the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR), the Cognitive Reliability and Error Analysis Method (CREAM), and so on, these methods have been applied to conventional MCRs, and they do not consider the new features of Advanced MCRs such as soft controls. As a result, there is an insufficient database for assessing human reliabilities in Advanced MCRs.

  7. EMG Versus Torque Control of Human-Machine Systems: Equalizing Control Signal Variability Does not Equalize Error or Uncertainty.

    Science.gov (United States)

    Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W

    2017-06-01

    In this paper we asked the question: if we artificially raise the variability of torque control signals to match that of EMG, do subjects make similar errors and have similar uncertainty about their movements? We answered this question using two experiments in which subjects used three different control signals: torque, torque+noise, and EMG. First, we measured error on a simple target-hitting task in which subjects received visual feedback only at the end of their movements. We found that even when the signal-to-noise ratio was equal across EMG and torque+noise control signals, EMG resulted in larger errors. Second, we quantified uncertainty by measuring the just-noticeable difference of a visual perturbation. We found that for equal errors, EMG resulted in higher movement uncertainty than both torque and torque+noise. The differences suggest that performance and confidence are influenced by more than just the noisiness of the control signal, and suggest that other factors, such as the user's ability to incorporate feedback and develop accurate internal models, also have significant impacts on the performance and confidence of a person's actions. We theorize that users have difficulty distinguishing between random and systematic errors for EMG control, and future work should examine in more detail the types of errors made with EMG control.

  8. Development of a framework to estimate human error for diagnosis tasks in advanced control room

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Jang, In Seok; Seong, Proong Hyun

    2014-01-01

    In emergency situations in nuclear power plants (NPPs), a diagnosis of the occurring events is crucial for managing or controlling the plant to a safe and stable condition. If the operators fail to diagnose the occurring events or relevant situations, their responses can eventually be inappropriate or inadequate. Accordingly, many studies have been performed to identify the causes of diagnosis errors and to estimate the probability of diagnosis error. D. I. Gertman et al. asserted that 'the cognitive failures stem from erroneous decision-making, poor understanding of rules and procedures, and inadequate problem solving and these failures may be due to quality of data and people's capacity for processing information'. Many researchers have also asserted that the human-system interface (HSI), procedures, training and available time are critical factors causing diagnosis errors. As advanced main control rooms are being adopted in nuclear power plants, the operators may obtain the plant data via computer-based HSIs and procedures. In this regard, using simulation data, diagnosis errors and their causes were identified. From this study, some useful insights to reduce the diagnosis errors of operators in advanced main control rooms were provided.

  9. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error is the difference between the true value and the measured value of a quantity that exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can come from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of control charts and obtained the values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
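
    As a concrete illustration of the power/ARL relationship discussed above, the sketch below computes the ARL of a simple Shewhart-type chart for a zero-truncated Poisson count. The control limit, parameter values and the mean shift standing in for a measurement-error effect are illustrative assumptions, not the paper's model.

    ```python
    # Sketch: ARL of a Shewhart-type chart for a zero-truncated Poisson (ZTP)
    # count. ARL = 1 / P(signal); here a signal is a count above the upper limit.
    import math

    def ztp_pmf(k, lam):
        """P(X = k) for a zero-truncated Poisson, k >= 1."""
        return math.exp(-lam) * lam**k / (math.factorial(k) * (1.0 - math.exp(-lam)))

    def arl(lam, ucl):
        """Average run length when the chart signals for counts above ucl."""
        p_signal = 1.0 - sum(ztp_pmf(k, lam) for k in range(1, ucl + 1))
        return math.inf if p_signal == 0.0 else 1.0 / p_signal

    print(arl(4.0, ucl=10))   # in-control ARL at lambda = 4
    print(arl(6.0, ucl=10))   # ARL after a shift, e.g. one induced by measurement bias
    ```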

  10. Masked and unmasked error-related potentials during continuous control and feedback

    Science.gov (United States)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the
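
    The classification figures quoted above (Cohen's kappa, TPR, TNR) all derive from a binary confusion matrix; a minimal sketch with hypothetical trial counts:

    ```python
    # Sketch: Cohen's kappa, TPR and TNR from a binary confusion matrix.
    # The counts below are hypothetical, not the study's data.
    def metrics(tp, fn, tn, fp):
        n = tp + fn + tn + fp
        tpr = tp / (tp + fn)                 # true positive rate (error trials caught)
        tnr = tn / (tn + fp)                 # true negative rate (correct trials kept)
        p_obs = (tp + tn) / n                # observed agreement
        p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
        kappa = (p_obs - p_exp) / (1 - p_exp)
        return kappa, tpr, tnr

    print(metrics(tp=45, fn=10, tn=270, fp=10))
    ```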

  11. UNICOS CPC6: Automated Code Generation for Process Control Applications

    CERN Document Server

    Fernandez Adiego, B; Prieto Barreiro, I

    2011-01-01

    The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS) [1]. As part of this framework, UNICOS-CPC provides a well defined library of device types, a methodology and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) [2] to develop CPC applications. The CPC component is composed of several platform-oriented plug-ins (PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package where both the library of device types and the generated file syntax are defined. The UAB core is the generic part of this software; it discovers and dynamically calls the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is introduced. It is composed of several plug-ins: the Instance generator and the Logic generator for both Siemens and Schneider PLCs, the SCADA g...

  12. A parallel row-based algorithm for standard cell placement with integrated error control

    Science.gov (United States)

    Sargent, Jeff S.; Banerjee, Prith

    1989-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.

  13. Optimal control strategy to reduce the temporal wavefront error in AO systems

    NARCIS (Netherlands)

    Doelman, N.J.; Hinnen, K.J.G.; Stoffelen, F.J.G.; Verhaegen, M.H.

    2004-01-01

    An Adaptive Optics (AO) system for astronomy is analysed from a control point of view. The focus is put on the temporal error. The AO controller is identified as a feedback regulator system, operating in closed-loop with the aim of rejecting wavefront disturbances. Limitations on the performance of

  14. Using Feedback Error Learning for Control of Electro Hydraulic Servo System by Laguerre

    Directory of Open Access Journals (Sweden)

    Amir Reza Zare Bidaki

    2014-01-01

    Full Text Available In this paper, a new Laguerre controller is proposed to control an electro-hydraulic servo system. The proposed controller uses the feedback error learning method and leads to significantly improved performance, in terms of settling time and control signal amplitude, compared with other controllers. All derived results are validated by simulation of a nonlinear mathematical model of the system. The simulation results show the advantages of the proposed method for improved control in terms of both settling time and control signal amplitude.
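
    Feedback error learning itself is compact enough to sketch: the feedforward model is trained online using the feedback controller's output as the teaching signal, so the feedforward path gradually takes over. The plant, gains and linear feature model below are illustrative assumptions, not the Laguerre design of the paper.

    ```python
    # Sketch of feedback error learning (FEL): total command = feedforward +
    # feedback; the feedback output serves as the error signal that trains
    # the feedforward (inverse) model online.
    import numpy as np

    def features(r, r_dot):                   # simple basis for the inverse model
        return np.array([r, r_dot, 1.0])

    w = np.zeros(3)                           # learned inverse-model weights
    kp, kd, eta = 8.0, 1.0, 0.02              # PD gains and learning rate
    y = y_dot = 0.0                           # plant state
    dt = 0.01

    for k in range(5000):
        t = k * dt
        r, r_dot = np.sin(t), np.cos(t)       # reference trajectory
        u_fb = kp * (r - y) + kd * (r_dot - y_dot)   # feedback controller
        u = u_fb + w @ features(r, r_dot)            # add feedforward term
        w += eta * u_fb * features(r, r_dot)  # FEL rule: feedback output as error
        y_ddot = -2.0 * y_dot - 5.0 * y + 5.0 * u    # toy second-order plant
        y_dot += y_ddot * dt
        y += y_dot * dt

    print(abs(r - y))                         # residual tracking error after learning
    ```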

  15. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    OpenAIRE

    Watanabe, Takashi; Sugi, Yoshihiro

    2010-01-01

    Feedforward control would be useful for hybrid Functional Electrical Stimulation (FES) systems using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) with an inverse dynamics model (IDM) to realize a feedforward FES controller. For FES application, the ISM was trained offline using training data obtained by PID control of very slow movements. Computer simulation tests ...

  16. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used, as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling of the CAST software.
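
    A signal generation model of this kind can be sketched as shaped colored noise; the AR(1) form and the numbers below are illustrative assumptions, not the SMAP estimation-error statistics.

    ```python
    # Sketch: colored noise shaped to a target mean, variance and (first-order)
    # power spectral density, standing in for an estimation-error signal.
    import numpy as np

    def estimation_error_model(n, mean, var, phi, seed=0):
        """AR(1) noise: x[k] = phi*x[k-1] + w[k], scaled to (mean, var)."""
        rng = np.random.default_rng(seed)
        w = rng.standard_normal(n) * np.sqrt(var * (1.0 - phi**2))
        x = np.empty(n)
        x[0] = rng.standard_normal() * np.sqrt(var)
        for k in range(1, n):
            x[k] = phi * x[k - 1] + w[k]
        return mean + x

    err = estimation_error_model(n=10_000, mean=0.0, var=1e-6, phi=0.95)
    print(err.mean(), err.var())      # should approximate the target statistics
    ```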

  17. The impact of treatment complexity and computer-control delivery technology on treatment delivery errors

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; Lash, Kathy L.; Matrone, Gwynne M.; Volkman, Susan K.; McShan, Daniel L.; Kessler, Marc L.; Lichter, Allen S.

    1998-01-01

    Purpose: To analyze treatment delivery errors for three-dimensional (3D) conformal therapy performed at various levels of treatment delivery automation and complexity, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system (CCRS). Methods and Materials: All treatment delivery errors which occurred in our department during a 15-month period were analyzed. Approximately 34,000 treatment sessions (114,000 individual treatment segments [ports]) on four treatment machines were studied. All treatment delivery errors logged by treatment therapists or quality assurance reviews (152 in all) were analyzed. Machines 'M1' and 'M2' were operated in a standard manual setup mode, with no record and verify system (R/V). MLC machines 'M3' and 'M4' treated patients under the control of the CCRS system, which (1) downloads the treatment delivery plan from the planning system; (2) performs some (or all) of the machine set up and treatment delivery for each field; (3) monitors treatment delivery; (4) records all treatment parameters; and (5) notes exceptions to the electronically-prescribed plan. Complete external computer control is not available on M3; therefore, it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments, nonaxial and noncoplanar plans, multisegment intensity modulation, and pseudoisocentric treatments studied for a 6-month period (505 patients) concurrent with the period in which the delivery errors were obtained. Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs. Treatment therapists rotate among the machines; therefore, this analysis does not depend on fixed therapist staff on particular

  18. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration.

    Science.gov (United States)

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-04-01

    Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
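
    The bias mechanism is easy to reproduce by simulation; the following sketch (with arbitrary coefficients and noise scales) shows classical measurement error attenuating the exposure and negative-control associations by different amounts.

    ```python
    # Sketch: Monte Carlo illustration of measurement error in a negative
    # control exposure design. All coefficients are arbitrary illustrations.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    u = rng.standard_normal(n)                      # unmeasured confounder
    x = u + rng.standard_normal(n)                  # true exposure
    z = u + rng.standard_normal(n)                  # true negative control
    y = 0.5 * x + 0.8 * u + rng.standard_normal(n)  # outcome; z has no causal effect

    def slope(v, y):
        """Simple regression slope of y on v."""
        return np.cov(v, y)[0, 1] / np.var(v)

    for err_sd in (0.0, 1.0):                       # no error vs. classical error
        x_obs = x + err_sd * rng.standard_normal(n)
        z_obs = z + err_sd * rng.standard_normal(n)
        print(err_sd, slope(x_obs, y), slope(z_obs, y))
    ```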

  19. Reactor Systems Technology Division code development and configuration/quality control procedures

    International Nuclear Information System (INIS)

    Johnson, E.C.

    1985-06-01

    Procedures are prescribed for executing a code development task and implementing the resulting coding in an official version of a computer code. The responsibilities of the project manager, development staff members, and the Code Configuration/Quality Control Group are defined. Examples of forms, logs, computer job control language, and suggested outlines for reports associated with software production and implementation are included in Appendix A. 1 ref., 2 figs

  20. Cognitive Impairments in Occupational Burnout – Error Processing and Its Indices of Reactive and Proactive Control

    Directory of Open Access Journals (Sweden)

    Krystyna Golonka

    2017-05-01

    Full Text Available The presented study refers to cognitive aspects of burnout as the effects of long-term work-related stress. The purpose of the study was to investigate electrophysiological correlates of burnout to explain the mechanisms of the core burnout symptoms: exhaustion and depersonalization/cynicism. The analyzed error-related electrophysiological markers shed light on impaired cognitive mechanisms and the specific changes in information processing in burnout. In the EEG study design (N = 80), two components of error-related potential (ERP), error-related negativity (ERN) and error positivity (Pe), were analyzed. In the non-clinical burnout group (N = 40), a significant increase in ERN amplitude and a decrease in Pe amplitude were observed compared to controls (N = 40). Enhanced error detection, indexed by increased ERN amplitude, and diminished response monitoring, indexed by decreased Pe amplitude, reveal emerging cognitive problems in the non-clinical burnout group. Cognitive impairments in burnout subjects relate to both reactive and unconscious (ERN) and proactive and conscious (Pe) aspects of error processing. The results indicate a stronger 'reactive control mode' that can deplete resources for proactive control and the ability to actively maintain goals. The analysis refers to error processing and specific task demands, and thus should not be extended to cognitive processes in general. The characteristics of ERP patterns in burnout resemble psychophysiological indexes of anxiety (increased ERN) and depressive symptoms (decreased Pe), showing to some extent an overlapping effect of burnout and related symptoms and disorders. The results support the scarce existing data on the psychobiological nature of burnout, while extending and specifying its cognitive characteristics.

  1. Correcting groove error in gratings ruled on a 500-mm ruling engine using interferometric control.

    Science.gov (United States)

    Mi, Xiaotao; Yu, Haili; Yu, Hongzhu; Zhang, Shanwen; Li, Xiaotian; Yao, Xuefeng; Qi, Xiangdong; Bayinhedhig; Wan, Qiuhua

    2017-07-20

    Groove error is one of the most important factors affecting grating quality and spectral performance. To reduce groove error, we propose a new ruling-tool carriage system based on aerostatic guideways. We design a new blank carriage system with double piezoelectric actuators. We also propose a completely closed-loop servo-control system with a new optical measurement system that can control the position of the diamond relative to the blank. To evaluate our proposed methods, we produced several gratings, including an echelle grating with 79 grooves/mm, a grating with 768 grooves/mm, and a high-density grating with 6000 grooves/mm. The results show that our methods effectively reduce groove error in ruled gratings.

  2. Periodic boundary conditions and the error-controlled fast multipole method

    Energy Technology Data Exchange (ETDEWEB)

    Kabadshow, Ivo

    2012-08-22

    The simulation of pairwise interactions in huge particle ensembles is a vital issue in scientific research. Especially the calculation of long-range interactions poses limitations to the system size, since these interactions scale quadratically with the number of particles. Fast summation techniques like the Fast Multipole Method (FMM) can help to reduce the complexity to O(N). This work extends the possible range of applications of the FMM to periodic systems in one, two and three dimensions with one unique approach. Together with a tight error control, this contribution enables the simulation of periodic particle systems for different applications without the need to know and tune the FMM specific parameters. The implemented error control scheme automatically optimizes the parameters to obtain an approximation for the minimal runtime for a given energy error bound.

  3. Control code for laboratory adaptive optics teaching system

    Science.gov (United States)

    Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael

    2017-09-01

    By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available in the market. One such is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite to interface with the ThorLabs AO kit by using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as to support student engagement in an academic environment. We plan to make the code freely available to the community.

  4. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Often, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

    Science.gov (United States)

    Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan

    2018-04-01

    In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
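
    For reference, a commonly used prescribed-performance construction takes the following form; this is an illustrative textbook version, and the paper's exact envelope and transformation functions may differ.

    ```latex
    % Illustrative prescribed-performance error transformation.
    % e(t): tracking error; rho(t): decaying performance envelope.
    \[
      -\underline{\delta}\,\rho(t) \;<\; e(t) \;<\; \overline{\delta}\,\rho(t),
      \qquad
      \rho(t) = (\rho_0 - \rho_\infty)\,e^{-\ell t} + \rho_\infty ,
    \]
    \[
      \varepsilon(t) = T^{-1}\!\left(\frac{e(t)}{\rho(t)}\right),
      \qquad
      T(\varepsilon) =
      \frac{\overline{\delta}\,e^{\varepsilon} - \underline{\delta}\,e^{-\varepsilon}}
           {e^{\varepsilon} + e^{-\varepsilon}} .
    \]
    % Keeping the transformed error bounded keeps e(t) inside the envelope.
    ```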

  6. Partitioning, Automation and Error Recovery in the Control and Monitoring System of an LHC Experiment

    Institute of Scientific and Technical Information of China (English)

    C.Gaspar

    2001-01-01

    The Joint Controls Project (JCOP) is a collaboration between CERN and the four LHC experiments to find and implement common solutions for their control and monitoring systems. As part of this project, an Architecture Working Group was set up in order to study the requirements and devise an architectural model that would suit the four experiments. Many issues were studied by this working group: alarm handling, access control, hierarchical control, etc. This paper will report on the specific issue of hierarchical control and in particular partitioning, automation and error recovery.

  7. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
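
    The underlying idea, comparing solutions at different resolutions to estimate the discretization error, can be sketched in one dimension with the three-point analogue of the five-point stencil; the paper's algorithm uses three 2D grids, so this is only an illustration of the principle.

    ```python
    # Sketch: Richardson-style error estimation for a second-order
    # finite-difference Poisson solve, using two grid resolutions.
    import numpy as np

    def solve_poisson_1d(n, f, a=0.0, b=0.0):
        """Solve -u'' = f on (0,1), u(0)=a, u(1)=b, with n interior points."""
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)
        A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        rhs = f(x)
        rhs[0] += a / h**2
        rhs[-1] += b / h**2
        return np.linalg.solve(A, rhs)

    f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
    u_coarse = solve_poisson_1d(99, f)           # h = 1/100
    u_fine = solve_poisson_1d(199, f)            # h = 1/200

    # Second-order method: fine-grid error at the coarse points is roughly
    # (u_fine - u_coarse) / (2^2 - 1).
    est = (u_fine[1::2] - u_coarse) / (2**2 - 1)
    print(np.max(np.abs(est)))                   # estimated max discretization error
    ```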

  8. Control strategies for active noise barriers using near-field error sensing

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    In this paper active noise control strategies for noise barriers are presented which are based on the use of sensors near the noise barrier. Virtual error signals are derived from these near-field sensor signals such that reductions of the far-field sound pressure are obtained with the active

  9. MODULAR DESCRIPTION OF A CONCATENATED CODING SCHEME FOR ERROR CORRECTION WITH HARDWARE PROGRAMMING

    Directory of Open Access Journals (Sweden)

    Cecilia E Sandoval Ruiz

    2008-09-01

    Full Text Available Wireless communications require the use of error correction methods on the transmitted data, generally employing Reed-Solomon and Viterbi coding techniques; for performance and security reasons it is preferable to implement them in hardware. This paper presents the modular design of the encoding stage of these codes for their concatenation using VHDL (VHSIC Hardware Description Language), oriented towards implementation on field-programmable gate array (FPGA) technology. The work begins with a review of the concepts associated with the definition of the components and the model, and a description of the behavior; the architecture is then designed using VHDL syntax and the hardware design is captured; finally, the synthesis results are presented.

  10. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

    Full Text Available A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test. The aim is to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of the error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed-sample-size procedure. The results are applied to the cases where the random variate follows a normal law as well as a Bernoullian law.
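
    For context, the classical Wald SPRT that the paper modifies looks as follows; here the two error probabilities are simply set separately rather than through a fixed sum of the two, and the Bernoulli setting is an illustrative choice.

    ```python
    # Sketch: Wald's sequential probability ratio test for Bernoulli data.
    import math, random

    def sprt(samples, p0, p1, alpha=0.025, beta=0.025):
        """Test H0: p = p0 vs H1: p = p1; returns the decision and sample count."""
        upper = math.log((1 - beta) / alpha)   # accept H1 above this
        lower = math.log(beta / (1 - alpha))   # accept H0 below this
        llr = 0.0
        for n, x in enumerate(samples, start=1):
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", n

    random.seed(0)
    data = (random.random() < 0.6 for _ in range(10_000))
    print(sprt(data, p0=0.5, p1=0.6))
    ```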

  11. Correcting errors in a quantum gate with pushed ions via optimal control

    International Nuclear Information System (INIS)

    Poulsen, Uffe V.; Sklarz, Shlomo; Tannor, David; Calarco, Tommaso

    2010-01-01

    We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity compatible with scalable fault-tolerant quantum computing.

  12. Correcting errors in a quantum gate with pushed ions via optimal control

    DEFF Research Database (Denmark)

    Poulsen, Uffe Vestergaard; Sklarz, Shlomo; Tannor, David

    2010-01-01

    We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high...

  13. Training and support to improve ICD coding quality: A controlled ...

    African Journals Online (AJOL)

    [1] In response to this challenge, the Western Cape Government: Health (WCGH) ... commit to costly, time-consuming accredited ICD coding courses, although ... being blinded to the retrospective 'assignment' of patients to the intervention ...

  14. Electronic laboratory system reduces errors in National Tuberculosis Program: a cluster randomized controlled trial.

    Science.gov (United States)

    Blaya, J A; Shin, S S; Yale, G; Suarez, C; Asencios, L; Contreras, C; Rodriguez, P; Kim, J; Cegielski, P; Fraser, H S F

    2010-08-01

    To evaluate the impact of the e-Chasqui laboratory information system in reducing reporting errors compared to the current paper system. Cluster randomized controlled trial in 76 health centers (HCs) between 2004 and 2008. Baseline data were collected every 4 months for 12 months. HCs were then randomly assigned to intervention (e-Chasqui) or control (paper). Further data were collected for the same months the following year. Comparisons were made between intervention and control HCs, and before and after the intervention. Intervention HCs had respectively 82% and 87% fewer errors in reporting results for drug susceptibility tests (2.1% vs. 11.9%, P = 0.001, OR 0.17, 95%CI 0.09-0.31) and cultures (2.0% vs. 15.1%, P < 0.001). e-Chasqui users sent on average three electronic error reports per week to the laboratories. e-Chasqui reduced the number of missing laboratory results at point-of-care health centers. Clinical users confirmed viewing electronic results not available on paper. Reporting errors to the laboratory using e-Chasqui promoted continuous quality improvement. The e-Chasqui laboratory information system is an important part of laboratory infrastructure improvements to support multidrug-resistant tuberculosis care in Peru.
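
    The reported odds ratios follow directly from the two error proportions; a quick check (point estimate only, since the confidence interval needs the underlying counts):

    ```python
    # Sketch: odds ratio from two proportions. The small difference from the
    # reported 0.17 reflects rounding of the published percentages.
    def odds_ratio(p_intervention, p_control):
        odds = lambda p: p / (1.0 - p)
        return odds(p_intervention) / odds(p_control)

    # DST reporting errors: 2.1% (intervention) vs. 11.9% (control)
    print(round(odds_ratio(0.021, 0.119), 2))   # ~0.16
    ```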

  15. Analysis of Wind Speed Forecasting Error Effects on Automatic Generation Control Performance

    Directory of Open Access Journals (Sweden)

    H. Rajabi Mashhadi

    2014-09-01

    Full Text Available The main goal of this paper is to study statistical indices and evaluate AGC indices in a power system which has large penetration of WTGs. The increasing penetration of wind turbine generation requires further study of its impacts on power system frequency control. Frequency control is affected by imbalances between real-time system generation and load, and wind turbine generation fluctuates more and makes the system more unbalanced. The AGC loop then helps to adjust the system frequency and the scheduled tie-line powers. The quality of the AGC loop is measured by some indices. A good index is a proper measure of the AGC performance just as the power system operates. One well-known measure in the literature, introduced by NERC, is the Control Performance Standards (CPS). It has previously been claimed that a key factor in the CPS index is related to the standard deviation of the generation error, installed power and frequency response. This paper focuses on the impact of a several-hours-ahead wind speed forecast error on this factor. Furthermore, the evaluation of conventional control performance in power systems with large-scale wind turbine penetration is studied. The effects of the wind speed standard deviation and also the degree of wind farm penetration are analyzed, and the importance of the mentioned factor is examined. In addition, the influence of the mean wind speed forecast error on this factor is investigated. The study system is a two-area system with a significant wind farm in one of the areas. The results show that the mean wind speed forecast error has a considerable effect on AGC performance, while the mentioned key factor is insensitive to this mean error.

  16. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yunlong; Wang, Aiping; Guo, Lei; Wang, Hong

    2017-07-09

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
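
    The core quantity in such schemes, Renyi's quadratic entropy of the error estimated by Parzen windowing with a Gaussian kernel, can be sketched directly; the kernel width and data below are illustrative, not the paper's recursive controller design.

    ```python
    # Sketch: Parzen-window estimate of Renyi's quadratic error entropy,
    # H2 = -log V, where V is the Gaussian-kernel information potential.
    import numpy as np

    def quadratic_error_entropy(errors, sigma=0.5):
        d = errors[:, None] - errors[None, :]          # pairwise error differences
        v = np.mean(np.exp(-d**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi)))
        return -np.log(v)

    rng = np.random.default_rng(0)
    print(quadratic_error_entropy(rng.standard_normal(500)))        # wide errors
    print(quadratic_error_entropy(0.1 * rng.standard_normal(500)))  # tight errors: lower entropy
    ```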

  17. Comparing Interval Management Control Laws for Steady-State Errors and String Stability

    Science.gov (United States)

    Weitz, Lesley A.; Swieringa, Kurt A.

    2018-01-01

    Interval Management (IM) is a future airborne spacing concept that leverages avionics to provide speed guidance to an aircraft to achieve and maintain a specified spacing interval from another aircraft. The design of a speed control law to achieve the spacing goal is a key aspect in the research and development of the IM concept. In this paper, two control laws that are used in much of the contemporary IM research are analyzed and compared to characterize steady-state errors and string stability. Numerical results are used to illustrate how the choice of control law gains impacts the size of steady-state errors and string performance, and the potential trade-offs between those performance characteristics.

  18. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    Science.gov (United States)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  19. Response Accuracy and Tracking Errors with Decentralized Control of Commercial V2G Chargers

    DEFF Research Database (Denmark)

    Ziras, Charalampos; Zecchino, Antonio; Marinelli, Mattia

    2018-01-01

    There is a growing interest in using the flexibility of electric vehicles (EVs) to provide power system services, such as fast frequency regulation. Decentralized control is advocated due to its reliability and much lower communication requirements. A commonly used linear droop characteristic results in low average efficiencies, whereas controllers with 3 modes (idle, fully charging, fully discharging) result in large reserve errors when the aggregation size is small. To address these issues, we propose a stochastic, decentralized controller with tunable response granularity which minimizes switching actions. The EV fleet operator can optimize the chargers' performance according to the fleet size, the service error requirements, the average switching rate and the average efficiency. We use real efficiency characteristics from EVs and chargers providing fast frequency regulation and we show...
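
    A stochastic decentralized response of the kind described can be sketched as follows: each charger switches with a probability set by the frequency deviation, so the aggregate tracks a droop curve while each individual charger switches rarely. All parameters and the droop shape are illustrative assumptions.

    ```python
    # Sketch: probabilistic switching of EV chargers in response to a
    # frequency deviation df (Hz). The aggregate reserve error shrinks
    # as the fleet grows.
    import random

    P_CH, P_DIS = 10.0, -10.0          # kW: charging / discharging power levels
    DEADBAND, FULL = 0.02, 0.2         # Hz: deadband and full-response deviation

    def desired_fraction(df):
        """Signed fraction of the fleet that should respond; >0 means discharge."""
        if abs(df) <= DEADBAND:
            return 0.0
        mag = min(1.0, (abs(df) - DEADBAND) / (FULL - DEADBAND))
        return mag if df < 0 else -mag   # under-frequency -> discharge

    def fleet_power(n_evs, df, rng):
        """Each EV responds independently with probability |r|."""
        r = desired_fraction(df)
        level = P_DIS if r > 0 else P_CH
        return sum(level if rng.random() < abs(r) else 0.0 for _ in range(n_evs))

    rng = random.Random(0)
    for n in (10, 100, 10_000):        # per-EV power converges to the droop target
        print(n, fleet_power(n, df=-0.11, rng=rng) / n)
    ```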

  20. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  1. Computer codes for the operational control of the research reactors

    International Nuclear Information System (INIS)

    Kalker, K.J.; Nabbi, R.; Bormann, H.J.

    1986-01-01

    Four small computer codes developed by ZFR are presented, which have been used for several years during operation of the research reactors FRJ-1, FRJ-2, AVR (all in Juelich) and DR-2 (Riso, Denmark). Because of interest from other reactor stations, the codes are documented within the framework of IAEA Research Contract No. 3634/FG. The zero-dimensional burnup program CREMAT is used for reactor cores in which flux measurements at each individual fuel element are carried out during operation. The program yields burnup data for each fuel element and for the whole core. On the basis of these data, fuel reloading is prepared for the next operational period under consideration of the permitted minimum shutdown reactivity of the system. The program BURNY calculates burnup for fuel elements inaccessible for flux measurements, but for which 'position weighting factors' have been measured/calculated during zero-power operation of the core, and which are assumed to be constant in all operational situations. The code CURIAX calculates post-irradiation data for discharged fuel elements needed in their manipulation and transport. These three programs have been written for highly enriched fuel and take into account U-235 only. The modification of CREMAT for LEU cores and its combination with ORIGEN is in preparation. KINIK is an inverse kinetics code and is widely used for absorber rod calibration at the abovementioned research reactors. It includes a special polynomial subroutine which can easily be used in other codes. (orig.) [de

  2. Implementation of an operator model with error mechanisms for nuclear power plant control room operation

    International Nuclear Information System (INIS)

    Suh, Sang Moon; Cheon, Se Woo; Lee, Yong Hee; Lee, Jung Woon; Park, Young Taek

    1996-01-01

    SACOM (Simulation Analyser with Cognitive Operator Model) is being developed at the Korea Atomic Energy Research Institute to simulate human operators' cognitive characteristics during emergency situations in nuclear power plants. An operator model with error mechanisms has been developed and combined into SACOM to simulate the human operator's cognitive information processing based on Rasmussen's decision ladder model. The operational logic for five different cognitive activities (Agents), the operator's attentional control (Controller), short-term memory (Blackboard), and long-term memory (Knowledge Base) has been developed and implemented on a blackboard architecture. A trial simulation with a scenario for emergency operation has been performed to verify the operational logic. It was found that the operator model with error mechanisms is suitable for the simulation of operators' cognitive behavior in emergency situations.

  3. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    Directory of Open Access Journals (Sweden)

    Takashi Watanabe

    2010-01-01

    Full Text Available Feedforward control would be useful for hybrid Functional Electrical Stimulation (FES) systems using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) with an inverse dynamics model (IDM) to realize a feedforward FES controller. For FES application, the ISM was trained offline using training data obtained by PID control of very slow movements. Computer simulation tests in controlling wrist joint movements showed that the ISM performed properly in the positioning task and that IDM learning was improved by using the ISM, showing an increase in the output power ratio of the feedforward controller. The simple ISM learning method and the FEL-FES controller using the ISM would be useful in controlling the musculoskeletal system, which has nonlinear characteristics in response to electrical stimulation, and are therefore expected to be useful in application to hybrid FES systems using powered orthotic devices.

  4. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
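
    To make the distinction concrete: a coherent error is a small systematic rotation, whereas its Pauli-twirled approximation is a stochastic flip with the matching probability. An illustrative single-qubit case:

    ```latex
    % A coherent Z-rotation error versus its Pauli (stochastic) approximation.
    \[
      U(\epsilon) = e^{-i\epsilon Z/2}
                  = \cos\tfrac{\epsilon}{2}\, I \;-\; i\,\sin\tfrac{\epsilon}{2}\, Z ,
    \]
    \[
      \mathcal{E}_{\mathrm{coh}}(\rho) = U(\epsilon)\,\rho\,U(\epsilon)^{\dagger},
      \qquad
      \mathcal{E}_{\mathrm{Pauli}}(\rho) =
      \cos^{2}\tfrac{\epsilon}{2}\,\rho \;+\; \sin^{2}\tfrac{\epsilon}{2}\, Z\rho Z .
    \]
    ```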

  5. Patterning control strategies for minimum edge placement error in logic devices

    Science.gov (United States)

    Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim

    2017-03-01

    In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.

  6. A controlled evaluation of case clinical effect coding by poison center specialists for detection of WMD scenarios.

    Science.gov (United States)

    Beuhler, Michael C; Wittler, Mary A; Ford, Marsha; Dulaney, Anna R

    2011-08-01

    Many public health entities employ computer-based syndromic surveillance to monitor for aberrations including possible exposures to weapons of mass destruction (WMD). Often, this is done by screening signs and symptoms reported for cases against syndromic definitions. Poison centers (PCs) may offer significant contributions to public health surveillance because of their detailed clinical effect data field coding and real-time data entry. Because improper clinical effect coding may impede syndromic surveillance, it is important to assess this accuracy for PCs. An AAPCC-certified regional PC assessed the accuracy of clinical effect coding by specialists in poison information (SPIs) listening to audio recordings of standard cases. Eighteen different standardized cases were used, consisting of six cyanide, six botulism, and six control cases. Cases were scripted to simulate clinically relevant telephone conversations and converted to audio recordings. Ten SPIs were randomly selected from the center's staff to listen to and code case information from the recorded cases. Kappa scores and the percentage of correctly coding a present clinical effect were calculated for individual clinical effects summed over all test cases along with corresponding 95% confidence intervals. The rate of the case coding by the SPIs triggering the PC's automated botulism and cyanide alerts was also determined. The kappa scores and the percentage of correctly coding a present clinical effect varied depending on the specific clinical effect, with greater accuracy observed for the clinical effects of vomiting and agitation/irritability, and poor accuracy observed for the clinical effects of visual defect and anion gap increase. Lack of correct coding resulted in only 60 and 86% of the cases that met the botulism and cyanide surveillance definitions, respectively, triggering the corresponding alert. There was no difference observed in the percentage of coding a present clinical effect between

  7. Tracking error constrained robust adaptive neural prescribed performance control for flexible hypersonic flight vehicle

    Directory of Open Access Journals (Sweden)

    Zhonghua Wu

    2017-02-01

    Full Text Available A robust adaptive neural control scheme based on a back-stepping technique is developed for the longitudinal dynamics of a flexible hypersonic flight vehicle, which is able to ensure the state tracking error being confined in the prescribed bounds, in spite of the existing model uncertainties and actuator constraints. Minimal learning parameter technique–based neural networks are used to estimate the model uncertainties; thus, the amount of online updated parameters is largely lessened, and the prior information of the aerodynamic parameters is dispensable. With the utilization of an assistant compensation system, the problem of actuator constraint is overcome. By combining the prescribed performance function and sliding mode differentiator into the neural back-stepping control design procedure, a composite state tracking error constrained adaptive neural control approach is presented, and a new type of adaptive law is constructed. As compared with other adaptive neural control designs for hypersonic flight vehicle, the proposed composite control scheme exhibits not only low-computation property but also strong robustness. Finally, two comparative simulations are performed to demonstrate the robustness of this neural prescribed performance controller.

  8. Error Control Techniques for Efficient Multicast Streaming in UMTS Networks: Proposals and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Michele Rossi

    2004-06-01

    Full Text Available In this paper we introduce techniques for efficient multicast video streaming in UMTS networks, where a video content has to be conveyed to multiple users in the same cell. Efficient multicast data delivery in UMTS is still an open issue. In particular, suitable solutions have to be found to cope with wireless channel errors, while maintaining both an acceptable channel utilization and a controlled delivery delay over the wireless link between the serving base station and the mobile terminals. Here, we first highlight that standard solutions such as unequal error protection (UEP) of the video flow are ineffective in the UMTS system due to its inherent large feedback delay at the link layer (Radio Link Control, RLC). Subsequently, we propose a local approach to solve errors directly at the UMTS link layer while keeping a reasonably high channel efficiency and saving, as much as possible, system resources. The solution that we propose in this paper is based on the usage of the common channel to serve all the interested users in a cell. In this way, we can save resources with respect to the case where multiple dedicated channels are allocated for every user. In addition to that, we present a hybrid ARQ (HARQ) proactive protocol that, at the cost of some redundancy (added to the link layer flow), is able to consistently improve the channel efficiency with respect to the plain ARQ case, thereby making the use of a single common channel for multicast data delivery feasible. In the last part of the paper we give some hints for future research, envisioning the usage of the aforementioned error control protocols with suitably encoded video streams.

  9. VACOSS - variable coding seal system for nuclear material control

    International Nuclear Information System (INIS)

    Kennepohl, K.; Stein, G.

    1977-12-01

    VACOSS - Variable Coding Seal System - is intended to seal rooms and containers with nuclear material, nuclear instrumentation and equipment of the operator, and instrumentation and equipment of the supervisory authority. It is easy to handle, reusable and transportable, and consists of three components: 1. The seal. A fibre-optic light guide with an infrared light emitter and receiver serves as the sealing wire. The statistical treatment of the coded data held in the seal, accessed via the adapter box, guarantees an extremely high degree of access reliability. It is possible to store the data of two undue seal openings together with data on the time and duration of the opening. 2. The adapter box, which can be used for input, or input and output, of data indicating the seal integrity. 3. The simulation programme, located in the computing center of the supervisory authority, which permits the date and time of opening to be determined by decoding the seal memory data. (orig./WB) [de

  10. Commercialization of the error control and software versioning system

    OpenAIRE

    Vargas Caicedo, Hilda Elisa; Rodriguez Loor, Carol Vanessa; Gaibor, Gustavo

    2009-01-01

    The main objective is to provide information technology solutions for the needs of companies, in order to achieve efficient administration of their processes, supported by continuous innovation and optimization. Our proposal focuses on offering a system that combines error control, helpdesk and software version control functions at a good price. Aspects such as sales and promotion strategies, income and expenses are described. Apart from ...

  11. Projective Synchronization of Chaotic Discrete Dynamical Systems via Linear State Error Feedback Control

    Directory of Open Access Journals (Sweden)

    Baogui Xin

    2015-04-01

    Full Text Available A projective synchronization scheme for a kind of n-dimensional discrete dynamical system is proposed by means of a linear feedback control technique. The scheme consists of master and slave discrete dynamical systems coupled by linear state error variables. A kind of novel 3-D chaotic discrete system is constructed, to which the test for chaos is applied. By using the stability principles of an upper or lower triangular matrix, two controllers for achieving projective synchronization are designed and illustrated with the novel systems. Lastly some numerical simulations are employed to validate the effectiveness of the proposed projective synchronization scheme.
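    The coupling idea can be illustrated with a one-dimensional toy example (the paper treats n-dimensional systems via triangular-matrix stability arguments); the logistic map, the projective factor alpha and the gain c below are illustrative assumptions:

```python
import numpy as np

def f(x):
    return 3.9 * x * (1.0 - x)        # chaotic logistic map (master dynamics)

alpha = 2.0                            # projective factor: slave tracks alpha * master
c = 0.5                                # |c| < 1 makes the error dynamics stable
x, y = 0.3, 0.9                        # master / slave initial states
for k in range(21):
    e = y - alpha * x                  # linear state error variable
    u = alpha * f(x) - f(y) - c * e    # feedback control applied to the slave
    x, y = f(x), f(y) + u
    if k % 5 == 0:
        print(f"k={k:2d}  |y - alpha*x| = {abs(y - alpha * x):.3e}")
```

    Because u cancels the slave dynamics, the error obeys e_{k+1} = -c * e_k, so any |c| < 1 drives y_k toward alpha * x_k.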

  12. Potential loss of revenue due to errors in clinical coding during the implementation of the Malaysia diagnosis related group (MY-DRG®) Casemix system in a teaching hospital in Malaysia.

    Science.gov (United States)

    Zafirah, S A; Nur, Amrizal Muhammad; Puteh, Sharifa Ezat Wan; Aljunid, Syed Mohamed

    2018-01-25

    The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Group (DRG) codes, especially if the hospital is using a casemix system as a tool for resource allocation and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to errors in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG ® ) Casemix System in a teaching hospital in Malaysia. Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC), who re-examined and re-coded the codes originally entered by the hospital coders. The pre- and post-coding results were compared, and in case of disagreement the codes by the ISC were considered the accurate ones. The cases were then re-grouped using the MY-DRG ® grouper to assess and compare the changes in the DRG assignment and the hospital tariff assignment. The outcomes were then verified by a casemix expert. Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the most frequent, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and primary diagnoses at 49.8% (231/464). The coding errors resulted in the assignment of different MY-DRG ® codes in 74.0% (307/415) of the cases, and 52.1% (160/307) of these cases had a lower assigned hospital tariff. In total, the potential loss of income due to changes in the assignment of the MY-DRG ® code was RM654,303.91. The quality of coding is a crucial aspect of implementing casemix systems. Intensive re-training and close monitoring of coder performance in the hospital should be performed to prevent the potential loss of hospital income.
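    The revenue-impact calculation reduces to simple bookkeeping over re-coded cases; here is a minimal sketch with hypothetical case records (the amounts and the reading of "loss" as under-claiming are illustrative assumptions, not the study's data):

```python
# Hypothetical cases: tariff assigned by the original coding vs. after expert
# re-coding (the RM amounts are made up for illustration).
cases = [
    ("C001", 4200.00, 4200.00),   # DRG unchanged by re-coding
    ("C002", 2450.00, 3100.00),   # error caused under-claiming
    ("C003", 2600.00, 1800.00),   # error caused over-claiming
]

changed = [c for c in cases if c[1] != c[2]]
under = [(recoded - original) for _, original, recoded in cases if recoded > original]
print(f"{len(changed)}/{len(cases)} cases changed tariff after re-coding")
print(f"potential loss of income: RM{sum(under):,.2f} over {len(under)} under-claimed cases")
```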

  13. Basic human error probabilities in advanced MCRs when using soft control

    International Nuclear Information System (INIS)

    Jang, In Seok; Seong, Poong Hyun; Kang, Hyun Gook; Lee, Seung Jun

    2012-01-01

    In a report on one of the renowned HRA methods, the Technique for Human Error Rate Prediction (THERP), it is pointed out that 'the paucity of actual data on human performance continues to be a major problem for estimating HEPs and performance times in nuclear power plant (NPP) tasks'. Another critical difficulty, however, is that most current HRA databases deal with operation in conventional types of MCRs. With the adoption of new human-system interfaces based on computer technologies, the operating environment of MCRs in NPPs has changed. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures and soft controls, are called advanced MCRs. Because of the different interfaces, different Basic Human Error Probabilities (BHEPs) should be considered in human reliability analyses (HRAs) for advanced MCRs. This study carries out an empirical analysis of human error considering soft controls. The aim of this work is not only to compile a database using a simulator for advanced MCRs but also to compare the resulting BHEPs with those of a conventional MCR database.

  14. Motivational state controls the prediction error in Pavlovian appetitive-aversive interactions.

    Science.gov (United States)

    Laurent, Vincent; Balleine, Bernard W; Westbrook, R Frederick

    2018-01-01

    Contemporary theories of learning emphasize the role of a prediction error signal in driving learning, but the nature of this signal remains hotly debated. Here, we used Pavlovian conditioning in rats to investigate whether primary motivational and emotional states interact to control prediction error. We initially generated cues that positively or negatively predicted an appetitive food outcome. We then assessed how these cues modulated aversive conditioning when a novel cue was paired with a foot shock. We found that a positive predictor of food enhances, whereas a negative predictor of that same food impairs, aversive conditioning. Critically, we also showed that the enhancement produced by the positive predictor is removed by reducing the value of its associated food. In contrast, the impairment triggered by the negative predictor remains insensitive to devaluation of its associated food. These findings provide compelling evidence that the motivational value attributed to a predicted food outcome can directly control appetitive-aversive interactions and, therefore, that motivational processes can modulate emotional processes to generate the final error term on which subsequent learning is based. Copyright © 2017 Elsevier Inc. All rights reserved.
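    An illustrative Rescorla-Wagner sketch of the reported pattern follows. The opponent coupling below, in which the appetitive value of the co-present cue enters the aversive prediction error with opposite sign, is our own toy formalization of the interaction, not the authors' model; learning rates and trial counts are also illustrative:

```python
# Rescorla-Wagner: dV = alpha * (lambda_outcome - prediction).
alpha = 0.3

V = {"pos": 0.0, "neg": 0.0}            # appetitive cue values
for _ in range(80):
    V["pos"] += alpha * (1.0 - V["pos"])            # pos -> food
    delta = 0.0 - (V["pos"] + V["neg"])             # pos+neg -> no food
    V["pos"] += alpha * delta
    V["neg"] += alpha * delta                       # neg becomes a conditioned inhibitor

for partner in ("pos", "neg"):
    v_novel = 0.0                        # aversive value of the novel cue
    for _ in range(20):                  # novel+partner -> shock
        # appetitive value of the partner flips sign inside the aversive error
        delta = 1.0 - (v_novel - V[partner])
        v_novel += alpha * delta
    print(f"novel cue trained with '{partner}': aversive V = {v_novel:.2f}")
```

    The novel cue ends with a larger aversive value when trained alongside the positive predictor and a smaller one alongside the negative predictor, matching the reported enhancement and impairment; the devaluation asymmetry reported in the paper is beyond this sketch.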

  15. Development status of TUF code

    International Nuclear Information System (INIS)

    Liu, W.S.; Tahir, A.; Zaltsgendler

    1996-01-01

    An overview of the important developments of the TUF code in 1995 is presented, covering the following areas: control of round-off error propagation, gas resolution and release models, and condensation-induced water hammer. These developments were mainly driven by station requests for operational support and code improvement. (author)

  16. Error characterization and quantum control benchmarking in liquid state NMR using quantum information processing techniques

    Science.gov (United States)

    Laforest, Martin

    Quantum information processing has been the subject of countless discoveries since the early 1990's. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and to benchmark the quality of the control over the qubits. The usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gates for

  17. A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions

    Science.gov (United States)

    Exl, Lukas

    2017-12-01

    An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
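    The core of such a solver is division by |k|² in Fourier space; a minimal periodic-box sketch follows (the paper's free-space treatment additionally requires a truncated Green's function and zero padding, omitted here; the grid size and test density are illustrative):

```python
import numpy as np

# Periodic-box sketch of the FFT core: solve  -laplacian(phi) = rho  on [0, 2*pi)^3.
n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

phi_exact = np.sin(X) * np.sin(Y) * np.sin(Z)
rho = 3.0 * phi_exact                        # -lap(sin*sin*sin) = 3*sin*sin*sin

k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers on a 2*pi box
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0                            # avoid division by zero for the mean mode

phi_hat = np.fft.fftn(rho) / k2
phi_hat[0, 0, 0] = 0.0                       # fix the free constant (zero-mean solution)
phi = np.real(np.fft.ifftn(phi_hat))

print("max error:", np.max(np.abs(phi - phi_exact)))  # ~1e-15: spectrally exact here
```

    For a band-limited density like this one the finite Fourier series is exact, which is the sense in which the overall error is driven purely by the convergence of the density's Fourier series.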

  18. Fast and error-resilient coherent control in an atomic vapor

    Science.gov (United States)

    He, Yizun; Wang, Mengbing; Zhao, Jian; Qiu, Liyang; Wang, Yuzhuo; Fang, Yami; Zhao, Kaifeng; Wu, Saijun

    2017-04-01

    Nanosecond chirped pulses from an optical arbitrary waveform generator are applied to both invert and coherently split the D1 line population of potassium vapor within a laser focal volume of 2×10⁵ μm³. An inversion fidelity of f > 96%, mainly limited by spontaneous emission during the nanosecond pulse, is inferred from both probe light transmission and superfluorescence emission. The nearly perfect inversion is uniformly achieved for laser intensity varying over an order of magnitude, and is tolerant to detuning errors of more than 1000 times the D1 transition linewidth. We further demonstrate enhanced intensity error resilience with multiple chirped pulses and "universal composite pulses". This fast and robust coherent control technique should find wide applications in the fields of quantum optics, laser cooling, and atom interferometry. This work is supported by National Key Research Program of China under Grant No. 2016YFA0302000, and NNSFC under Grant No. 11574053.

  19. Variations in Static Force Control and Motor Unit Behavior with Error Amplification Feedback in the Elderly

    Directory of Open Access Journals (Sweden)

    Yi-Ching Chen

    2017-11-01

    Full Text Available Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and 14 older adults performed low-level static isometric force-tracking under visual guidance with typical visual feedback and with EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback than with typical visual feedback, but young adults did not. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations ⟨ΔFc²⟩, the short-term effective diffusion coefficient (Ds), and the short-term scaling exponent only for the older adults. Moreover, in older adults, EA increased the discharge variability of motor units and the discharge regularity of the cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13–35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced motor unit discharge variability and undermined force steadiness in the older group, reflecting declines in the physiological complexity of the neurobehavioral system and in the common drive to the motoneuronal pool that counters force destabilization.

  20. CoCo trial: Color-coded blood pressure Control, a randomized controlled study

    Directory of Open Access Journals (Sweden)

    Chmiel C

    2014-10-01

    Full Text Available Corinne Chmiel, Oliver Senn, Thomas Rosemann, Valerio Del Prete, Claudia Steurer-Stey Institute of General Practice and Health Services Research, University of Zurich, Zurich, Switzerland Background: Inadequate blood pressure (BP) control is a frequent challenge in general practice. The objective of this study was to determine whether a color-coded BP booklet using a traffic light scheme (red, >180 mmHg systolic BP and/or >110 mmHg diastolic BP; yellow, >140–180 mmHg systolic BP or >90–110 mmHg diastolic BP; green, ≤140 mmHg systolic BP and ≤90 mmHg diastolic BP) improves BP control and adherence with home BP measurement. Methods: In this two-group, randomized controlled trial, general practitioners recruited adult patients with a BP >140 mmHg systolic and/or >90 mmHg diastolic. Patients in the control group received a standard BP booklet and the intervention group used a color-coded booklet for daily home BP measurement. The main outcomes were changes in BP, BP control (treatment goal <140/90 mmHg), and adherence with home BP measurement after 6 months. Results: One hundred and twenty-one of 137 included patients qualified for analysis. After 6 months, a significant decrease in systolic and diastolic BP was achieved in both groups, with no significant difference between the groups (16.1/7.9 mmHg in the intervention group versus 13.1/8.6 mmHg in the control group, P=0.3/0.7). BP control (treatment target <140/90 mmHg) was achieved significantly more often in the intervention group (43% versus 25%; P=0.037; number needed to treat of 5). Adherence with home BP measurement overall was high, with a trend in favor of the intervention group (98.6% versus 96.2%; P=0.1). Conclusion: Color-coded BP self-monitoring significantly improved BP control (number needed to treat of 5, meaning that every fifth patient utilizing color-coded self-monitoring achieved better BP control after 6 months), but no significant between-group difference was found.
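    The booklet's traffic-light scheme maps directly onto a small classification rule; here is a sketch using the thresholds from the abstract (the suggested actions in the comments are illustrative assumptions, not part of the trial protocol):

```python
def bp_zone(systolic, diastolic):
    """Classify a home BP reading with the booklet's traffic-light scheme."""
    if systolic > 180 or diastolic > 110:
        return "red"     # e.g. seek prompt medical advice
    if systolic > 140 or diastolic > 90:
        return "yellow"  # above the treatment target
    return "green"       # at or below target (<=140/<=90 mmHg)

for reading in [(128, 82), (156, 88), (184, 96)]:
    print(reading, "->", bp_zone(*reading))
```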

  1. User's guide for vectorized code EQUIL for calculating equilibrium chemistry on Control Data STAR-100 computer

    Science.gov (United States)

    Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.

    1980-01-01

    A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for a given temperature, pressure, and elemental mass fractions. The code is set up for a system of elements comprising electrons, H, He, C, O, and N. In all, 24 chemical species are included.

  2. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    Directory of Open Access Journals (Sweden)

    Zhanshan Wang

    2014-01-01

    Full Text Available The control of a high-performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method of accurately estimating the rotor position by using both position estimation based on a complex-number motor model and a position-estimation-error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent magnet flux of the SPMSM by an extended Kalman filter (EKF) is also proposed, which forms an effective combined method to realize sensorless control of the SPMSM with high accuracy. The simulation results demonstrate the validity and feasibility of the proposed position/speed estimation system.

  3. How the credit assignment problems in motor control could be solved after the cerebellum predicts increases in error.

    Science.gov (United States)

    Verduzco-Flores, Sergio O; O'Reilly, Randall C

    2015-01-01

    We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations.

  4. How the credit assignment problems in motor control could be solved after the cerebellum predicts increases in error

    Directory of Open Access Journals (Sweden)

    Sergio Oscar Verduzco-Flores

    2015-03-01

    Full Text Available We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations.

  5. On the Performance Analysis of Hybrid ARQ With Incremental Redundancy and With Code Combining Over Free-Space Optical Channels With Pointing Errors

    KAUST Repository

    Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G function expansion and using the moments method, too, for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.

  6. On the Performance Analysis of Hybrid ARQ With Incremental Redundancy and With Code Combining Over Free-Space Optical Channels With Pointing Errors

    KAUST Repository

    Zedini, Emna

    2014-07-16

    In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G function expansion and using the moments method, too, for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.

  7. Investigation on iterative multiuser detection physical layer network coding in two-way relay free-space optical links with turbulences and pointing errors.

    Science.gov (United States)

    Abu-Almaalie, Zina; Ghassemlooy, Zabih; Bhatnagar, Manav R; Le-Minh, Hoa; Aslam, Nauman; Liaw, Shien-Kuei; Lee, It Ee

    2016-11-20

    Physical layer network coding (PNC) improves the throughput in wireless networks by enabling two nodes to exchange information using a minimum number of time slots. The PNC technique is proposed for two-way relay channel free space optical (TWR-FSO) communications with the aim of maximizing the utilization of network resources. The multipair TWR-FSO is considered in this paper, where a single antenna on each pair seeks to communicate via a common receiver aperture at the relay. Therefore, chip interleaving is adopted as a technique to separate the different transmitted signals at the relay node to perform PNC mapping. Accordingly, this scheme relies on the iterative multiuser technique for detection of users at the receiver. The bit error rate (BER) performance of the proposed system is examined under the combined influences of atmospheric loss, turbulence-induced channel fading, and pointing errors (PEs). By adopting the joint PNC mapping with interleaving and multiuser detection techniques, the BER results show that the proposed scheme can achieve a significant performance improvement against the degrading effects of turbulences and PEs. It is also demonstrated that a larger number of simultaneous users can be supported with this new scheme in establishing a communication link between multiple pairs of nodes in two time slots, thereby improving the channel capacity.

  8. Current error vector based prediction control of the section winding permanent magnet linear synchronous motor

    Energy Technology Data Exchange (ETDEWEB)

    Hong Junjie, E-mail: hongjjie@mail.sysu.edu.cn [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China); Li Liyi, E-mail: liliyi@hit.edu.cn [Dept. Electrical Engineering, Harbin Institute of Technology, Harbin 150000 (China); Zong Zhijian; Liu Zhongtu [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China)

    2011-10-15

    Highlights: → The structure of the section winding permanent magnet linear synchronous motor (SW-PMLSM) is new. → A new current control method, CEVPC, is employed in this motor. → The sectional power supply method differs from the others and is effective. → The performance degrades under voltage and current limitations. - Abstract: To provide features such as greater thrust density and higher efficiency without reducing thrust stability, this paper proposes a section winding permanent magnet linear synchronous motor (SW-PMLSM), whose iron core is continuous, whereas the winding is divided. The discrete system model of the motor is derived. With the definition of the current error vector and the selection of the value function, the theory of current error vector based predictive control (CEVPC) of the motor currents is explained clearly. According to the winding section feature, the motion region of the mover is divided into five zones, in which the implementation of the current predictive control method is proposed. Finally, an experimental platform is constructed and experiments are carried out. The results show that the current control has good dynamic response and the thrust on the mover remains essentially constant.
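    The value-function idea behind CEVPC can be illustrated with a one-step finite-set predictive current controller for a plain series R-L load, a deliberately simplified stand-in for the SW-PMLSM model; all parameters and the five-level voltage set below are illustrative assumptions:

```python
import numpy as np

# One-step finite-set predictive current control of a series R-L load:
#   di/dt = (v - R*i) / L   ->   i[k+1] = i[k] + Ts/L * (v - R*i[k])
# At each step the controller picks, from a finite set of inverter voltage
# levels, the one minimizing the predicted current error (the "value function").
R, L, Ts = 1.0, 5e-3, 1e-4
levels = np.array([-200.0, -100.0, 0.0, 100.0, 200.0])   # available voltages

def predict(i, v):
    return i + Ts / L * (v - R * i)

i, t = 0.0, 0.0
for k in range(400):
    i_ref = 10.0 * np.sin(2 * np.pi * 50 * t)            # 50 Hz current reference
    v = levels[np.argmin(np.abs(i_ref - predict(i, levels)))]
    i, t = predict(i, v), t + Ts
    if k % 100 == 0:
        print(f"t={t*1e3:5.1f} ms  i={i:7.3f} A  i_ref={i_ref:7.3f} A")
```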

  9. Quality controls in integrative approaches to detect errors and inconsistencies in biological databases

    Directory of Open Access Journals (Sweden)

    Ghisalberti Giorgio

    2010-12-01

    Full Text Available Numerous biomolecular data are available, but they are scattered over many databases and only some of them are curated by experts. Most available data are computationally derived and include errors and inconsistencies. Effective use of available data in order to derive new knowledge hence requires data integration and quality improvement. Many approaches for data integration have been proposed. Data warehousing seems to be the most adequate when comprehensive analysis of integrated data is required. This also makes it the most suitable for implementing comprehensive quality controls on integrated data. We previously developed GFINDer (http://www.bioinformatics.polimi.it/GFINDer/), a web system that supports scientists in effectively using available information. It allows comprehensive statistical analysis and mining of functional and phenotypic annotations of gene lists, such as those identified by high-throughput biomolecular experiments. The GFINDer backend is composed of a multi-organism genomic and proteomic data warehouse (GPDW). Within the GPDW, several controlled terminologies and ontologies, which describe gene and gene product related biomolecular processes, functions and phenotypes, are imported and integrated, together with their associations with genes and proteins of several organisms. In order to ease keeping the GPDW updated and to ensure the best possible quality of the data integrated in subsequent updates of the data warehouse, we developed several automatic procedures. Within them, we implemented numerous data quality control techniques to test the integrated data for a variety of possible errors and inconsistencies. Among other features, the implemented controls check data structure and completeness, ontological data consistency, ID format and evolution, unexpected data quantification values, and consistency of data from single and multiple sources. We use the implemented controls to analyze the quality of data available from several

  10. Cerebellar Codings for Control of Compensatory Eye Movements

    NARCIS (Netherlands)

    M. Schonewille (Martijn)

    2008-01-01

    This thesis focuses on the control of the cerebellum on motor behaviour, and more specifically on the role of the cerebellar Purkinje cells in exerting this control. As the cerebellum is an online control system, we look at both motor performance and learning, trying to identify

  11. Detecting errors and anomalies in computerized materials control and accountability databases

    International Nuclear Information System (INIS)

    Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.

    1998-01-01

    The Automated MC&A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC&A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC&A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC&A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results

  12. The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center

    Science.gov (United States)

    Tseng, Pai-Chung; Chen, Shen-Len

    The geometric errors and structural thermal deformation are factors that influence the machining accuracy of Computer Numerical Control (CNC) machining centers. Therefore, researchers pay attention to thermal error compensation technologies on CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be enhanced. In this research, neural-fuzzy theory has been used to derive a thermal prediction model. An IC-type thermometer has been used to detect the temperature variation of the heat sources. The thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory based on the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system is also built to provide a user-friendly operation interface with Inprise C++ Builder. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is improved from ±10 µm to ±3 µm.

  13. A queueing model for error control of partial buffer sharing in ATM

    Directory of Open Access Journals (Sweden)

    Ahn Boo Yong

    1999-01-01

    Full Text Available We model the error control of the partial buffer sharing of ATM by a queueing system M1, M2/G/1/K+1 with threshold and instantaneous Bernoulli feedback. We first derive the system equations and develop a recursive method to compute the loss probabilities at an arbitrary time epoch. We then build an approximation scheme to compute the mean waiting time of each class of cells. An algorithm is developed for finding the optimal threshold and queue capacity for a given quality of service.
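    As a much-simplified single-class analogue of the recursive loss-probability computation, here is the classical birth-death recursion for an M/M/1/K queue (the paper's two-class M1, M2/G/1/K+1 model with threshold and feedback is substantially more involved; the rates below are illustrative):

```python
# Stationary distribution and loss probability of an M/M/1/K queue.
def mm1k_loss(lam, mu, K):
    rho = lam / mu
    p = [1.0]
    for _ in range(K):
        p.append(p[-1] * rho)        # p_n = rho^n * p_0 (birth-death recursion)
    norm = sum(p)
    return p[K] / norm               # probability an arriving cell is lost (PASTA)

for K in (5, 10, 20):
    print(f"K={K:2d}: loss = {mm1k_loss(lam=0.9, mu=1.0, K=K):.4f}")
```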

  14. Brezzi-Pitkaranta stabilization and a priori error analysis for the Stokes Control

    Directory of Open Access Journals (Sweden)

    Aytekin Cibik

    2016-12-01

    Full Text Available In this study, we consider a Brezzi-Pitkaranta stabilization scheme for the optimal control problem governed by the stationary Stokes equation, using a P1-P1 interpolation for velocity and pressure. We express the stabilization as extra terms added to the discrete variational form of the problem. We first prove the stability of the finite element discretization of the problem. Then, we derive a priori error bounds for each variable and present a numerical example to clearly show the effectiveness of the stabilization.

  15. Automated systems help prevent operator error during [reactor] I and C [instrumentation and control] testing

    International Nuclear Information System (INIS)

    Courcoux, R.

    1989-01-01

    On a nuclear steam supply system, even a minor failure can involve actuation of the whole reactor protection system (RPS). To reduce the likelihood of human error leading to unwanted trips during the maintenance of instrumentation and control systems, Framatome has been developing and installing various automated testing systems. Such automated systems are particularly helpful when periodic tests with a potential for RPS actuation have to be carried out, or when the test is on the critical path for the refuelling outage. The Sensitive Channel Programme described is an example of the sort of work that has been done. (author)

  16. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  17. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  18. Adaptive control of nonlinear system using online error minimum neural networks.

    Science.gov (United States)

    Jia, Chao; Li, Xiaoli; Wang, Kang; Ding, Dawei

    2016-11-01

    In this paper, a new learning algorithm named OEM-ELM (Online Error Minimized ELM) is proposed, based on the ELM (Extreme Learning Machine) neural network algorithm and the growth of its main structure. The core ideas of the OEM-ELM algorithm are online learning, evaluation of network performance, and increasing the number of hidden nodes. It combines the advantages of OS-ELM and EM-ELM, which can improve identification capability and avoid network redundancy. An adaptive controller based on the proposed OEM-ELM algorithm is set up, which has a stronger adaptive capability to changes in the environment. Adaptive control of the chemical process Continuous Stirred Tank Reactor (CSTR) is also given as an application. The simulation results show that the proposed algorithm, with respect to the traditional ELM algorithm, can avoid network redundancy and greatly improve control performance. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
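    For orientation, a basic ELM fit looks as follows: the hidden-layer weights are random and fixed, and only the output weights are solved for in one linear step. This sketch omits OEM-ELM's online updating and error-driven growth of hidden nodes; the data and layer sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

n_hidden = 25
W = rng.standard_normal((1, n_hidden))        # input weights: random, never trained
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer output matrix
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights: one least-squares solve

print("training RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
```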

  19. Do People With Severe Traumatic Brain Injury Benefit From Making Errors? A Randomized Controlled Trial of Error-Based and Errorless Learning.

    Science.gov (United States)

    Ownsworth, Tamara; Fleming, Jennifer; Tate, Robyn; Beadle, Elizabeth; Griffin, Janelle; Kendall, Melissa; Schmidt, Julia; Lane-Brown, Amanda; Chevignard, Mathilde; Shum, David H K

    2017-12-01

    Errorless learning (ELL) and error-based learning (EBL) are commonly used approaches to rehabilitation for people with traumatic brain injury (TBI). However, it is unknown whether making errors is beneficial in the learning process to promote skills generalization after severe TBI. To compare the efficacy of ELL and EBL for improving skills generalization, self-awareness, behavioral competency, and psychosocial functioning after severe TBI. A total of 54 adults (79% male; mean age = 38.0 years, SD = 13.4) with severe TBI were randomly allocated to ELL or EBL and received 8 × 1.5-hour therapy sessions that involved meal preparation and other goal-directed activities. The primary outcome was total errors on the Cooking Task (near-transfer). Secondary outcome measures included the Zoo Map Test (far-transfer), Awareness Questionnaire, Patient Competency Rating Scale, Sydney Psychosocial Reintegration Scale, and Care and Needs Scale. Controlling for baseline performance and years of education, participants in the EBL group made significantly fewer errors at postintervention (mean = 36.25; 95% CI = 32.5-40.0) than ELL participants (mean = 42.57; 95% CI = 38.8-46.3). EBL participants also demonstrated greater self-awareness and behavioral competency at postintervention than ELL participants (P < .05); group differences were not significant on the remaining secondary outcomes (P > .05), or at the 6-month follow-up assessment. EBL was found to be more effective than ELL for enhancing skills generalization on a task related to training and improving self-awareness and behavioral competency.

  20. Control of human error and comparison of risk levels after corrective action with the SHERPA method in a control room of the petrochemical industry

    Directory of Open Access Journals (Sweden)

    A. Zakerian

    2011-12-01

    Full Text Available Background and aims: Today, in many industries such as the nuclear, military and chemical industries, human errors may result in disaster. Accidents in different parts of the world emphasize this subject; examples include the Chernobyl disaster (1986), the Three Mile Island accident (1979) and the Flixborough explosion (1974). Human error identification, especially in important and intricate systems, is therefore necessary and unavoidable for devising control methods. Methods: This research is a case study performed in the Zagross Methanol Company in Asalouye (South Pars). The walk-through/talk-through method with process experts and control room operators, together with inspection of technical documents, was used for collecting the required information and completing the Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets. Results: Analysis of the SHERPA worksheets indicated 71.25% unacceptable errors, 26.75% undesirable errors, 2% acceptable-with-revision errors and 0% acceptable errors; after corrective action, the predicted risk levels were 0% unacceptable errors, 4.35% undesirable errors, 58.55% acceptable-with-revision errors and 37.1% acceptable errors. Conclusion: These results show that this method is applicable and useful in different industries, especially the chemical industry, for identifying human errors that may lead to accidents.

  1. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    Science.gov (United States)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of model-based control for toroidal plasmas have shown better control performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. Such a model can be obtained empirically through a systematic procedure called system identification, and is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that optimizes the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, namely to estimate the error field in EXTRAP T2R. Two potential estimation methods are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.

  2. A Toroidally Symmetric Plasma Simulation code for design of position and shape control on tokamak plasmas

    International Nuclear Information System (INIS)

    Takase, Haruhiko; Senda, Ikuo

    1999-01-01

    A Toroidally Symmetric Plasma Simulation (TSPS) code has been developed for investigating position and shape control of tokamak plasmas. The code takes into account the analysis of three-dimensional eddy currents on the conducting components around the plasma and of the two-dimensional magneto-hydrodynamic (MHD) equilibrium. It can analyze plasma position and shape control during minor disruptions in which the deformation of the plasma is not negligible. Using ITER (International Thermonuclear Experimental Reactor) parameters, some example calculations are shown in this paper. (author)

  3. Fuzzy Adaptation Algorithms’ Control for Robot Manipulators with Uncertainty Modelling Errors

    Directory of Open Access Journals (Sweden)

    Yongqing Fan

    2018-01-01

    Full Text Available A novel fuzzy control scheme with adaptation algorithms is developed for a robot manipulator system. First, an adjustable parameter is introduced into the fuzzy logic system, with the robot manipulator system with uncertain nonlinear terms acting as the master device and a reference-model dynamic system as the slave robot system. To overcome limitations of conventional fuzzy logic systems, such as the online learning computation burden and the fixed logic structure, the fuzzy logic system is composed with updated parameter laws, forming a new kind of adaptation-algorithm controller. The closed-loop error dynamical system can be stabilized based on Lyapunov analysis, the online learning computation burden is greatly reduced, and fuzzy logic systems with or without fuzzy rules are both suited. Finally, the effectiveness of the proposed approach is shown in a simulation example.

  4. An empirical study on the human error recovery failure probability when using soft controls in NPP advanced MCRs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2014-01-01

    Highlights: • Many researchers have tried to understand the human recovery process or steps. • Modeling the human recovery process is not sufficient to be applied to HRA. • The operating environment of MCRs in NPPs has changed with the adoption of new HSIs. • Recovery failure probability in a soft control operation environment is investigated. • The recovery failure probabilities found here would be important evidence for expert judgment. - Abstract: It is well known that probabilistic safety assessments (PSAs) today consider not just hardware failures and environmental events that can impact upon risk, but also human error contributions. Consequently, the focus of reliability and performance management has been on the prevention of human errors and failures rather than the recovery of human errors. However, the recovery of human errors is as important as the prevention of human errors and failures for the safe operation of nuclear power plants (NPPs). For this reason, many researchers have tried to identify a human recovery process or steps. However, modeling the human recovery process is not sufficient to be applied to human reliability analysis (HRA), which requires human error and recovery probabilities. In this study, therefore, human error recovery failure probabilities based on predefined human error modes were investigated by conducting experiments in an operation mockup of advanced/digital main control rooms (MCRs) in NPPs. To this end, 48 subjects majoring in nuclear engineering participated in the experiments. In the experiments, using a developed accident scenario based on tasks from the standard post trip actions (SPTA), the steam generator tube rupture (SGTR), and predominant soft control tasks, which are derived from the loss of coolant accident (LOCA) and the excess steam demand event (ESDE), all error detection and recovery data based on human error modes were checked with the performance sheet, and the statistical analysis of error recovery/detection was then performed.

  5. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    Science.gov (United States)

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller handling unknown initial errors is addressed for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Different from traditional prescribed performance control (PPC), which requires the initial errors to be known accurately, this paper investigates tracking control without accurate initial errors by exploiting a new performance function. A combined neural back-stepping and minimal learning parameter (MLP) technique is employed to devise a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlights are that the transient performance of the velocity and altitude tracking errors is satisfactory and the computational load of the neural approximation is low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
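    A common form of prescribed performance function, and the error transform that enforces it, can be sketched as follows; the exponential function and parameters below are the standard PPC choices, not the paper's new function for unknown initial errors:

```python
import numpy as np

# Prescribed performance function rho(t) = (rho0 - rho_inf)*exp(-l*t) + rho_inf:
# the tracking error e(t) must stay inside the shrinking funnel (-rho(t), rho(t)).
# The transform eps = atanh(e/rho) blows up near the funnel boundary, which is
# what pushes the controller to keep the error inside the prescribed bounds.
rho0, rho_inf, l = 2.0, 0.1, 1.5

def rho(t):
    return (rho0 - rho_inf) * np.exp(-l * t) + rho_inf

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
e = 0.8 * rho(t) * np.cos(t)                 # a hypothetical error history in the funnel
eps = np.arctanh(e / rho(t))                 # transformed (unconstrained) error
for ti, ei, bi, xi in zip(t, e, rho(t), eps):
    print(f"t={ti:3.1f}  bound=±{bi:5.3f}  e={ei:+6.3f}  eps={xi:+6.3f}")
```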

  6. Gain control network conditions in early sensory coding.

    Directory of Open Access Journals (Sweden)

    Eduardo Serrano

    Full Text Available Gain control is essential for the proper function of any sensory system. However, the precise mechanisms for achieving effective gain control in the brain are unknown. Based on our understanding of the existence and strength of connections in the insect olfactory system, we analyze the conditions that lead to controlled gain in a randomly connected network of excitatory and inhibitory neurons. We consider two scenarios for the variation of input into the system. In the first case, the intensity of the sensory input controls the input currents to a fixed proportion of neurons of the excitatory and inhibitory populations. In the second case, increasing intensity of the sensory stimulus will both, recruit an increasing number of neurons that receive input and change the input current that they receive. Using a mean field approximation for the network activity we derive relationships between the parameters of the network that ensure that the overall level of activity of the excitatory population remains unchanged for increasing intensity of the external stimulation. We find that, first, the main parameters that regulate network gain are the probabilities of connections from the inhibitory population to the excitatory population and of the connections within the inhibitory population. Second, we show that strict gain control is not achievable in a random network in the second case, when the input recruits an increasing number of neurons. Finally, we confirm that the gain control conditions derived from the mean field approximation are valid in simulations of firing rate models and Hodgkin-Huxley conductance based models.

  7. Bio-inspired adaptive feedback error learning architecture for motor control.

    Science.gov (United States)

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller, and we evaluate how the scheme scales for simulated plants with a high number of degrees of freedom (7 DOFs).
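    The adaptive error feedback idea follows the classical feedback-error-learning pattern, in which the feedback command itself trains the feedforward model; below is a minimal single-mass sketch under that assumption (the plant, gains, trajectory and learning rate are all illustrative, and the linear regressor stands in for the LWPR/cerebellar modules):

```python
import numpy as np

dt, m = 0.01, 2.0                 # time step; mass of a single-mass plant m*x'' = u
Kp, Kd, eta = 80.0, 20.0, 0.04    # PD gains and learning rate (illustrative)
w = np.zeros(3)                   # feedforward weights over phi = [x_d, v_d, a_d]
x = v = 0.0
for step in range(20000):
    t = step * dt
    xd = np.sin(t) + 0.5 * np.sin(2 * t)          # desired position
    vd = np.cos(t) + np.cos(2 * t)                # desired velocity
    ad = -np.sin(t) - 2.0 * np.sin(2 * t)         # desired acceleration
    phi = np.array([xd, vd, ad])
    u_fb = Kp * (xd - x) + Kd * (vd - v)          # low-gain feedback controller
    u = w @ phi + u_fb                            # feedforward + feedback command
    w += eta * u_fb * phi * dt                    # FEL: feedback acts as teaching signal
    x, v = x + v * dt, v + (u / m) * dt           # explicit Euler plant update
print("learned weights:", np.round(w, 2), "(ideal inverse model: [0, 0, m] = [0, 0, 2])")
```

    As the feedforward model absorbs the inverse dynamics, the feedback command shrinks, which is what allows the low corrective gains the abstract emphasizes.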

  8. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    To achieve better performance of sensorless control of PMSM, precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy...

  9. Over 10 dB Net Coding Gain Based on 20% Overhead Hard Decision Forward Error Correction in 100G Optical Communication Systems

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Zibar, Darko

    2011-01-01

    We propose a product code with shortened BCH component codes for 100G optical communication systems. Simulation results show that a 10 dB net coding gain is promising at a post-FEC BER of 1E-15.
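    To illustrate the product-code principle with the simplest possible components, here is a toy code using single-parity-check rows and columns instead of the shortened BCH components used in the paper; a single bit error is located at the intersection of the failing row and column checks:

```python
import numpy as np

def encode(data):                        # data: 4x4 bits -> 5x5 codeword
    cw = np.zeros((5, 5), dtype=int)
    cw[:4, :4] = data
    cw[:4, 4] = data.sum(axis=1) % 2     # row parities
    cw[4, :4] = data.sum(axis=0) % 2     # column parities
    cw[4, 4] = data.sum() % 2            # parity on parities
    return cw

def correct(cw):
    rows = cw.sum(axis=1) % 2            # nonzero -> that row fails its check
    cols = cw.sum(axis=0) % 2
    if rows.any() and cols.any():
        cw[np.argmax(rows), np.argmax(cols)] ^= 1   # flip the intersecting bit
    return cw

rng = np.random.default_rng(3)
data = rng.integers(0, 2, size=(4, 4))
cw = encode(data)
cw[2, 1] ^= 1                            # inject a single bit error
print("recovered:", np.array_equal(correct(cw.copy())[:4, :4], data))
```

    Stronger component codes such as shortened BCH let each row and column correct multiple bit errors, and iterating between row and column decoding is what yields the large net coding gains reported.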

  10. Modelling of the Rod Control System in the coupled code RELAP5-QUABOX/CUBBOX

    International Nuclear Information System (INIS)

    Bencik, V.; Feretic, D.; Grgic, D.

    1999-01-01

    There is a general agreement that for many light water reactor transient calculations, it is necessary to use a multidimensional neutron kinetics model coupled to sophisticated thermal-hydraulic models in order to obtain satisfactory results. These calculations are needed for a variety of applications in licensing safety analyses, probabilistic risk assessment, operational support, and training. At FER, Zagreb, a coupling of the 3D neutronics code QUABOX/CUBBOX and the system code RELAP5 was performed. In this paper the Rod Control System model in the RELAP5 part of the coupled code is presented. The model was first tested by calculating a reactor trip from full power for NPP Krsko. Results of the 3D neutronics calculation obtained with the coupled QUABOX/CUBBOX code were compared with the point kinetics calculation performed with the stand-alone RELAP5 code. (author)

  11. A Bayesian sequential design using alpha spending function to control type I error.

    Science.gov (United States)

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is least likely to be rejected at early stages of the trial. Finally, we show that adding a stop-for-futility step to the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
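    For reference, the Lan-DeMets O'Brien-Fleming-type spending function can be evaluated directly; the sketch below shows how little alpha it spends at early looks (the look times are illustrative, and computing the actual critical values additionally requires the joint distribution of the interim test statistics, not shown):

```python
from scipy.stats import norm

# O'Brien-Fleming-type alpha spending:
#   alpha*(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t))),
# where t is the information fraction at an interim look.
def obf_spent(t, alpha=0.05):
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t**0.5))

looks = [0.25, 0.5, 0.75, 1.0]
prev = 0.0
for t in looks:
    spent = obf_spent(t)
    print(f"t={t:4.2f}: cumulative alpha = {spent:.5f}, incremental = {spent - prev:.5f}")
    prev = spent
```

    At t = 1 the cumulative spend equals the full 0.05, while at t = 0.25 it is below 1e-4, which is the conservatism at early stages noted in the abstract.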

  12. Controlling type I error rate for fast track drug development programmes.

    Science.gov (United States)

    Shih, Weichung J; Ouyang, Peter; Quan, Hui; Lin, Yong; Michiels, Bart; Bijnens, Luc

    2003-03-15

    The U.S. Food and Drug Administration (FDA) Modernization Act of 1997 has a Section (No. 112) entitled 'Expediting Study and Approval of Fast Track Drugs' (the Act). In 1998, the FDA issued a 'Guidance for Industry: the Fast Track Drug Development Programs' (the FTDD programmes) to meet the requirement of the Act. The purpose of FTDD programmes is to 'facilitate the development and expedite the review of new drugs that are intended to treat serious or life-threatening conditions and that demonstrate the potential to address unmet medical needs'. Since then, by utilizing the Fast Track Act and the FTDD programmes, many health products have reached patients suffering from AIDS, cancer, osteoporosis, and many other diseases sooner. In the meantime, several scientific issues have surfaced when following the FTDD programmes. In this paper we discuss the concept of two kinds of type I errors, namely, the 'conditional approval' and the 'final approval' type I errors, and propose statistical methods for controlling them in a new drug submission process. Copyright 2003 John Wiley & Sons, Ltd.

  13. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Carl Stern; Martin Lee

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models.

  15. Measurement of Compression Factor and Error Sensitivity Factor of Facsimile Coding Techniques Submitted to the CCITT By Great Britain and the Federal Republic of Germany

    Science.gov (United States)

    1979-10-01


  16. Knowledge-based errors in anesthesia: a paired, controlled trial of learning and retention.

    Science.gov (United States)

    Goldhaber-Fiebert, Sara N; Goldhaber-Fiebert, Jeremy D; Rosow, Carl E

    2009-01-01

    Optimizing patient safety by improving the training of physicians is a major challenge of medical education. In this pilot study, we hypothesized that a brief lecture, targeted to rare but potentially dangerous situations, could improve anesthesia practitioners' knowledge levels with significant retention of learning at six months. In this paired controlled trial, anesthesia residents and attending physicians at Massachusetts General Hospital took the same 14-question multiple choice examination three times: at baseline, immediately after a brief lecture, and six months later. The lecture covered material on seven "intervention" questions; the remaining seven were "control" questions. The authors measured immediate knowledge acquisition, defined as the change in percentage of correct answers on intervention questions between baseline and post-lecture, and measured learning retention as the difference between baseline and six months. Both measurements were corrected for change in performance on control questions. Fifty of the 89 subjects completed all three examinations. The post-lecture increase in percentage of questions answered correctly, adjusted for control, was 22.2% [95% confidence interval (CI) 16.0-28.4%], with significant retention of learning at six months. Exposing residents or other practitioners to this type of inexpensive teaching intervention may help them to avoid preventable uncommon errors that are rooted in unfamiliarity with the situation or the equipment. The methods used for this study may also be applied to compare the effect of various other teaching modalities while, at the same time, preserving participant anonymity and making adjustments for ongoing learning.

  17. The impact of a brief mindfulness meditation intervention on cognitive control and error-related performance monitoring

    Directory of Open Access Journals (Sweden)

    Michael J Larson

    2013-07-01

    Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in activity of the anterior cingulate cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state versus trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify the relationship between mindfulness and performance monitoring.

  18. Working memory and inhibitory control across the life span: Intrusion errors in the Reading Span Test.

    Science.gov (United States)

    Robert, Christelle; Borella, Erika; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2009-04-01

    The aim of this study was to examine to what extent inhibitory control and working memory capacity are related across the life span. Intrusion errors committed by children and younger and older adults were investigated in two versions of the Reading Span Test. In Experiment 1, a mixed Reading Span Test with items of various list lengths was administered. Older adults and children recalled fewer correct words and produced more intrusions than did young adults. Also, age-related differences were found in the type of intrusions committed. In Experiment 2, an adaptive Reading Span Test was administered, in which the list length of items was adapted to each individual's working memory capacity. Age groups differed neither on correct recall nor on the rate of intrusions, but they differed on the type of intrusions. Altogether, these findings indicate that the availability of attentional resources influences the efficiency of inhibition across the life span.

  19. EXTRAN: A computer code for estimating concentrations of toxic substances at control room air intakes

    International Nuclear Information System (INIS)

    Ramsdell, J.V.

    1991-03-01

    This report presents the NRC staff with a tool for assessing the potential effects of accidental releases of radioactive materials and toxic substances on habitability of nuclear facility control rooms. The tool is a computer code that estimates concentrations at nuclear facility control room air intakes given information about the release and the environmental conditions. The name of the computer code is EXTRAN. EXTRAN combines procedures for estimating the amount of airborne material, a Gaussian puff dispersion model, and the most recent algorithms for estimating diffusion coefficients in building wakes. It is a modular computer code, written in FORTRAN-77, that runs on personal computers. It uses a math coprocessor, if present, but does not require one. Code output may be directed to a printer or disk files. 25 refs., 8 figs., 4 tabs
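
    The dispersion kernel at the heart of such a model is compact. The sketch below evaluates the textbook Gaussian puff equation; it is only the kernel, not EXTRAN's FORTRAN implementation, and the dispersion coefficients in the example are placeholders (EXTRAN derives them from building-wake algorithms).

        import math

        def gaussian_puff(q, x, y, z, sx, sy, sz):
            """Concentration at (x, y, z) relative to the puff centre for a puff
            of mass q (kg) with dispersion coefficients sx, sy, sz (m)."""
            norm = (2.0 * math.pi) ** 1.5 * sx * sy * sz
            return (q / norm) * math.exp(-0.5 * ((x / sx) ** 2
                                                 + (y / sy) ** 2
                                                 + (z / sz) ** 2))

        # Example: 1 kg puff, air intake 10 m downwind of the puff centre,
        # assumed dispersion coefficients of 5 m in each direction.
        print(gaussian_puff(1.0, 10.0, 0.0, 0.0, 5.0, 5.0, 5.0), "kg/m^3")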

  20. Adaptive finite element analysis of incompressible viscous flow using posteriori error estimation and control of node density distribution

    International Nuclear Information System (INIS)

    Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi

    1995-01-01

    The adaptive finite element method based on a posteriori error estimation is known to be a powerful technique for analyzing practical engineering problems, since it removes the guesswork from mesh subdivision and gives high accuracy at relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on control of the node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, degrees of freedom, and solution accuracy of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as driven cavity flows at various Reynolds numbers and flow around a cylinder have shown the very high performance of the proposed adaptive procedure. (author)

  1. The symbol coding language for the BUTs processor of in-core reactor control systems

    International Nuclear Information System (INIS)

    Vorob'ev, D.M.; Golovanov, M.N.; Levin, G.L.; Parfenova, T.K.; Filatov, V.P.

    1978-01-01

    A symbolic coding language developed to automate the preparation of programs for in-core control systems is described. The systems follow the ideology of the CAMAC-VECTOR system and include the BUTs-20 processor. The symbolic coding language is a programming language of the ASSEMBLER type. Operators of instructions and pseudo-instructions, the rules for reading in the source program text, and operator record formats are considered.

  2. SU-F-T-241: Reduction in Planning Errors Via a Process Control Developed Using the Eclipse Scripting API

    Energy Technology Data Exchange (ETDEWEB)

    Barbee, D; McCarthy, A; Galavis, P; Xu, A [NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

    Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The PlanCheck script is currently capable of checking the following numbers of potential failure modes per category: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduced error rates and improved efficiency. Future work includes initiating a full FMEA for the planning workflow and extending categories to include additional checks outside of ESAPI via Aria.

  3. Vision Servo Motion Control and Error Analysis of a Coplanar XXY Stage for Image Alignment Motion

    Directory of Open Access Journals (Sweden)

    Hau-Wei Lee

    2013-01-01

    In recent years, with the demand for smart mobile phones with touch panels, demand for alignment/compensation systems using alignment stages with vision servo control has also increased. Because the traditional stacked-type XYθ stage is heavy and accumulates assembly errors, it has gradually been replaced by the coplanar stage, characterized by three actuators on the same plane with three degrees of freedom. The simplest image alignment mode uses two cameras as the equipment for feedback control, with the work piece placed on the working stage. The work piece is usually engraved/marked. After the cameras capture images and the position of the mark in the camera is obtained by image processing, the mark can be moved to the designated position in the camera by moving the stage and using an alignment algorithm. This study used a coplanar XXY stage with 1 μm positioning resolution. Because the resolution of the camera is about 3.75 μm per pixel, a subpixel technology is used, and the linear and angular alignment repeatability of the alignment system can achieve 1 μm and 5 arcsec, respectively. The visual servo motion control for alignment motion is completed within 1 second using the coplanar XXY stage.

  4. Controlled dense coding for continuous variables using three-particle entangled states

    CERN Document Server

    Jing Zhang; Kun Chi Peng; 10.1103/PhysRevA.66.032318

    2002-01-01

    A simple scheme to realize quantum controlled dense coding with bright tripartite entangled light generated from nondegenerate optical parametric amplifiers is proposed in this paper. The quantum channel between Alice and Bob is controlled by Claire. As a local oscillator and balanced homodyne detector are not needed, the proposed protocol is easy to realize experimentally. (15 refs)

  5. Controller routines for the DECsystem-10 with application to a tandem-mirror plasma code

    International Nuclear Information System (INIS)

    Faul, D.R.; Devoto, R.S.

    1979-01-01

    FORTRAN-callable subroutines have been written to enable controller--controllee interaction on the LLL DECsystem-10. These subroutines have been used to construct a controller (XTCTMR) for a tandem-mirror physics code (CTCTMR). A description of the subroutines and their use is presented. Also, sample results are given

  6. Auto Code Generation for Simulink-Based Attitude Determination Control System

    Science.gov (United States)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) to be used on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used for carrying out hardware-in-the-loop testing of satellite components in a convenient manner with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can bring new complications into the simulation. The execution order of these models can change based on these modifications. Great care must be taken in order to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, it can be said that the process is a success, since all the output requirements are met. Based on these results, it can be argued that this generated C code can be effectively used by any desired platform as long as it follows the specific memory requirements established in the Simulink model.

  7. Performance Evaluation of HARQ Technique with UMTS Turbo Code

    Directory of Open Access Journals (Sweden)

    S. S. Brkić

    2011-11-01

    The hybrid automatic repeat request (HARQ) technique represents an error control principle that combines an error correcting code and an automatic repeat request (ARQ) procedure within the same transmission system. In this paper, using a Monte Carlo simulation process, the characteristics of the HARQ technique are determined for the case of the Universal Mobile Telecommunication System (UMTS) turbo code.
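
    The essence of the technique is easy to simulate. The sketch below estimates throughput for a plain ARQ scheme in which every (re)transmission fails independently and error detection is assumed perfect; it is a simplified stand-in for the paper's setup, since a true HARQ receiver would combine retransmissions through the turbo decoder and lower the failure probability on each successive attempt.

        import random

        def arq_throughput(p_block_error, n_packets=100_000, max_tx=8, seed=1):
            """Monte Carlo estimate of delivered packets per transmission."""
            rng = random.Random(seed)
            transmissions = delivered = 0
            for _ in range(n_packets):
                for _attempt in range(max_tx):
                    transmissions += 1
                    if rng.random() > p_block_error:   # transmission succeeded
                        delivered += 1
                        break
            return delivered / transmissions

        print(arq_throughput(0.3))   # close to 0.7 for independent attempts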

  8. The Institute for Safe Medication Practices and Poison Control Centers: Collaborating to Prevent Medication Errors and Unintentional Poisonings.

    Science.gov (United States)

    Vaida, Allen J

    2015-06-01

    This article provides an overview on the Institute for Safe Medication Practices (ISMP), the only independent nonprofit organization in the USA devoted to the prevention of medication errors. ISMP developed the national Medication Errors Reporting Program (MERP) and investigates and analyzes errors in order to formulate recommendations to prevent further occurrences. ISMP works closely with the US Food and Drug Administration (FDA), drug manufacturers, professional organizations, and others to promote changes in package design, practice standards, and healthcare practitioner and consumer education. By collaborating with ISMP to share and disseminate information, Poison Control centers, emergency departments, and toxicologists can help decrease unintentional and accidental poisonings.

  9. Resisting attraction: Individual differences in executive control are associated with subject-verb agreement errors in production.

    Science.gov (United States)

    Veenstra, Alma; Antoniou, Kyriakos; Katsos, Napoleon; Kissine, Mikhail

    2018-04-19

    We propose that attraction errors in agreement production (e.g., the key to the cabinets are missing) are related to two components of executive control: working memory and inhibitory control. We tested 138 children aged 10 to 12, an age when children are expected to produce high rates of errors. To increase the potential of individual variation in executive control skills, participants came from monolingual, bilingual, and bidialectal language backgrounds. Attraction errors were elicited with a picture description task in Dutch and executive control was measured with a digit span task, Corsi blocks task, switching task, and attentional networks task. Overall, higher rates of attraction errors were negatively associated with higher verbal working memory and, independently, with higher inhibitory control. To our knowledge, this is the first demonstration of the role of both working memory and inhibitory control in attraction errors in production. Implications for memory- and grammar-based models are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. Training and support to improve ICD coding quality: A controlled before-and-after impact evaluation

    Directory of Open Access Journals (Sweden)

    Robin Dyers

    2017-06-01

    Background. The proposed National Health Insurance policy for South Africa (SA) requires hospitals to maintain high-quality International Statistical Classification of Diseases (ICD) codes for patient records. While considerable strides had been made to improve ICD coding coverage by digitising the discharge process in the Western Cape Province, further intervention was required to improve data quality. The aim of this controlled before-and-after study was to evaluate the impact of a clinician training and support initiative to improve ICD coding quality. Objective. To compare ICD coding quality between two central hospitals in the Western Cape before and after the implementation of a training and support initiative for clinicians at one of the sites. Methods. The difference in differences in data quality between the intervention site and the control site was calculated. Multiple logistic regression was also used to determine the odds of data quality improvement after the intervention and to adjust for potential differences between the groups. Results. The intervention had a positive impact of 38.0% on ICD coding completeness over and above changes that occurred at the control site. Relative to the baseline, patient records at the intervention site had a 6.6 (95% confidence interval 3.5 - 16.2) adjusted odds ratio of having a complete set of ICD codes for an admission episode after the introduction of the training and support package. The findings on impact on ICD coding accuracy were not significant. Conclusion. There is sufficient pragmatic evidence that a training and support package will have a considerable positive impact on ICD coding completeness in the SA setting.

  11. Training and support to improve ICD coding quality: A controlled before-and-after impact evaluation.

    Science.gov (United States)

    Dyers, Robin; Ward, Grant; Du Plooy, Shane; Fourie, Stephanus; Evans, Juliet; Mahomed, Hassan

    2017-05-24

    The proposed National Health Insurance policy for South Africa (SA) requires hospitals to maintain high-quality International Statistical Classification of Diseases (ICD) codes for patient records. While considerable strides had been made to improve ICD coding coverage by digitising the discharge process in the Western Cape Province, further intervention was required to improve data quality. The aim of this controlled before-and-after study was to evaluate the impact of a clinician training and support initiative to improve ICD coding quality. To compare ICD coding quality between two central hospitals in the Western Cape before and after the implementation of a training and support initiative for clinicians at one of the sites. The difference in differences in data quality between the intervention site and the control site was calculated. Multiple logistic regression was also used to determine the odds of data quality improvement after the intervention and to adjust for potential differences between the groups. The intervention had a positive impact of 38.0% on ICD coding completeness over and above changes that occurred at the control site. Relative to the baseline, patient records at the intervention site had a 6.6 (95% confidence interval 3.5 - 16.2) adjusted odds ratio of having a complete set of ICD codes for an admission episode after the introduction of the training and support package. The findings on impact on ICD coding accuracy were not significant. There is sufficient pragmatic evidence that a training and support package will have a considerable positive impact on ICD coding completeness in the SA setting.

  12. CRA Control Logic Realization for MARS 1-D/MASTER coupled Code System

    International Nuclear Information System (INIS)

    Han, Soonkyoo; Jeong, Sungsu; Lee, Suyong

    2013-01-01

    Both the Multi-dimensional Analysis of Reactor Safety (MARS) code and the Multi-purpose Analyzer for Static and Transient Effects of Reactors (MASTER) code, developed by the Korea Atomic Energy Research Institute (KAERI), can be coupled for various simulations of nuclear reactor systems. In the MARS 1-D/MASTER coupled code system, MARS is used for the thermal-hydraulic calculations and MASTER is used for reactor core calculations. In this coupled code system, the movements of the control rod assembly (CRA) are controlled by MASTER. MASTER, however, provides only a CRA control function that is input by the user as a time-dependent table. For simulations of sequential CRA insertion or withdrawal, as opposed to ejection or drop, this CRA control function is not sufficient to demonstrate the process of CRA movements. Therefore an alternative way is proposed for realizing the CRA control logic in MASTER. In this study, the manually realized CRA control logic was applied by inputting the time-dependent CRA positions into MASTER, and the points of CRA movement were decided by iteration. At the end of the CRA movement, the reactor power difference and the average coolant temperature difference did not fall outside their dead bands, which means that this manually realized CRA control logic works appropriately within the dead bands of the logic. The proper CRA movement points could therefore be decided using this manually realized CRA control logic. Based on these results, it is verified that the proper CRA movement points can be chosen by using the proposed CRA control logic. In conclusion, it is expected that this proposed CRA control logic in MASTER can be used to properly demonstrate processes involving sequential CRA movements in the MARS 1-D/MASTER coupled code system.

  13. Integration of error tolerance into the design of control rooms of nuclear power plants

    International Nuclear Information System (INIS)

    Sepanloo, Kamran

    1998-08-01

    Many failures of complex technological systems have been attributed to human error. Today, based on extensive research on the role of the human element in technological systems, it is known that human error cannot be totally eliminated in modern, flexible, or changing work environments by conventional design strategies (e.g. defence in depth) or by better instructions, nor should it be. Instead, the operators' ability to explore degrees of freedom should be supported, and means for recovering from the effects of errors should be included. This calls for innovative, error-tolerant design of technological systems. Integration of the error tolerance concept into the design, construction, startup, and operation of nuclear power plants provides an effective means of reducing human error occurrence during all stages of the plant's life and therefore leads to considerable enhancement of plant safety.

  14. Error-enhancing robot therapy to induce motor control improvement in childhood onset primary dystonia

    Directory of Open Access Journals (Sweden)

    Casellato Claudia

    2012-07-01

    Background: Robot-generated deviating forces during multijoint reaching movements have been applied to investigate motor control and to tune neuromotor adaptation. Can the application of force to limbs improve motor learning? In this framework, the response to altered dynamic environments of children affected by primary dystonia has never been studied. Methods: As a preliminary pilot study, eleven children with primary dystonia and eleven age-matched healthy control subjects were asked to perform upper limb movements, triangle-reaching (three directions) and circle-writing, using a haptic robot interacting with ad hoc developed task-specific visual interfaces. Three dynamic conditions were provided: null additive external force (A), constant disturbing force (B), and deactivation of the additive external force again (C). The path length for each trial was computed from the recorded position data and interaction events. Results: The results show that the disturbing force affects the movement outcomes significantly in healthy but not in dystonic subjects, who are already compromised in the reference condition: the external alteration uncalibrates the healthy sensorimotor system, while the dystonic one is already strongly uncalibrated. The lack of systematic compensation for perturbation effects during condition B is reflected in the absence of after-effects in condition C, which would be the evidence that the CNS generates a prediction of the perturbing forces using an internal model of the environment. The most promising finding is that in the dystonic population the altered dynamic exposure seems to induce a subsequent improvement, i.e. a beneficial after-effect in terms of optimal path control, compared with the corresponding reference movement outcome. Conclusions: The short-time error-enhancing training in dystonia could represent an effective approach for motor performance improvement, since the exposure to controlled dynamic alterations induces a refinement of motor control.

  15. Detecting and correcting partial errors: Evidence for efficient control without conscious access.

    Science.gov (United States)

    Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B

    2014-09-01

    Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.

  16. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  17. Study on a new framework of Human Reliability Analysis to evaluate soft control execution error in advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operating environment of MCRs in NPPs has changed with the adoption of new HSIs. • Operating actions in NPP advanced MCRs are performed by soft control. • A new HRA framework should be considered in the HRA for advanced MCRs. • An HRA framework for evaluating soft control execution human error is suggested. • The suggested method will be helpful for analyzing human reliability in advanced MCRs. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). However, the operating environment of MCRs in NPPs has changed with the adoption of new Human-System Interfaces (HSIs) that are based on computer-based technologies. MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in NPP advanced MCRs are performed by soft control. Due to the differences in interfaces between soft control and hardwired conventional control, different Human Error Probabilities (HEPs) and a new HRA framework should be considered in the HRA for advanced MCRs. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis, and the literature on widely accepted human error taxonomies is reviewed. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an empirical analysis of human error and error recovery considering soft controls under an advanced MCR mockup is carried out to collect human error data.

  18. Automated approach to quantitative error analysis. [BEAPAC-1T and -3T, for testing reliability of computer codes, in FORTRAN for IBM 370/195 and CDC 6400

    Energy Technology Data Exchange (ETDEWEB)

    Bareiss, E.H.

    1977-04-01

    A method is described for obtaining a quantitative measure of the robustness of a given neutron transport theory code for coarse-network calculations. A code that performs this task automatically and at only nominal cost is described. This code also generates user-oriented benchmark problems which exhibit the analytic behavior at interfaces. 5 figures, 1 table.

  19. Value-based HR practices, i-deals and clinical error control with CSR as a moderator.

    Science.gov (United States)

    Luu, Tuan; Rowley, Chris; Siengthai, Sununta; Thanh Thao, Vo

    2017-05-08

    Purpose: Notwithstanding the rising magnitude of system factors in patient safety improvement, "human factors" such as idiosyncratic deals (i-deals), which also contribute to the adjustment of system deficiencies, should not be neglected. The purpose of this paper is to investigate the role of value-based HR practices in catalyzing i-deals, which then influence clinical error control. The research further examines the moderating role of corporate social responsibility (CSR) on the effect of value-based HR practices on i-deals. Design/methodology/approach: The data were collected from middle-level clinicians from hospitals in the Vietnam context. Findings: The research results confirmed the effect chain from value-based HR practices through i-deals to clinical error control with CSR as a moderator. Originality/value: The HRM literature is expanded through enlisting i-deals and clinical error control as the outcomes of HR practices.

  20. Robust a Posteriori Error Control and Adaptivity for Multiscale, Multinumerics, and Mortar Coupling

    KAUST Repository

    Pencheva, Gergina V.; Vohralí k, Martin; Wheeler, Mary F.; Wildey, Tim

    2013-01-01

    -order polynomials are used on the mortar interface mesh. We derive several fully computable a posteriori error estimates which deliver a guaranteed upper bound on the error measured in the energy norm. Our estimates are also locally efficient and one of them

  1. Power Control and Coding Formulation for State Estimation with Wireless Sensors

    DEFF Research Database (Denmark)

    Quevedo, Daniel; Østergaard, Jan; Ahlen, Anders

    2014-01-01

    Technological advances have made wireless sensors cheap and reliable enough to be brought into industrial use. A major challenge arises from the fact that wireless channels introduce random packet dropouts. Power control and coding are key enabling technologies in wireless communications to ensure efficient communication. In this paper, we examine the role of power control and coding for Kalman filtering over wireless correlated channels. Two estimation architectures are considered: initially, the sensors send their measurements directly to a single gateway (GW); next, wireless relay nodes provide additional links. The GW decides on the coding scheme and the transmitter power levels of the wireless nodes. The decision process is carried out online, in combination with predictive power control, and adapts to varying channel conditions to improve the tradeoff between state estimation accuracy and energy expenditure.
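
    The estimation side of this setup can be sketched in a few lines: a Kalman filter that skips the measurement update whenever the packet carrying the measurement is dropped. This toy scalar version (assumed model parameters, Bernoulli dropouts) only illustrates why the dropout process matters; the paper's contribution, the online power/coding optimisation at the GW, is not reproduced here.

        import numpy as np

        def kalman_with_dropouts(y, received, a=0.95, c=1.0, q=0.1, r=0.5):
            """Scalar Kalman filter; on a dropout only the prediction step runs."""
            xhat, p, out = 0.0, 1.0, []
            for yk, got_it in zip(y, received):
                xhat, p = a * xhat, a * a * p + q          # time update
                if got_it:                                  # measurement update
                    k = p * c / (c * c * p + r)
                    xhat += k * (yk - c * xhat)
                    p *= 1.0 - k * c
                out.append(xhat)
            return np.array(out)

        rng = np.random.default_rng(0)
        x, ys, received = 0.0, [], []
        for _ in range(200):
            x = 0.95 * x + rng.normal(scale=0.1 ** 0.5)
            ys.append(x + rng.normal(scale=0.5 ** 0.5))
            received.append(rng.random() > 0.2)             # 20% packet loss (assumed)
        print(kalman_with_dropouts(ys, received)[-5:])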

  2. Project of decree relative to the licensing and statement system of nuclear activities and to their control and bearing various modifications of the public health code and working code; Projet de decret relatif au regime d'autorisation et de declaration des activites nucleaires et a leur controle et portant diverses modifications du code de la sante publique et du code du travail

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    This decree concerns the control of high-level sealed radioactive sources and orphan sources. Its objectives are to introduce administrative simplification, especially in the radiation source licensing and statement system; to reinforce the control measures planned by the public health code and by the employment code; and to add precision and complements to the wording of several existing arrangements. (N.C.)

  3. Advantages of Westinghouse BWR control rod drop accidents methodology utilizing integrated POLCA-T code

    International Nuclear Information System (INIS)

    Panayotov, Dobromir

    2008-01-01

    The paper focuses on the activities pursued by Westinghouse in the development and licensing of the POLCA-T code Control Rod Drop Accident (CRDA) methodology. The comprehensive CRDA methodology, which utilizes the PHOENIX4/POLCA7/POLCA-T calculation chain, provides for a complete cycle-specific analysis. The methodology consists of determining the candidate control rods (CRs) that could cause a significant reactivity excursion if dropped at any point in the fuel cycle, selecting limiting initial conditions for the CRDA transient simulation, and the transient simulation itself. The Westinghouse methodology utilizes state-of-the-art methods. Unnecessary conservatisms in the methodology have been avoided to allow accurate prediction of the margin to design bases. This is mainly achieved by using the POLCA-T code for dynamic CRDA evaluations. The code belongs to the same calculation chain that is used for core design; thus the very same reactor, core, cycle, and fuel database is used. This also reduces the uncertainties of the input data and parameters that determine the energy deposition in the fuel. Uncertainty treatment, very selective use of conservatisms, selection of the initial conditions for limiting case analyses, and incorporation of licensed fuel performance code models into the POLCA-T code are among the means of performing realistic CRDA transient analyses. (author)

  4. 16-bit error detection and correction (EDAC) controller design using FPGA for critical memory applications

    International Nuclear Information System (INIS)

    Misra, M.K.; Sridhar, N.; Krishnakumar, B.; Ilango Sambasivan, S.

    2002-01-01

    Full text: Complex electronic systems require the utmost reliability; especially when the storage and retrieval of critical data demand faultless operation, the system designer must strive for the highest reliability possible and expend extra effort to achieve it. Fortunately, not all systems must operate with these ultra-reliability requirements. The majority of systems operate in an area where system failure is not hazardous. But applications like nuclear reactors, medical devices, and avionics are areas where system failure may have harsh consequences. High-density memories generate errors in their stored data due to external disturbances like power supply surges, system noise, natural radiation, etc. These errors are called soft errors or transient errors, since they do not cause permanent damage to the memory cell. Hard errors may also occur on system memory boards. These hard errors occur if one RAM component or RAM cell fails and is stuck at either 0 or 1. Although less frequent, hard errors may cause a complete system failure. These are the major problems associated with memories.
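
    The abstract stops before describing the controller itself, but the arithmetic behind a 16-bit EDAC is classical. The sketch below is a hypothetical software model, not the authors' FPGA design: a Hamming(21,16) code extended with an overall parity bit, giving a (22,16) SEC-DED word that corrects any single-bit error and detects double-bit errors.

        def _data_positions():
            # Positions 1..21 that are not powers of two carry the 16 data bits.
            return [p for p in range(1, 22) if p & (p - 1)]

        def secded_encode(data):
            assert 0 <= data < 1 << 16
            w = [0] * 22                      # w[1..21] Hamming word, w[0] overall parity
            for i, pos in enumerate(_data_positions()):
                w[pos] = (data >> i) & 1
            for p in (1, 2, 4, 8, 16):        # parity bit p covers positions with bit p set
                w[p] = sum(w[i] for i in range(1, 22) if i & p) & 1
            w[0] = sum(w) & 1                 # make the parity of the whole word even
            return sum(b << i for i, b in enumerate(w))

        def secded_decode(code):
            """Return (data, status); correct single errors, flag double errors."""
            w = [(code >> i) & 1 for i in range(22)]
            syndrome = 0
            for i in range(1, 22):
                if w[i]:
                    syndrome ^= i             # XOR of set-bit positions locates an error
            parity_fail = sum(w) & 1
            if syndrome and parity_fail:
                w[syndrome] ^= 1
                status = "single error corrected"
            elif syndrome:
                return None, "double error detected"
            else:
                status = "parity bit corrected" if parity_fail else "ok"
            data = sum(w[pos] << i for i, pos in enumerate(_data_positions()))
            return data, status

        word = secded_encode(0xBEEF)
        assert secded_decode(word ^ (1 << 7)) == (0xBEEF, "single error corrected")
        assert secded_decode(word ^ 0b101)[1] == "double error detected"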

  5. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  6. 27 CFR 73.12 - What security controls must I use for identification codes and passwords?

    Science.gov (United States)

    2010-04-01

    ... Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY (CONTINUED) PROCEDURES AND PRACTICES ELECTRONIC SIGNATURES; ELECTRONIC SUBMISSION OF FORMS Electronic Signatures § 73.12 What security controls must I use for identification codes and passwords? If you use electronic signatures based upon use...

  7. Rate Control for Network-Coded Multipath Relaying with Time-Varying Connectivity

    Science.gov (United States)

    2010-12-10

    Brooke Shrader, Armen Babikyan, Nathaniel M. Jones, Thomas H. Shake, and Andrew P. Worthen, MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02420. (Only the report documentation page was recovered; no abstract is available.)

  8. Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback.

    Directory of Open Access Journals (Sweden)

    Ing-Shiou Hwang

    Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and the force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate the visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13-35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and the global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size in visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band.

  9. Quality control of the packet of RELAP5/MOD2 code

    International Nuclear Information System (INIS)

    Pomier Baez, L.E.

    1993-01-01

    This work expounds the methodology that should be used to perform quality control of the input data set of the RELAP5 calculation code. With this control method, high reliability is guaranteed for the calculation model established to perform safety thermal-hydraulic analyses with the help of RELAP5. This makes possible complex simulation studies of a nuclear power plant with the required quality.

  10. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  11. THE OPTIMAL CONTROL IN THE MODEL OF NETWORK SECURITY FROM MALICIOUS CODE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    The paper deals with a mathematical model of network security. The model is described in terms of nonlinear optimal control. As the quality criterion of the control problem, the cost of the total damage inflicted by the malicious code is chosen, under the additional restriction that the number of recovered nodes is maximized. The Pontryagin maximum principle for construction of the optimal decisions is formulated. The number of switching points of the optimal control is found. The explicit form of the optimal control is given using the Lagrange multipliers method.
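
    Models of this kind reduce, in their simplest form, to epidemic dynamics with a cleanup-effort control. The sketch below simulates a toy SIR-type malware model under a bang-bang control with a single switching point; every parameter is hypothetical, and the paper derives the switching structure from the Pontryagin maximum principle rather than fixing it in advance.

        def simulate(beta=0.4, gamma=0.1, u_max=0.3, t_switch=20.0,
                     t_end=60.0, dt=0.01, i0=0.01):
            """Integrate S/I/R node fractions; 'damage' ~ integral of infected."""
            s, i, r, damage = 1.0 - i0, i0, 0.0, 0.0
            for k in range(int(t_end / dt)):
                u = u_max if k * dt < t_switch else 0.0     # bang-bang cleanup effort
                ds = -beta * s * i
                di = beta * s * i - (gamma + u) * i
                dr = (gamma + u) * i
                damage += i * dt
                s, i, r = s + ds * dt, i + di * dt, r + dr * dt
            return damage, r

        print(simulate())          # total damage and final recovered fraction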

  12. UNSPEC: revisited (semaphore code)

    International Nuclear Information System (INIS)

    Neifert, R.D.

    1981-01-01

    The UNSPEC code is used to solve the problem of unfolding an observed x-ray spectrum given the response matrix of the measuring system and the measured signal values. UNSPEC uses an iterative technique to solve the unfold problem. Due to experimental errors in the measured signal values and/or computer round-off errors, discontinuities and oscillatory behavior may occur in the iterated spectrum. These can be suppressed by smoothing the results after each iteration. Input/output options and control cards are explained; sample input and output are provided.
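
    The abstract does not spell out UNSPEC's iteration, so the sketch below shows a generic multiplicative (Gold-type) unfolding step with a three-point smoothing pass after each iteration, which is the kind of damping the abstract describes for suppressing discontinuities and oscillations.

        import numpy as np

        def iterative_unfold(R, s, n_iter=50):
            """R: response matrix (n_signals x n_bins); s: measured signals."""
            x = np.full(R.shape[1], s.mean() / R.sum(axis=0).mean())  # flat start
            for _ in range(n_iter):
                denom = R.T @ (R @ x)
                x = x * (R.T @ s) / np.where(denom > 0, denom, 1e-30)
                x = np.convolve(x, [0.25, 0.5, 0.25], mode="same")    # smooth
            return x

        rng = np.random.default_rng(0)
        true_spectrum = np.exp(-np.linspace(0.0, 3.0, 40))
        R = np.maximum(0.0, rng.normal(1.0, 0.2, (60, 40)))  # hypothetical response
        print(iterative_unfold(R, R @ true_spectrum)[:5])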

  13. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication and a network is easily overflown with a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
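
    The control law can be pictured as a discretized activator-inhibitor system running across the camera nodes. The sketch below steps a Gray-Scott reaction-diffusion system on a one-dimensional chain of nodes and maps the resulting pattern to a per-node coding rate; the model family matches the paper's description, but the specific equations, parameters, and rate mapping here are assumptions.

        import numpy as np

        def rd_step(u, v, du=0.16, dv=0.08, f=0.04, k=0.06, dt=1.0):
            """One explicit Gray-Scott step on a periodic 1-D chain of nodes."""
            lap = lambda a: np.roll(a, 1) + np.roll(a, -1) - 2 * a
            uvv = u * v * v
            return (u + dt * (du * lap(u) - uvv + f * (1.0 - u)),
                    v + dt * (dv * lap(v) + uvv - (f + k) * v))

        n = 64
        u, v = np.ones(n), np.zeros(n)
        u[30:34], v[30:34] = 0.5, 0.25        # a target object seeds local activity
        for _ in range(2000):
            u, v = rd_step(u, v)
        rate = 0.2 + 4.0 * v                  # map activity to coding rate (Mbit/s, assumed)
        print(rate.round(2))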

  14. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication and a network is easily overflown with a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  15. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
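
    A quick way to see why the interleaving stages matter is to count how many symbols of a burst land in any single codeword. The sketch below uses a plain block interleaver (the CD actually uses cross-interleaving, which is more elaborate) and a hypothetical per-codeword correction capability of two symbols.

        def errors_per_codeword(burst_start, burst_len, depth=28):
            """With symbol s assigned to codeword s mod depth, count how many
            symbols of a contiguous burst fall into each codeword."""
            counts = [0] * depth
            for s in range(burst_start, burst_start + burst_len):
                counts[s % depth] += 1
            return counts

        t = 2                                  # correctable symbols per codeword (assumed)
        for burst in (4, 40, 80):
            worst = max(errors_per_codeword(100, burst))
            verdict = "correctable" if worst <= t else "uncorrectable"
            print(f"burst of {burst:3d} symbols -> worst codeword sees {worst} errors: {verdict}")

    Interleaving to depth 28 turns a 40-symbol burst into at most two errors per codeword, within reach of a modest Reed Solomon decoder, while an 80-symbol burst overwhelms it, echoing the abstract's finding that performance depends critically on the length and position of the burst.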

  16. Brain mechanisms of self-control: A neurocognitive investigation of reward-based action control and error awareness

    NARCIS (Netherlands)

    Harsay, H.A.

    2014-01-01

    Motivation and the ability to detect errors are critical for the interaction with our environment. They provide us with the opportunity to engage in purposive, persistent and corrective behavior, and to take the consequences of our actions into account. Diminished motivation and error awareness compromise this ability.

  17. A Case–Control Study Investigating Simulated Driving Errors in Ischemic Stroke and Subarachnoid Hemorrhage

    Directory of Open Access Journals (Sweden)

    Megan A. Hird

    2018-02-01

    Background: Stroke can affect a variety of cognitive, perceptual, and motor abilities that are important for safe driving. Results of studies assessing post-stroke driving ability are quite variable in the areas and degree of driving impairment among patients. This highlights the need to consider clinical characteristics, including stroke subtype, when assessing driving performance. Methods: We compared the simulated driving performance of 30 chronic stroke patients (>3 months), including 15 patients with ischemic stroke (IS) and 15 patients with subarachnoid hemorrhage (SAH), and 20 age-matched controls. A preliminary analysis was performed, subdividing IS patients into right (n = 8) and left (n = 6) hemispheric lesions and SAH patients into middle cerebral artery (MCA, n = 5) and anterior communicating artery (n = 6) territory. A secondary analysis was conducted to investigate the cognitive correlates of driving. Results: Nine patients (30%) exhibited impaired simulated driving performance, including four patients with IS (26.7%) and five patients with SAH (33.3%). Both patients with IS (2.3 vs. 0.3, U = 76, p < 0.05) and SAH (1.5 vs. 0.3, U = 45, p < 0.001) exhibited difficulty with lane maintenance (% distance out of lane) compared to controls. In addition, patients with IS exhibited difficulty with speed maintenance (% distance over speed limit; 8.9 vs. 4.1, U = 81, p < 0.05), whereas SAH patients exhibited difficulty with turning performance (total turning errors; 5.4 vs. 1.6, U = 39.5, p < 0.001). The Trail Making Test (TMT) and Useful Field of View test were significantly associated with lane maintenance among patients with IS (rs > 0.6, p < 0.05). No cognitive tests showed utility among patients with SAH. Conclusion: Both IS and SAH patients exhibited difficulty with lane maintenance. Patients with IS additionally exhibited difficulty with speed maintenance, whereas SAH patients exhibited difficulty with turning performance.

  18. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  19. An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment

    Science.gov (United States)

    Mandal, Swagata; Saini, Jogender; Zabołotny, Wojciech M.; Sau, Suman; Chakrabarti, Amlan; Chattopadhyay, Subhasis

    2017-03-01

    Due to the dramatic increase of data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is very much needed to gather the data generated during different nuclear interactions. As the DAQ works in a harsh radiation environment, there is a fair chance of data corruption due to various energetic particles like alpha particles, beta particles, or neutrons. Hence, a major challenge in the development of DAQ in HEP experiments is to establish an error-resilient communication system between front-end sensors or detectors and back-end data processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA) due to some of its inherent advantages over the application-specific integrated circuit. A novel orthogonal concatenated code and cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-b CRC has been used against errors in the configuration memory of the FPGA. Data from front-end sensors reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to different data packets. We have also proposed a novel memory management algorithm that helps to process the data at the back-end computing nodes, removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ utilizing an optical link with channel coding and efficient memory management modules can be considered the first of its kind. Performance estimation of the implemented DAQ system is done based on resource utilization, bit error rate, efficiency, and robustness to radiation.
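
    A scrubbing loop of this kind recomputes a CRC over configuration frames and rewrites any frame whose checksum fails. The abstract does not name the polynomial, so the sketch below uses the standard reflected CRC-32 (polynomial 0xEDB88320) purely as an illustration of the arithmetic; hardware computes the same function several bits per clock rather than bit-serially.

        import zlib

        def crc32(data: bytes) -> int:
            """Bit-serial reference CRC-32 (reflected polynomial 0xEDB88320)."""
            crc = 0xFFFFFFFF
            for byte in data:
                crc ^= byte
                for _ in range(8):
                    crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1))
            return crc ^ 0xFFFFFFFF

        frame = b"front-end event fragment"
        assert crc32(frame) == zlib.crc32(frame)   # agrees with the zlib implementation
        print(hex(crc32(frame)))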

  20. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
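
    For the special case of independent, equal-variance components, the probability circle discussed in this chapter has a closed form (the Rayleigh distribution); a worked version of the relation is:

        % Probability that a two-dimensional error with independent components
        % (sigma_x = sigma_y = sigma) falls inside a circle of radius R:
        P(R) = 1 - \exp\!\left(-\frac{R^{2}}{2\sigma^{2}}\right),
        \qquad
        R_{50} = \sigma\sqrt{2\ln 2} \approx 1.1774\,\sigma ,

        % where R_50 is the circular error probable (CEP): the radius that
        % captures half of the position fixes.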

  1. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results, and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.
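
    The bargaining step can be illustrated with a toy problem: allocate a bit budget across three CTUs so as to maximize the product of their utilities (the Nash bargaining objective), with a simple power-law R-D model standing in for the paper's learned, mixed R-D model. All constants below are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        c = np.array([4.0, 2.0, 1.0])      # per-CTU R-D complexity (assumed)
        k, r_total, r_min = 1.0, 6.0, 0.6  # D(r) = c * r**-k; rate budget; rate floor
        d_max = c / 0.5 ** k               # utility baseline: distortion at r = 0.5

        def neg_nash_product(r):
            u = d_max - c * r ** (-k)      # utility = distortion reduction vs baseline
            return -np.sum(np.log(np.maximum(u, 1e-12)))

        res = minimize(neg_nash_product, x0=np.full(3, r_total / 3),
                       bounds=[(r_min, None)] * 3,
                       constraints={"type": "eq", "fun": lambda r: r.sum() - r_total})
        print(res.x.round(3))              # NBS rate split; complex CTUs get more bits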

  2. MatLab script to C code converter for embedded processors of FLASH LLRF control system

    Science.gov (United States)

    Bujnowski, K.; Siemionczyk, A.; Pucyk, P.; Szewiński, J.; Pożniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The low level RF control system (LLRF) of an FEL serves to stabilize the electromagnetic (EM) field in the superconducting niobium, resonant, microwave cavities and to control the high power (MW) klystron. The LLRF system of the FLASH accelerator is based on FPGA technology and embedded microprocessors. Basic and auxiliary functions of the system are listed, as well as the algorithms used for superconducting cavity parameter identification. These algorithms were prepared originally in Matlab. The main part of the paper presents the implementation of the cavity parameter identification algorithm in a PowerPC processor embedded in the FPGA circuit VirtexIIPro. The construction of a very compact Matlab script converter to C code, referred to as M2C, is presented. The application is designed specifically for embedded systems with very confined resources. The generated code is optimized for size and should be transferable between different hardware platforms. The converter generates code for Linux and for stand-alone applications. The functional structure of the program and its mode of operation are described. The FLEX and BISON tools were used for the construction of the converter. The paper concludes with an example of the application of M2C to convert a complex identification algorithm for the superconducting cavities of the FLASH laser.

  3. Bayesian networks modeling for thermal error of numerical control machine tools

    Institute of Scientific and Technical Information of China (English)

    Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN

    2008-01-01

    The interaction between the heat source location, its intensity, the thermal expansion coefficient, the machine system configuration, and the running environment creates complex thermal behavior of a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method described causal relationships of factors inducing thermal deformation by graph theory and estimated the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method could adapt to changes in the running state of the machine and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than least squares (LS) analysis in terms of modeling estimation accuracy.

  4. General Unified Integral Controller with Zero Steady-State Error for Single-Phase Grid-Connected Inverters

    DEFF Research Database (Denmark)

    Guo, Xiaoqiang; Guerrero, Josep M.

    2016-01-01

    Current regulation is crucial for operating single-phase grid-connected inverters. The challenge for the current controller is how to track the current quickly and precisely with zero steady-state error. This paper proposes a novel feedback mechanism for the conventional PI controller. It allows...... The analysis done indicates that the widely used PR (P+Resonant) control is just a special case of the proposed control solution. The time-domain simulation in Matlab/Simulink and experimental results from a TMS320F2812 DSP based laboratory prototype are in good agreement, which verifies the effectiveness......
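
    Since the record identifies PR (P+Resonant) control as a special case of the proposed controller, a hedged sketch of a discrete-time PR current regulator may be helpful: the resonant integrator supplies very high gain at the grid frequency, which is what drives the steady-state error of sinusoidal tracking to zero. The gains, sampling period, and forward-Euler discretization below are illustrative assumptions, not the paper's design.

```python
import math

Kp, Kr = 2.0, 500.0          # proportional and resonant gains (assumed)
w0 = 2 * math.pi * 50.0      # grid frequency, rad/s
dt = 1e-4                    # control period, s

y = v = 0.0                  # states of the resonant integrator

def pr_step(error: float) -> float:
    """One update of the PR law  u = Kp*e + Kr * (s / (s^2 + w0^2)) e."""
    global y, v
    v += dt * (error - w0 * w0 * y)   # v approximates the resonant output
    y += dt * v
    return Kp * error + Kr * v
```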

  5. Output regulation control for switched stochastic delay systems with dissipative property under error-dependent switching

    Science.gov (United States)

    Li, L. L.; Jin, C. L.; Ge, X.

    2018-01-01

    In this paper, the output regulation problem with a dissipative property for a class of switched stochastic delay systems is investigated, based on an error-dependent switching law. Under the assumption that no single subsystem is solvable for the problem, a sufficient condition is derived by constructing multiple Lyapunov-Krasovskii functionals with respect to multiple supply rates and designing error feedback regulators. The condition is also established when the dissipative property reduces to passivity. Finally, two numerical examples are given to demonstrate the feasibility and efficiency of the present method.

  6. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
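
    The power computation described above can be sketched numerically: misclassification mixes the genotype distributions of the recorded groups, and asymptotic power follows from the non-centrality parameter of the Pearson chi-square statistic. The genotype frequencies, the simplified one-sided contamination model, and the error rate below are invented for illustration and are much cruder than the paper's prevalence-based derivation.

```python
import numpy as np
from scipy.stats import chi2, ncx2

p_case = np.array([0.25, 0.50, 0.25])   # true genotype freqs in cases
p_ctrl = np.array([0.36, 0.48, 0.16])   # true genotype freqs in controls
n_case = n_ctrl = 500

eps = 0.05                               # fraction of recorded cases that
obs_case = (1 - eps) * p_case + eps * p_ctrl   # are truly unaffected
obs_ctrl = p_ctrl                        # controls assumed clean here

# Non-centrality parameter of the two-sample Pearson chi-square test.
pbar = (n_case * obs_case + n_ctrl * obs_ctrl) / (n_case + n_ctrl)
ncp = np.sum(n_case * (obs_case - pbar) ** 2 / pbar
             + n_ctrl * (obs_ctrl - pbar) ** 2 / pbar)

crit = chi2.ppf(0.95, df=2)              # 2x3 table -> 2 degrees of freedom
power = 1 - ncx2.cdf(crit, df=2, nc=ncp)
print(f"NCP = {ncp:.2f}, asymptotic power = {power:.3f}")
```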

  7. Reducing Check-in Errors at Brigham Young University through Statistical Process Control

    Science.gov (United States)

    Spackman, N. Andrew

    2005-01-01

    The relationship between the library and its patrons is damaged and the library's reputation suffers when returned items are not checked in. An informal survey reveals librarians' concern for this problem and their efforts to combat it, although few libraries collect objective measurements of errors or the effects of improvement efforts. Brigham…
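
    A minimal sketch of the kind of statistical process control such a study might apply, assuming a p-chart on daily check-in error fractions with 3-sigma binomial limits; the counts are fabricated and the article's actual chart type is not specified.

```python
import numpy as np

errors = np.array([4, 2, 6, 3, 9, 2, 5])       # erroneous check-ins per day
checked = np.array([400, 380, 410, 395, 405, 390, 400])

p = errors / checked
p_bar = errors.sum() / checked.sum()            # long-run error fraction
sigma = np.sqrt(p_bar * (1 - p_bar) / checked)  # limits vary with sample size
ucl = p_bar + 3 * sigma

for day, (pi, u) in enumerate(zip(p, ucl), 1):
    flag = "out of control" if pi > u else "in control"
    print(f"day {day}: p = {pi:.4f} ({flag})")
```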

  8. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
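
    A hedged sketch of the flavor of Shannon-Kotel'nikov analog mapping mentioned above, for a 1:2 bandwidth expansion: a scalar source sample is placed on one arm of a double Archimedean spiral, so two channel uses carry one sample, and decoding is a nearest-point search. The spiral pitch, stretch factor, and brute-force grid decoder are illustrative assumptions, not the authors' optimized construction.

```python
import numpy as np

delta = 0.5          # radial spacing between spiral arms (assumed)
alpha = 8.0          # stretch factor from source value to spiral angle

def encode(x: float) -> np.ndarray:
    """Map scalar x to a point on a double Archimedean spiral."""
    theta = alpha * abs(x)
    r = delta * theta / (2 * np.pi)
    sign = 1.0 if x >= 0 else -1.0   # the sign selects the mirrored arm
    return sign * r * np.array([np.cos(theta), np.sin(theta)])

def decode(y: np.ndarray, grid=np.linspace(-3, 3, 20001)) -> float:
    """ML-style decoding: nearest spiral point over a dense source grid."""
    pts = np.stack([encode(x) for x in grid])
    return grid[np.argmin(np.sum((pts - y) ** 2, axis=1))]

x = 0.73
y = encode(x) + 0.05 * np.random.randn(2)   # AWGN channel
print(decode(y))                            # close to 0.73 for weak noise
```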

  9. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  10. A mean field theory of coded CDMA systems

    International Nuclear Information System (INIS)

    Yano, Toru; Tanaka, Toshiyuki; Saad, David

    2008-01-01

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.

  11. A mean field theory of coded CDMA systems

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Toru [Graduate School of Science and Technology, Keio University, Hiyoshi, Kohoku-ku, Yokohama-shi, Kanagawa 223-8522 (Japan); Tanaka, Toshiyuki [Graduate School of Informatics, Kyoto University, Yoshida Hon-machi, Sakyo-ku, Kyoto-shi, Kyoto 606-8501 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)], E-mail: yano@thx.appi.keio.ac.jp

    2008-08-15

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.

  12. Feedback error learning controller for functional electrical stimulation assistance in a hybrid robotic system for reaching rehabilitation

    Directory of Open Access Journals (Sweden)

    Francisco Resquín

    2016-07-01

    Full Text Available Hybrid robotic systems represent a novel research field, where functional electrical stimulation (FES) is combined with a robotic device for rehabilitation of motor impairment. Under this approach, the design of robust FES controllers still remains an open challenge. In this work, we aimed at developing a learning FES controller to assist in the performance of reaching movements in a simple hybrid robotic system setting. We implemented a Feedback Error Learning (FEL) control strategy consisting of a feedback PID controller and a feedforward controller based on a neural network. A passive exoskeleton complemented the FES controller by compensating the effects of gravity. We carried out experiments with healthy subjects to validate the performance of the system. Results show that the FEL control strategy is able to adjust the FES intensity to track the desired trajectory accurately without the need of a previous mathematical model.
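
    A compact sketch of the Feedback Error Learning structure described above: a PID loop stabilizes tracking while a feedforward model adapts online, using the feedback command itself as the teaching signal, so the feedforward gradually takes over as it approximates the inverse dynamics. The toy first-order plant, linear feedforward features, and gains are assumptions standing in for the paper's FES-driven arm and neural network.

```python
import numpy as np

dt, kp, ki, kd = 0.01, 8.0, 2.0, 0.5
w = np.zeros(3)                    # weights of a linear feedforward model
eta = 0.05                         # learning rate
integ, prev_e, state = 0.0, 0.0, 0.0

def plant(u, x):                   # toy first-order plant
    return x + dt * (-2.0 * x + u)

for t in np.arange(0.0, 10.0, dt):
    ref, dref, ddref = np.sin(t), np.cos(t), -np.sin(t)
    e = ref - state
    integ += e * dt
    u_fb = kp * e + ki * integ + kd * (e - prev_e) / dt
    prev_e = e
    feats = np.array([ref, dref, ddref])
    u_ff = w @ feats               # feedforward from the desired trajectory
    w += eta * u_fb * feats * dt   # FEL rule: feedback output is the error
    state = plant(u_fb + u_ff, state)
```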

  13. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  14. A statistical approach to estimating effects of performance shaping factors on human error probabilities of soft controls

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Hyun Seong, Poong

    2015-01-01

    Despite recent efforts toward data collection for supporting human reliability analysis, there remains a lack of empirical basis in determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance the empirical basis regarding the effects of the PSFs, a statistical methodology using a logistic regression and stepwise variable selection was proposed, and the effects of the PSFs on HEPs related to the soft controls were estimated through the methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, the procedure quality, practice level, and the operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in terms of a multiplicative form. The usefulness and limitations of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods will provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using a logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated.
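
    The regression step described above can be sketched as follows, with fabricated data: binary human-error outcomes are regressed on binary PSF surrogates, and each fitted coefficient exponentiates to a multiplicative effect on the error odds, matching the multiplicative form mentioned in the record. The factor names, effect sizes, and the omission of stepwise selection are simplifications.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600                         # roughly the number of error opportunities
X = np.column_stack([
    rng.integers(0, 2, n),      # procedure quality (0 good, 1 poor)
    rng.integers(0, 2, n),      # practice level    (0 trained, 1 novice)
    rng.integers(0, 2, n),      # operation type    (0 switch, 1 conversion)
])
logit = -3.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))    # 1 = human error occurred

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])
print(odds_ratios)   # multiplicative effect of each PSF on the error odds
```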

  15. Controlling the error on target motion through real-time mesh adaptation: Applications to deep brain stimulation.

    Science.gov (United States)

    Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A

    2018-05-01

    An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied for simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus to reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft while, for a given accuracy, saving computational time with respect to a uniformly finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications in increasing the accuracy, and controlling the computational expense of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate, and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
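
    A minimal sketch of the estimation step, assuming scikit-learn's Gaussian process regressor in place of whatever implementation the authors used: past per-round error-rate estimates are smoothed and extrapolated, with the predictive standard deviation quantifying confidence. The kernel choice and the synthetic drifting rate are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0, 10, 50)[:, None]             # time of each QEC round
true_rate = 0.01 + 0.004 * np.sin(0.6 * t.ravel())
observed = true_rate + 0.001 * np.random.randn(t.size)   # noisy estimates

gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(1e-6))
gp.fit(t, observed)

t_future = np.array([[10.5]])                   # predict the next round
mean, std = gp.predict(t_future, return_std=True)
print(f"predicted error rate: {mean[0]:.4f} +/- {std[0]:.4f}")
```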

  17. Controlled quantum secure direct communication using a non-symmetric quantum channel with quantum superdense coding

    International Nuclear Information System (INIS)

    Xia, Yan; Song, He-Shan

    2007-01-01

    We present a controlled quantum secure direct communication protocol that uses a 2-dimensional Greenberger-Horne-Zeilinger (GHZ) entangled state and a 3-dimensional Bell-basis state and employs high-dimensional quantum superdense coding, local collective unitary operations and entanglement swapping. The proposed protocol is secure and of high source capacity. It can effectively protect the communication against a destroying-travel-qubit-type attack. With this protocol, the information transmission is greatly increased. This protocol can also be modified, so that it can be used in a multi-party control system.

  18. Measurement error potential and control when quantifying volatile hydrocarbon concentrations in soils

    International Nuclear Information System (INIS)

    Siegrist, R.L.

    1991-01-01

    Due to their widespread use throughout commerce and industry, volatile hydrocarbons such as toluene, trichloroethene, and 1,1,1-trichloroethane routinely appear as principal pollutants in contaminated soil systems. Quantification of soil system hydrocarbons is necessary to confirm the presence of contamination and its nature and extent; to assess site risks and the need for cleanup; to evaluate remedial technologies; and to verify the performance of a selected alternative. Decisions regarding these issues have far-reaching impacts and, ideally, should be based on accurate measurements of soil hydrocarbon concentrations. Unfortunately, quantification of volatile hydrocarbons in soils is extremely difficult, and there is normally little understanding of the accuracy and precision of these measurements. Rather, the assumption is often implicitly made that the hydrocarbon data are sufficiently accurate for the intended purpose. This paper presents a discussion of measurement error potential when quantifying volatile hydrocarbons in soils, and outlines some methods for understanding and managing these errors.

  19. Distinguishing stimulus and response codes in theta oscillations in prefrontal areas during inhibitory control of automated responses.

    Science.gov (United States)

    Mückschel, Moritz; Dippel, Gabriel; Beste, Christian

    2017-11-01

    Response inhibition mechanisms are mediated via cortical and subcortical networks. At the cortical level, the superior frontal gyrus, including the supplementary motor area (SMA) and inferior frontal areas, is important. There is an ongoing debate about the functional roles of these structures during response inhibition as it is unclear whether these structures process different codes or contents of information during response inhibition. In the current study, we examined this question with a focus on theta frequency oscillations during response inhibition processes. We used a standard Go/Nogo task in a sample of human participants and combined different EEG signal decomposition methods with EEG beamforming approaches. The results suggest that stimulus coding during inhibitory control is attained by oscillations in the upper theta frequency band (∼7 Hz). In contrast, response selection codes during inhibitory control appear to be attained by the lower theta frequency band (∼4 Hz). Importantly, these different codes seem to be processed in distinct functional neuroanatomical structures. Although the SMA may process stimulus codes and response selection codes, the inferior frontal cortex may selectively process response selection codes during inhibitory control. Taken together, the results suggest that different entities within the functional neuroanatomical network associated with response inhibition mechanisms process different kinds of codes during inhibitory control. These codes seem to be reflected by different oscillations within the theta frequency band. Hum Brain Mapp 38:5681-5690, 2017. © 2017 Wiley Periodicals, Inc.

  20. Robust a Posteriori Error Control and Adaptivity for Multiscale, Multinumerics, and Mortar Coupling

    KAUST Repository

    Pencheva, Gergina V.

    2013-01-01

    We consider discretizations of a model elliptic problem by means of different numerical methods applied separately in different subdomains, termed multinumerics, coupled using the mortar technique. The grids need not match along the interfaces. We are also interested in the multiscale setting, where the subdomains are partitioned by a mesh of size h, whereas the interfaces are partitioned by a mesh of much coarser size H, and where lower-order polynomials are used in the subdomains and higher-order polynomials are used on the mortar interface mesh. We derive several fully computable a posteriori error estimates which deliver a guaranteed upper bound on the error measured in the energy norm. Our estimates are also locally efficient and one of them is robust with respect to the ratio H/h under an assumption of sufficient regularity of the weak solution. The present approach allows bounding separately and comparing mutually the subdomain and interface errors. A subdomain/interface adaptive refinement strategy is proposed and numerically tested. © 2013 Society for Industrial and Applied Mathematics.

  1. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  2. Field-programmable beam reconfiguring based on digitally-controlled coding metasurface

    Science.gov (United States)

    Wan, Xiang; Qi, Mei Qing; Chen, Tian Yi; Cui, Tie Jun

    2016-02-01

    Digital phase shifters have been applied in traditional phased array antennas to realize beam steering. However, the phase shifter deals with the phase of the induced current; hence, it has to be in the path of each element of the antenna array, making the phased array antennas very expensive. Metamaterials and/or metasurfaces enable the direct modulation of electromagnetic waves by designing subwavelength structures, which opens a new way to control the beam scanning. Here, we present a direct digital mechanism to control the scattered electromagnetic waves using coding metasurface, in which each unit cell loads a pin diode to produce binary coding states of “1” and “0”. Through data lines, the instant communications are established between the coding metasurface and the internal memory of field-programmable gate arrays (FPGA). Thus, we realize the digital modulation of electromagnetic waves, from which we present the field-programmable reflective antenna with good measurement performance. The proposed mechanism and functional device have great application potential in new-concept radar and communication systems.
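
    The 1-bit coding idea above lends itself to a toy numerical illustration: treat each metasurface column as a reflecting element whose phase is 0 or π according to its code bit, and sum the far-field contributions; rewriting the code array re-steers the scattered beam with no phase shifters. The element count, half-wavelength spacing, and idealized isotropic elements are assumptions.

```python
import numpy as np

n_cols, spacing = 16, 0.5            # element columns, spacing in wavelengths
code = np.array([0, 0, 1, 1] * 4)    # programmable "0"/"1" state per column

angles = np.radians(np.linspace(-90, 90, 721))
k = 2 * np.pi                        # wavenumber in units of wavelengths
phase = np.outer(np.sin(angles), k * spacing * np.arange(n_cols))
element = np.exp(1j * (phase + np.pi * code))   # a "1" adds a pi phase
pattern = np.abs(element.sum(axis=1)) ** 2      # scattered power pattern

print(np.degrees(angles[np.argmax(pattern)]))   # main-lobe direction, deg
```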

  3. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    Science.gov (United States)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.

  4. Improving Code Quality of the Compact Muon Solenoid Electromagnetic Calorimeter Control Software to Increase System Maintainability

    CERN Multimedia

    Holme, Oliver; Dissertori, Günther; Djambazov, Lubomir; Lustermann, Werner; Zelepoukine, Serguei

    2013-01-01

    The Detector Control System (DCS) software of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at CERN is designed primarily to enable safe and efficient operation of the detector during Large Hadron Collider (LHC) data-taking periods. Through a manual analysis of the code and the adoption of ConQAT [1], a software quality assessment toolkit, the CMS ECAL DCS team has made significant progress in reducing complexity and improving code quality, with observable results in terms of a reduction in the effort dedicated to software maintenance. This paper explains the methodology followed, including the motivation to adopt ConQAT, the specific details of how this toolkit was used and the outcomes that have been achieved. [1] ConQAT, Continuous Quality Assessment Toolkit; https://www.conqat.org/

  5. Resistive wall mode feedback control in EXTRAP T2R with improved steady-state error and transient response

    Science.gov (United States)

    Brunsell, P. R.; Olofsson, K. E. J.; Frassinetti, L.; Drake, J. R.

    2007-10-01

    Experiments in the EXTRAP T2R reversed field pinch [P. R. Brunsell, H. Bergsåker, M. Cecconello et al., Plasma Phys. Control. Fusion 43, 1457 (2001)] on feedback control of m = 1 resistive wall modes (RWMs) are compared with simulations using the cylindrical linear magnetohydrodynamic model, including the dynamics of the active coils and power amplifiers. Stabilization of the main RWMs (n=-11,-10,-9,-8,+5,+6) is shown using modest loop gains of the order G ~ 1. However, other marginally unstable RWMs (n=-2,-1,+1,+2) driven by external field errors are only partially canceled at these gains. The experimental system stability limit is confirmed by simulations showing that the latency of the digital controller (~50 μs) is degrading the system gain margin. The transient response is improved with a proportional-plus-derivative controller, and the steady-state error is improved with a proportional-plus-integral controller. Suppression of all modes is obtained at high gain G ~ 10 using a proportional-plus-integral-plus-derivative controller.

  6. Process-engineering control valves under the EC codes; Steuerventile fuer die Prozesstechnik im Geltungsbereich der EG-Richtlinien

    Energy Technology Data Exchange (ETDEWEB)

    Gohlke, B. [IMI Norgren Herion Fluidtronic GmbH und Co. KG, Fellbach (Germany)

    2003-09-01

    The European Parliament and European Council have enacted special codes in order to implement uniform conditions in all countries of the European Community. The manufacturers of technical and commercial products are obliged to adhere to these codes. Harmonized standards, which are to be used as a tool for the implementation of the codes, are embedded at another level of the overall 'European reference literature'. Two EC codes, in particular, are definitive for fluids engineering: on the one hand, the EC Machinery Code 98/37/EC and, on the other hand, the EC Pressurized Equipment Code 97/23/EC. These EC codes cover, inter alia, machinery, chemical process-engineering plants, and conventional power generating plants. Norgren-Herion, a manufacturer of fluid engineering components, perceived a necessity to position its control valves within the scope of applicability of the EC codes. This article describes experience with the EC codes from the control valve manufacturer's point of view and examines the various qualification procedures for control valves. (orig.)

  7. Projective Synchronization of N-Dimensional Chaotic Fractional-Order Systems via Linear State Error Feedback Control

    Directory of Open Access Journals (Sweden)

    Baogui Xin

    2012-01-01

    Full Text Available Based on the linear feedback control technique, a projective synchronization scheme for N-dimensional chaotic fractional-order systems is proposed, which consists of master and slave fractional-order financial systems coupled by linear state error variables. It is shown that the slave system can be projectively synchronized with the master system constructed by state transformation. Based on the stability theory of linear fractional-order systems, a suitable controller for achieving synchronization is designed. The given scheme is applied to achieve projective synchronization of chaotic fractional-order financial systems. Numerical simulations are given to verify the effectiveness of the proposed projective synchronization scheme.

  8. A Brain Computer Interface for Robust Wheelchair Control Application Based on Pseudorandom Code Modulated Visual Evoked Potential

    DEFF Research Database (Denmark)

    Mohebbi, Ali; Engelsholm, Signe K.D.; Puthusserypady, Sadasivan

    2015-01-01

    In this pilot study, a novel and minimalistic Brain Computer Interface (BCI) based wheelchair control application was developed. The system was based on pseudorandom code modulated Visual Evoked Potentials (c-VEPs). The visual stimuli in the scheme were generated based on the Gold code...
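
    A sketch of Gold-code generation of the kind used for c-VEP stimulation: two length-31 maximal-length sequences from 5-bit LFSRs are XORed at a relative shift to produce one member of the Gold family. The tap sets below are a commonly cited preferred pair (up to tap-numbering convention), but the record does not give the BCI's actual code length or registers, so treat them as assumptions.

```python
def lfsr(taps, n_bits=5, length=31):
    """Fibonacci LFSR: output the last bit, feed back the XOR of the taps."""
    state = [1] * n_bits
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

m1 = lfsr([5, 3])        # m-sequence from one primitive polynomial
m2 = lfsr([5, 4, 3, 2])  # m-sequence from its assumed preferred partner

# XOR at one relative shift; other shifts give the rest of the family.
gold = [a ^ b for a, b in zip(m1, m2[3:] + m2[:3])]
print("".join(map(str, gold)))
```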

  9. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Science.gov (United States)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
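
    The hierarchical (block) time-step idea that GOTHIC accelerates can be illustrated with a short sketch: each particle is assigned the largest power-of-two subdivision of the base step compatible with its own accuracy criterion, so only a small fraction of particles is integrated at the finest levels. The acceleration-based criterion and the level cap below are placeholders, not GOTHIC's actual scheme.

```python
import numpy as np

base_dt = 1.0 / 64.0
accel = np.random.lognormal(0.0, 2.0, size=1_000_000)   # mock accelerations
dt_wanted = 0.1 / np.sqrt(accel)                        # placeholder criterion

# Snap each particle to the nearest power-of-two level below its wanted step.
levels = np.clip(np.ceil(np.log2(base_dt / dt_wanted)), 0, 10).astype(int)

for lvl in range(levels.max() + 1):
    active = np.count_nonzero(levels == lvl)
    print(f"level {lvl}: {active} particles, dt = {base_dt / 2**lvl:.2e}")
```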

  10. Human error and the associated recovery probabilities for soft control being used in the advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operating environment of MCRs in NPPs has changed by adopting digital HSIs. • Most current HRA databases are not explicitly designed to deal with digital HSI. • Empirical analyses for a new HRA DB under an advanced MCR mockup are carried out. • It is expected that the results can be used for advanced MCR HRA. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these studies were focused on the conventional Main Control Room (MCR) environment. However, the operating environment of MCRs in NPPs has changed with the adoption of new human-system interfaces (HSI) largely based on up-to-date digital technologies. The MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in advanced MCRs are performed by soft control. Due to the difference in interfaces between soft controls and hardwired conventional controls, different HEPs should be used in the HRA for advanced MCRs. Unfortunately, most current HRA databases deal with operations in conventional MCRs and are not explicitly designed to deal with digital Human System Interfaces (HSI). For this reason, empirical human error probabilities and the associated error recovery probabilities were collected from the mockup of an advanced MCR equipped with soft controls. To this end, small-scale experiments were conducted in which 48 graduate students from the department of nuclear engineering at the Korea Advanced Institute of Science and Technology (KAIST) participated, and accident scenarios were designed with respect to the typical Design Basis Accidents (DBAs) in NPPs, such as Steam Generator Tube Rupture

  11. Transient calculation performance of the MASTER code for control rod ejection problem

    International Nuclear Information System (INIS)

    Cho, B. O.; Joo, H. G.; Yoo, Y. J.; Park, S. Y.; Zee, S. Q.

    1999-01-01

    The accuracy and the effectiveness of the solution methods of the MASTER code for reactor transient problems were analyzed with a set of NEACRP PWR control rod ejection benchmark problems. A series of sensitivity studies of the effects of the neutronic solution methods and of the neutronic and thermal-hydraulic model parameters on the solution was thus performed. The MASTER results were then compared with the reference PANTHER results. The comparison indicates that the MASTER solution is sufficiently accurate and that the computing time is short enough for nuclear design applications

  12. Transient calculation performance of the MASTER code for control rod ejection problem

    Energy Technology Data Exchange (ETDEWEB)

    Cho, B. O.; Joo, H. G.; Yoo, Y. J.; Park, S. Y.; Zee, S. Q. [KAERI, Taejon (Korea, Republic of)

    1999-10-01

    The accuracy and the effectiveness of the solution methods of the MASTER code for reactor transient problems were analyzed with a set of NEACRP PWR control rod ejection benchmark problems. A series of sensitivity studies of the effects of the neutronic solution methods and of the neutronic and thermal-hydraulic model parameters on the solution was thus performed. The MASTER results were then compared with the reference PANTHER results. The comparison indicates that the MASTER solution is sufficiently accurate and that the computing time is short enough for nuclear design applications.

  13. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  14. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    Science.gov (United States)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  15. An Enhanced Intelligent Handheld Instrument with Visual Servo Control for 2-DOF Hand Motion Error Compensation

    Directory of Open Access Journals (Sweden)

    Yan Naing Aye

    2013-10-01

    Full Text Available The intelligent handheld instrument, ITrem2, enhances manual positioning accuracy by cancelling erroneous hand movements and, at the same time, provides automatic micromanipulation functions. Visual data are acquired from a high speed monovision camera attached to the optical surgical microscope, and acceleration measurements are acquired from the inertial measurement unit (IMU) on board ITrem2. Tremor estimation and cancellation is implemented via a Band-limited Multiple Fourier Linear Combiner (BMFLC) filter. The piezoelectric actuated micromanipulator in ITrem2 generates the 3D motion to compensate erroneous hand motion. Preliminary bench-top 2-DOF experiments have been conducted. The error motion simulated by a motion stage is reduced by 67% for multiple frequency oscillatory motions and 56.16% for pre-conditioned recorded physiological tremor.
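
    The BMFLC filter named above is well defined enough to sketch: the tremor is modeled as a truncated Fourier series on a fixed frequency band and the weights are adapted sample by sample with an LMS rule. The band limits, adaptation gain, and synthetic two-tone tremor below are assumptions for illustration.

```python
import numpy as np

fs = 250.0                                  # sample rate, Hz
freqs = np.arange(6.0, 14.0, 0.5)           # assumed tremor band, Hz
mu = 0.01                                   # LMS adaptation gain
w = np.zeros(2 * freqs.size)                # sine and cosine weights

def bmflc_step(t, measured):
    """Return the tremor estimate at time t and adapt the weights."""
    global w
    x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                        np.cos(2 * np.pi * freqs * t)])
    est = w @ x
    w += 2 * mu * (measured - est) * x      # LMS weight update
    return est

t = np.arange(0, 2, 1 / fs)
tremor = 0.3 * np.sin(2 * np.pi * 9.0 * t) + 0.1 * np.sin(2 * np.pi * 11.0 * t)
est = np.array([bmflc_step(ti, yi) for ti, yi in zip(t, tremor)])
print(f"residual RMS after adaptation: {np.std(tremor[-100:] - est[-100:]):.4f}")
```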

  16. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    Science.gov (United States)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1996-01-01

    An active noise control system using a compact sound source is effective in reducing aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single and multi channel control systems are used to control the fan blade passage frequency (BPF) tone, and the BPF tone together with its first harmonic, for a plane wave excitation. A multi channel control system is used to control any spinning mode, and to control both fan tones and a high pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.
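
    The adaptive filtered-x LMS algorithm named above can be sketched in a single channel: the reference tone is filtered through a model of the secondary (actuator-to-error-sensor) path before the weight update, which is what distinguishes FxLMS from plain LMS. The toy FIR paths, tap counts, and step size are assumptions, not the engine-duct acoustics of the patent.

```python
import numpy as np

fs, f_bpf = 8000, 2000                 # sample rate and fan BPF tone, Hz
n = np.arange(4000)
x = np.sin(2 * np.pi * f_bpf * n / fs)                  # reference tone
primary = np.convolve(x, [0.0, 0.9, -0.3])[: x.size]    # noise at the sensor
s_hat = np.array([0.0, 0.6, 0.2])      # model of the secondary path

w = np.zeros(8)                        # adaptive control filter
xbuf, fxbuf, ybuf = np.zeros(8), np.zeros(8), np.zeros(3)
mu = 0.002
for i in range(x.size):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                       # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = primary[i] + s_hat @ ybuf      # residual at the error microphone
    fx = s_hat @ xbuf[:3]              # reference filtered through s_hat
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w -= mu * e * fxbuf                # LMS update on the filtered reference
    if i % 1000 == 999:
        print(f"step {i + 1}: |e| = {abs(e):.4f}")
```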

  17. A Real-Time Accurate Model and Its Predictive Fuzzy PID Controller for Pumped Storage Unit via Error Compensation

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2017-12-01

    Full Text Available Model simulation and control of a pumped storage unit (PSU) are essential to improve the dynamic quality of a power station. Only under the premise that the PSU models reflect the actual transient process can novel control methods be properly applied in engineering. The contributions of this paper are that (1) a real-time accurate equivalent circuit model (RAECM) of a PSU via error compensation is proposed to reconcile the conflict between real-time online simulation and accuracy under various operating conditions, and (2) an adaptive predictive fuzzy PID controller (APFPID) based on RAECM is put forward to overcome the instability of conventional control under no-load conditions with low water head. All hydraulic factors in the pipeline system are fully considered based on the equivalent lumped-circuit theorem. The pretreatment, which consists of an improved Suter transformation and a BP neural network, and an online simulation method featuring two iterative loops are proposed to improve the solving accuracy for the pump-turbine. Moreover, modified formulas for compensating error are derived with variable-spatial discretization to further improve the accuracy of the real-time simulation. The implicit RadauIIA method is verified to be more suitable for PSUGS owing to its wider stability domain. Then, the APFPID controller is constructed based on the integration of fuzzy PID and model predictive control. Rolling prediction by RAECM is proposed to replace rolling optimization, with its computational speed guaranteed. Finally, simulation and on-site measurements are compared to prove the trustworthiness of RAECM under various running conditions. Comparative experiments also indicate that the APFPID controller outperforms other controllers in most cases, especially under low water head conditions. Satisfactory results of RAECM have been achieved in engineering, and it provides a novel model reference for PSUGS.

  18. Force to Rebalance Control of HRG and Suppression of Its Errors on the Basis of FPGA

    Directory of Open Access Journals (Sweden)

    Qingan Jiang

    2011-12-01

    Full Text Available A novel design of force to rebalance control for a hemispherical resonator gyro (HRG) based on FPGA is demonstrated in this paper. The proposed design takes advantage of the automatic gain control loop and phase lock loop configuration in the drive mode while making full use of the quadrature control loop and rebalance control loop in controlling the oscillating dynamics in the sense mode. First, the math model of HRG with inhomogeneous damping and frequency split is theoretically analyzed. In addition, the major drift mechanisms in the HRG are described and the methods that can suppress the gyro drift are mentioned. Based on the math model and drift mechanisms suppression method, four control loops are employed to realize the manipulation of the HRG by using an FPGA circuit. The reference-phase loop and amplitude control loop are used to maintain the vibration of primary mode at its natural frequency with constant amplitude. The frequency split is readily eliminated by the quadrature loop with a DC voltage feedback from the quadrature component of the node. The secondary mode response to the angle rate input is nullified by the rebalance control loop. In order to validate the effect of the digital control of HRG, experiments are carried out with a turntable. The experimental results show that the design is suitable for the control of HRG which has good linearity scale factor and bias stability.

  19. Force to rebalance control of HRG and suppression of its errors on the basis of FPGA.

    Science.gov (United States)

    Wang, Xu; Wu, Wenqi; Luo, Bing; Fang, Zhen; Li, Yun; Jiang, Qingan

    2011-01-01

    A novel design of force to rebalance control for a hemispherical resonator gyro (HRG) based on FPGA is demonstrated in this paper. The proposed design takes advantage of the automatic gain control loop and phase lock loop configuration in the drive mode while making full use of the quadrature control loop and rebalance control loop in controlling the oscillating dynamics in the sense mode. First, the math model of HRG with inhomogeneous damping and frequency split is theoretically analyzed. In addition, the major drift mechanisms in the HRG are described and the methods that can suppress the gyro drift are mentioned. Based on the math model and drift mechanisms suppression method, four control loops are employed to realize the manipulation of the HRG by using a FPGA circuit. The reference-phase loop and amplitude control loop are used to maintain the vibration of primary mode at its natural frequency with constant amplitude. The frequency split is readily eliminated by the quadrature loop with a DC voltage feedback from the quadrature component of the node. The secondary mode response to the angle rate input is nullified by the rebalance control loop. In order to validate the effect of the digital control of HRG, experiments are carried out with a turntable. The experimental results show that the design is suitable for the control of HRG which has good linearity scale factor and bias stability.

  20. Analysis of the burnup of the control rods with the COREMASTER-Presto code

    International Nuclear Information System (INIS)

    Hernandez, J.L.; Alonso, G.; Perusquia, R.; Montes, J.L.; Hernandez, H.

    2003-01-01

    An evaluation of the capability of the COREMASTER-Presto code to generically evaluate the burnup of the control rods in the Laguna Verde nuclear power plant (CLV) is made. It was found that the code only reports control rod burnup values in MWd/TM, in spite of having a second-order polynomial model for the conversion to remaining Boron-10 (B-10). It was observed that this model is adequate only for burnups smaller than 45,000 MWd/TM. To evaluate the burnup of the control rods, the 18-month equilibrium cycle of the CLV was reproduced by executing Cm-Presto for 13 consecutive cycles: first without rod burnup, taken as the base case, and later with 1, 2, and up to 13 cycles of rod burnup. Comparing the results shows that the control rod pattern loses reactivity linearly with burnup. For each 10 GWd/T of core burnup, the reactivity of the pattern rods decreases by ~1 pcm in the hot condition and by ~20 pcm in the cold condition. After three cycles of burnup, the most burnt rods reached 13,900 MWd/TM, equivalent to a 36% reduction of B-10, close to the 34% proposed by the B-10 depletion aging study. It was observed that Cm-Presto does not burn the top node of the control rods when they are completely withdrawn. A large loss of B-10, of the order of 50%, represents only an 11% decrease in the reactivity worth of the rod. One can affirm that even when the B-10 content is strongly depleted, the rod can still be considered a black absorber, that is to say, a thermal neutron that enters the rod is absorbed. (Author)

  1. A Benchmark Study on Error Assessment and Quality Control of CCS Reads Derived from the PacBio RS.

    Science.gov (United States)

    Jiao, Xiaoli; Zheng, Xin; Ma, Liang; Kutty, Geetha; Gogineni, Emile; Sun, Qiang; Sherman, Brad T; Hu, Xiaojun; Jones, Kristine; Raley, Castle; Tran, Bao; Munroe, David J; Stephens, Robert; Liang, Dun; Imamichi, Tomozumi; Kovacs, Joseph A; Lempicki, Richard A; Huang, Da Wei

    2013-07-31

    PacBio RS, a newly emerging third-generation DNA sequencing platform, is based on a real-time, single-molecule, nano-nitch sequencing technology that can generate very long reads (up to 20-kb) in contrast to the shorter reads produced by the first- and second-generation sequencing technologies. As a new platform, it is important to assess the sequencing error rate, as well as the quality control (QC) parameters associated with the PacBio sequence data. In this study, a mixture of 10 previously known, closely related DNA amplicons was sequenced using the PacBio RS sequencing platform. After aligning Circular Consensus Sequence (CCS) reads derived from the above sequencing experiment to the known reference sequences, we found that the median error rate was 2.5% without read QC, and improved to 1.3% with an SVM based multi-parameter QC method. In addition, a de novo assembly was used as a downstream application to evaluate the effects of different QC approaches. This benchmark study indicates that even though CCS reads are post error-corrected it is still necessary to perform appropriate QC on CCS reads in order to produce successful downstream bioinformatics analytical results.

  2. Evaluation of Tehran research reactor (TRR) control rod worth using MCNP4C computer code

    International Nuclear Information System (INIS)

    Hosseini, Mohammad; Vosoughi, Naser; Hosseini, Seyed Abolfazl

    2010-01-01

    The main objective of a reactor control system is to provide safe reactor start-up, operation, and shutdown. Calculation or measurement of precise values of control rod worth is of great importance in the Tehran Research Reactor (TRR), considering the fact that the rods are the only controlling tools in the reactor. In the present paper, the simulation of TRR in its First Operation Cycle (FOC), with a cold and clean core, for the calculation of the total and integral worth of the control rods is reported. The MCNP4C computer code has been used for the whole simulation process. Two methods have been used for control rod worth calculation in this paper, namely the direct approach and the perturbation method. It is shown that while the direct approach is appropriate for worth calculation of both the shim and the regulating control rods, the perturbation method is only suitable for tiny reactivity changes, i.e., for the small initial part of the regulating rods. Results of the simulation are compared with the data reported in the Safety Analysis Report (SAR) of the Tehran research reactor and show satisfactory agreement. (author)
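
    The direct approach mentioned above reduces, in its simplest static form, to differencing the reactivities obtained from two criticality calculations, one with the rod withdrawn and one with it inserted. A small sketch, with placeholder k-eff values rather than TRR results:

```python
def rod_worth_pcm(k_out: float, k_in: float) -> float:
    """Static rod worth from two criticality calculations, in pcm."""
    rho_out = (k_out - 1.0) / k_out    # reactivity with the rod withdrawn
    rho_in = (k_in - 1.0) / k_in       # reactivity with the rod inserted
    return (rho_out - rho_in) * 1e5

print(rod_worth_pcm(k_out=1.01234, k_in=0.98765))   # ~2470 pcm
```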

  3. Repetition code of 15 qubits

    Science.gov (United States)

    Wootton, James R.; Loss, Daniel

    2018-05-01

    The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16 qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
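
    Lookup-table decoding of a repetition code, as used above, is compact enough to sketch. Here the table is filled from a simple independent-error model rather than from experimentally obtained calibration data as in the paper; with such a symmetric model it reduces to majority voting.

```python
from itertools import product

d = 5                                   # code distance (number of qubits)
p = 0.05                                # assumed physical bit-flip rate

# For every possible error pattern, record the most probable one per syndrome.
table = {}
for flips in product([0, 1], repeat=d):
    syndrome = tuple(flips[i] ^ flips[i + 1] for i in range(d - 1))
    prob = p ** sum(flips) * (1 - p) ** (d - sum(flips))
    if syndrome not in table or prob > table[syndrome][1]:
        table[syndrome] = (flips[0], prob)   # most likely flip on qubit 0

def decode(measured_bits):
    """Correct the first qubit's readout using the syndrome lookup table."""
    s = tuple(measured_bits[i] ^ measured_bits[i + 1] for i in range(d - 1))
    q0_flip, _ = table[s]
    return measured_bits[0] ^ q0_flip

print(decode([0, 1, 0, 0, 0]))          # single flip corrected -> 0
```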

  4. Edge placement error control and Mask3D effects in High-NA anamorphic EUV lithography

    Science.gov (United States)

    van Setten, Eelco; Bottiglieri, Gerardo; de Winter, Laurens; McNamara, John; Rusu, Paul; Lubkoll, Jan; Rispens, Gijsbert; van Schoot, Jan; Neumann, Jens Timo; Roesch, Matthias; Kneer, Bernhard

    2017-10-01

    To enable cost-effective shrink at the 3nm node and beyond, and to extend Moore's law into the next decade, ASML is developing a new high-NA EUV platform. The high-NA system is targeted to feature a numerical aperture (NA) of 0.55 to extend the single exposure resolution limit to 8nm half pitch. The system is being designed to achieve an on-product-overlay (OPO) performance well below 2nm, a high image contrast to drive down local CD errors and to obtain global CDU at sub-1nm level to be able to meet customer edge placement error (EPE) requirements for the devices of the future. EUV scanners employ reflective Bragg multi-layer mirrors in the mask and in the Projection Optics Box (POB) that is used to project the mask pattern into the photoresist on the silicon wafer. These MoSi multi-layer mirrors are tuned for maximum reflectivity, and thus productivity, at 13.5nm wavelength. The angular range of incident light for which a high reflectivity at the reticle can be obtained is limited to ±11°, exceeding the maximum angle occurring in current 0.33NA scanners at 4x demagnification. At 0.55NA the maximum angle at reticle level would extend up to 17° in the critical (scanning) direction and compromise the imaging performance of horizontal features severely. To circumvent this issue a novel anamorphic optics design has been introduced, which has a 4x demagnification in the X- (slit) direction and 8x demagnification in the Y- (scanning) direction as well as a central obscuration in the exit pupil. In this work we will show that the EUV high-NA anamorphic concept can successfully solve the angular reflectivity issues and provide good imaging performance in both directions. Several unique imaging challenges in comparison to the 0.33NA isomorphic baseline are being studied, such as the impact of the central obscuration in the POB and Mask-3D effects at increased NA that seem most pronounced for vertical features. These include M3D induced contrast loss and non

  5. DISTURBANCE ERROR INVARIANCE IN AUTOMATIC CONTROL SYSTEMS FOR TECHNOLOGICAL OBJECT TRAJECTORY MOVEMENT

    Directory of Open Access Journals (Sweden)

    A. V. Lekareva

    2016-09-01

    We consider combined control in automatic control systems for the trajectory movements of technological objects. We present results on ensuring disturbance invariance of such a system, using as an example the technological manipulator that implements hydrocutting of oil pipelines. Control is based on the propositions of the fourth modified invariance form with the use of bootstrapping methods. The paper presents an analysis of results obtained by two different correction methods. The essence of the first method lies in injecting an additional component into the already established control signal and forming a channel for that component. The second method is based on correcting the control signal during its synthesis in the control device. Research results have shown high efficiency of both correction methods, which have roughly the same precision. We show that correction in the control device is preferable because it has no influence on the inner contour of the system. We also show the necessity of using a block with a variable transmission coefficient, whose value is determined by the parameters of the technological trajectory. The results can be applied in practice to improve the precision specifications of automatic control systems for trajectory manipulators.

  6. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    Science.gov (United States)

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and an HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  7. Fiducial registration error as a statistical process control metric in image-guided radiotherapy with prostatic markers

    International Nuclear Information System (INIS)

    Ung, M.N.; Wee, Leonard

    2010-01-01

    Portal imaging of implanted fiducial markers has been in use for image-guided radiotherapy (IGRT) of prostate cancer, with ample attention to localization accuracy and organ motion. The geometric uncertainties in point-based rigid-body (PBRB) image registration during localization of prostate fiducial markers can be quantified in terms of a fiducial registration error (FRE). Statistical process control charts for individual patients can be designed to identify potentially significant deviation of FRE from expected behaviour. In this study, the aim was to retrospectively apply statistical process control methods to FREs in 34 individuals to identify parameters that may impact on process stability in image-based localization. A robust procedure for estimating control parameters, control limits and fixed tolerance levels from a small number of initial observations is proposed and discussed. Four distinct types of qualitative control chart behaviour have been observed. Probable clinical factors leading to IGRT process instability are discussed in light of the control chart behaviour. Control charts have been shown to be a useful decision-making tool for detecting potentially out-of-control processes on an individual basis. They can sensitively identify potential problems that warrant more detailed investigation in the IGRT of prostate cancer.
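
    The abstract does not give the charting constants; a common choice for per-patient monitoring is a Shewhart individuals chart with moving-range-based 3-sigma limits, sketched below with illustrative FRE values (the constant 1.128 is the standard d2 factor for moving ranges of size 2).

    ```python
    import numpy as np

    def individuals_chart(fre, n_baseline=8):
        """Shewhart individuals chart: centre line and 3-sigma limits from an
        initial baseline; later points outside the limits are flagged."""
        baseline = np.asarray(fre[:n_baseline], dtype=float)
        centre = baseline.mean()
        # sigma from the mean moving range; 1.128 is the d2 constant for n=2
        sigma = np.abs(np.diff(baseline)).mean() / 1.128
        lcl, ucl = centre - 3 * sigma, centre + 3 * sigma
        flags = [(i, x) for i, x in enumerate(fre) if not lcl <= x <= ucl]
        return centre, (lcl, ucl), flags

    fre_mm = [0.8, 0.9, 0.7, 1.0, 0.8, 0.9, 0.8, 0.7, 0.9, 1.9, 0.8]  # illustrative
    print(individuals_chart(fre_mm))  # the 1.9 mm fraction is flagged
    ```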

  8. Design of robust adaptive controller and feedback error learning for rehabilitation in Parkinson's disease: a simulation study.

    Science.gov (United States)

    Rouhollahi, Korosh; Emadi Andani, Mehran; Karbassi, Seyed Mahdi; Izadi, Iman

    2017-02-01

    Deep brain stimulation (DBS) is an efficient therapy for controlling movement disorders such as Parkinsonian tremor. The prevalent approach is stimulation of one area of the basal ganglia (BG) by DBS with no feedback. The advantage of using feedback is a reduction of the additional stimulatory signal delivered to the brain, which reduces the side effects caused by excessive stimulation intensity; in fact, the stimulation intensity of the controllers decreases in proportion to the reduction of hand tremor. The objective of this study is to design a new controller structure to decrease three indicators: (i) the hand tremor; (ii) the level of delivered stimulation in the disease condition; and (iii) the ratio of the level of delivered stimulation in the health condition to that in the disease condition. For this purpose, the authors offer a new closed-loop control structure to stimulate two areas of the BG simultaneously. One area (STN: subthalamic nucleus) is stimulated by an adaptive controller with feedback error learning. The other area (GPi: globus pallidus internal) is stimulated by a partial state feedback (PSF) controller. Considering the three indicators, the results show that stimulating two areas simultaneously leads to better performance compared with stimulating one area only. It is shown that both the PSF and adaptive controllers are robust to system parameter uncertainties. In addition, a method is proposed to update the parameters of the BG model in real time, so that the parameters of the controllers can be updated based on the new parameters of the BG model.

  9. Multi-Stage Optimization-Based Automatic Voltage Control Systems Considering Wind Power Forecasting Errors

    DEFF Research Database (Denmark)

    Qin, Nan; Bak, Claus Leth; Abildgaard, Hans

    2017-01-01

    This paper proposes an automatic voltage control (AVC) system for power systems with limited continuous voltage control capability. The objective is to minimize the operational cost over a period, which consists of the power loss in the grid, the shunt switching cost, the transformer tap change ... electricity control center, where study cases based on the western Danish power system demonstrate the superiority of the proposed AVC system in terms of cost minimization. Monte Carlo simulations are carried out to verify the robustness improvements of the proposed method....

  10. Analysis of Steady-State Error in Torque Current Component Control of PMSM Drive

    Directory of Open Access Journals (Sweden)

    BRANDSTETTER, P.

    2017-05-01

    The paper presents dynamic properties of a vector-controlled permanent magnet synchronous motor (PMSM) drive supplied by a voltage source inverter. It deals with the control loop for the torque-producing stator current. The fundamental mathematical description of the vector control structure of the PMSM drive is given with respect to the current control in the d-axis and q-axis of the rotating rotor coordinate system. Derivations of the steady-state deviation for schemes with and without decoupling circuits are presented for the q-axis. The properties of both schemes are verified in MATLAB-Simulink, considering a lower and a higher value of inertia, and by experimental measurements in our laboratory. The simulation and experimental results are presented and discussed at the end of the paper.
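
    For context, the cross-coupling compensated by the decoupling circuits appears in the standard dq-frame stator voltage equations of a PMSM (textbook form, not reproduced from the paper; R_s is the stator resistance, L_d and L_q the axis inductances, ω_e the electrical speed and ψ_f the magnet flux linkage):

    ```latex
    \begin{aligned}
    v_d &= R_s i_d + L_d \frac{\mathrm{d} i_d}{\mathrm{d} t} - \omega_e L_q i_q,\\
    v_q &= R_s i_q + L_q \frac{\mathrm{d} i_q}{\mathrm{d} t} + \omega_e \left( L_d i_d + \psi_f \right).
    \end{aligned}
    ```

    The ω_e-dependent terms couple the two current loops, which is what motivates the decoupling circuits whose steady-state behaviour the paper compares.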

  11. Synchronization Control for a Class of Discrete-Time Dynamical Networks With Packet Dropouts: A Coding-Decoding-Based Approach.

    Science.gov (United States)

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2017-09-06

    The synchronization control problem is investigated for a class of discrete-time dynamical networks with packet dropouts via a coding-decoding-based approach. The data is transmitted through digital communication channels and only the sequence of finite coded signals is sent to the controller. A series of mutually independent Bernoulli distributed random variables is utilized to model the packet dropout phenomenon occurring in the transmissions of coded signals. The purpose of the addressed synchronization control problem is to design a suitable coding-decoding procedure for each node, based on which an efficient decoder-based control protocol is developed to guarantee that the closed-loop network achieves the desired synchronization performance. By applying a modified uniform quantization approach and the Kronecker product technique, criteria for ensuring the detectability of the dynamical network are established by means of the size of the coding alphabet, the coding period and the probability information of packet dropouts. Subsequently, by resorting to the input-to-state stability theory, the desired controller parameter is obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages. Finally, two simulation examples are provided to demonstrate the effectiveness of the obtained results.
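
    A toy sketch of the transmission model described above: a finite-alphabet uniform quantizer acts as the coder, packets are lost according to i.i.d. Bernoulli dropouts, and the decoder holds its last reconstruction when a packet is lost. Parameters and the signal are illustrative, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def uniform_quantize(x, levels, lo=-1.0, hi=1.0):
        """Finite-alphabet coder: map x to one of `levels` symbols and return
        the symbol index together with its reconstruction value."""
        step = (hi - lo) / levels
        idx = int(np.clip((x - lo) // step, 0, levels - 1))
        return idx, lo + (idx + 0.5) * step

    p_drop = 0.3   # Bernoulli packet dropout probability
    x_hat = 0.0    # decoder state: last successfully decoded value
    for k in range(10):
        x = np.sin(0.5 * k)                    # signal of one network node
        sym, value = uniform_quantize(x, levels=16)
        if rng.random() > p_drop:              # packet delivered
            x_hat = value
        # on dropout the decoder simply holds its previous estimate
        print(f"k={k}  x={x:+.3f}  x_hat={x_hat:+.3f}")
    ```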

  12. Fusimotor control of spindle sensitivity regulates central and peripheral coding of joint angles.

    Science.gov (United States)

    Lan, Ning; He, Xin

    2012-01-01

    Proprioceptive afferents from muscle spindles encode information about peripheral joint movements for the central nervous system (CNS). The sensitivity of the muscle spindle depends nonlinearly on the activation of gamma (γ) motoneurons in the spinal cord, which receive inputs from the motor cortex. How fusimotor control of spindle sensitivity affects proprioceptive coding of joint position is not clear. Furthermore, what information is carried in the fusimotor signal from the motor cortex to the muscle spindle is largely unknown. In this study, we addressed the issue of communication between the central and peripheral sensorimotor systems using a computational approach based on the virtual arm (VA) model. In simulation experiments within the operational range of joint movements, the gamma static commands (γ(s)) to the spindles of both mono-articular and bi-articular muscles were hypothesized (1) to remain constant, (2) to be modulated with joint angles linearly, and (3) to be modulated with joint angles nonlinearly. Simulation results revealed a nonlinear landscape of Ia afferents with respect to both γ(s) activation and joint angle. Among the three hypotheses, the constant and linear strategies did not yield Ia responses that matched the experimental data and were therefore rejected as plausible strategies of spindle sensitivity control. However, if γ(s) commands were quadratically modulated with joint angles, a robust linear relation between Ia afferents and joint angles could be obtained in both mono-articular and bi-articular muscles. With the quadratic strategy of spindle sensitivity control, γ(s) commands may serve as the CNS outputs that inform the periphery of the central coding of joint angles. The results suggest that information about joint angles may be communicated between the CNS and muscles via the descending γ(s) efferent and Ia afferent signals.

  13. Control charts for identifying systematic errors using control sera to detect antibody to Salmonella in an indirect ELISA

    DEFF Research Database (Denmark)

    Bak, H.; Barfod, Kristen

    2008-01-01

    This study evaluated the preparation of Shewhart's control charts using the concept of rational subgroups for monitoring the Salmonella antibody ELISA used for surveillance of Danish pig herds. Control charts were prepared for a buffer control sample, a negative serum sample and a positive serum...

  14. The bioelectric code: An ancient computational medium for dynamic control of growth and form.

    Science.gov (United States)

    Levin, Michael; Martyniuk, Christopher J

    2018-02-01

    What determines large-scale anatomy? DNA does not directly specify geometrical arrangements of tissues and organs, and a process of encoding and decoding for morphogenesis is required. Moreover, many species can regenerate and remodel their structure despite drastic injury. The ability to obtain the correct target morphology from a diversity of initial conditions reveals that the morphogenetic code implements a rich system of pattern-homeostatic processes. Here, we describe an important mechanism by which cellular networks implement pattern regulation and plasticity: bioelectricity. All cells, not only nerves and muscles, produce and sense electrical signals; in vivo, these processes form bioelectric circuits that harness individual cell behaviors toward specific anatomical endpoints. We review emerging progress in reading and re-writing anatomical information encoded in bioelectrical states, and discuss the approaches to this problem from the perspectives of information theory, dynamical systems, and computational neuroscience. Cracking the bioelectric code will enable much-improved control over biological patterning, advancing basic evolutionary developmental biology as well as enabling numerous applications in regenerative medicine and synthetic bioengineering. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Temporal coding of brain patterns for direct limb control in humans

    Directory of Open Access Journals (Sweden)

    Gernot Mueller-Putz

    2010-06-01

    For individuals with a high spinal cord injury (SCI) not only the lower limbs but also the upper extremities are paralyzed. A neuroprosthesis can be used to restore the lost hand and arm function in such tetraplegics. The main problem for this group of individuals, however, is the reduced ability to voluntarily operate device controllers. A Brain-Computer Interface provides a non-manual alternative to conventional input devices by translating brain activity patterns into control commands. We show that the temporal coding of individual mental imagery patterns can be used to control two independent degrees of freedom - grasp and elbow function - of an artificial robotic arm by utilizing a minimum number of EEG scalp electrodes. We describe the procedure from the initial screening to the final application. Of eight naïve subjects participating in on-line feedback experiments, four were able to voluntarily control an artificial arm by inducing one motor imagery pattern derived from one EEG derivation only.

  16. Failing to Forget: Prospective Memory Commission Errors Can Result from Spontaneous Retrieval and Impaired Executive Control

    Science.gov (United States)

    Scullin, Michael K.; Bugg, Julie M.

    2013-01-01

    Prospective memory (PM) research typically examines the ability to remember to execute delayed intentions but often ignores the ability to forget finished intentions. We had participants perform (or not perform; control group) a PM task and then instructed them that the PM task was finished. We later (re)presented the PM cue. Approximately 25% of…

  17. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
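
    The re-interpretation mentioned above views a Reed-Solomon codeword as the evaluations of the message polynomial at distinct field points; the sketch below does this over the small prime field GF(17) for readability (practical systems typically use GF(2^8)).

    ```python
    P = 17  # small prime field GF(17); practical systems usually use GF(2^8)

    def rs_encode(message, n):
        """Evaluate the message polynomial at n distinct field points.
        Any k of the n values determine the degree-(k-1) polynomial, so up
        to (n - k) // 2 symbol errors can be corrected."""
        def poly_eval(coeffs, x):
            acc = 0
            for c in reversed(coeffs):  # Horner's rule, reduced mod P
                acc = (acc * x + c) % P
            return acc
        return [poly_eval(message, x) for x in range(n)]

    codeword = rs_encode([3, 0, 1], n=7)  # k=3 message symbols, corrects 2 errors
    print(codeword)  # [3, 4, 7, 12, 2, 11, 5]
    ```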

  18. Sharing is caring, but not error free: transparency of granular controls for sharing personal health information in social networks.

    Science.gov (United States)

    Hartzler, Andrea; Skeels, Meredith M; Mukai, Marlee; Powell, Christopher; Klasnja, Predrag; Pratt, Wanda

    2011-01-01

    When patients share personal health information with family and friends, their social networks become better equipped to help them through serious health situations. Thus, patients need tools that enable granular control over what personal health information is shared and with whom within social networks. Yet, we know little about how well such tools support patients' complex sharing needs. We report on a lab study in which we examined the transparency of sharing interfaces that display an overview and details of information sharing with network connections in an internet-based personal health information management tool called HealthWeaver. Although participants found the interfaces easy to use and were highly confident in their interpretation of the sharing controls, several participants made errors in determining what information was shared with whom. Our findings point to the critical importance of future work that examines design of usable interfaces that offer transparent granularity in support of patients' complex information sharing practices.

  19. On the Need of Novel Medium Access Control Schemes for Network Coding enabled Wireless Mesh Networks

    DEFF Research Database (Denmark)

    Paramanathan, Achuthan; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani

    2013-01-01

    that network coding will improve the throughput in such systems, but our novel medium access scheme improves the performance in the cross topology by another 66 % for network coding and 150 % for classical forwarding in theory. These gains translate in a theoretical gain of 33 % of network coding over...

  20. Control rod drop transient analysis with the coupled parallel code pCTF-PARCSv2.7

    International Nuclear Information System (INIS)

    Ramos, Enrique; Roman, Jose E.; Abarca, Agustín; Miró, Rafael; Bermejo, Juan A.

    2016-01-01

    Highlights: • An MPI parallel version of the thermal–hydraulic subchannel code COBRA-TF has been developed. • The parallel code has been coupled to the 3D neutron diffusion code PARCSv2.7. • The new codes are validated with a control rod drop transient. - Abstract: In order to reduce the response time when simulating large reactors in detail, a parallel version of the thermal–hydraulic subchannel code COBRA-TF (CTF) has been developed using the standard Message Passing Interface (MPI). The parallelization is oriented to reactor cells, so it is best suited for models consisting of many cells. The generation of the Jacobian matrix is parallelized, in such a way that each processor is in charge of generating the data associated with a subset of cells. Also, the solution of the linear system of equations is done in parallel, using the PETSc toolkit. With the goal of creating a powerful tool to simulate the reactor core behavior during asymmetrical transients, the 3D neutron diffusion code PARCSv2.7 (PARCS) has been coupled with the parallel version of CTF (pCTF) using the Parallel Virtual Machine (PVM) technology. In order to validate the correctness of the parallel coupled code, a control rod drop transient has been simulated, comparing the results with real experimental measurements acquired during an NPP test.
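
    A schematic sketch of the cell-oriented partitioning idea with mpi4py: each rank assembles only the Jacobian rows of its own contiguous block of cells. The row contents and sizes are placeholders, not the CTF data structures.

    ```python
    from mpi4py import MPI

    def jacobian_row(cell):
        # placeholder for the cell physics (derivatives of the cell equations)
        return [0.0, 1.0, 0.0]

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_cells = 10_000                    # hypothetical model size
    lo = rank * n_cells // size         # contiguous block owned by this rank
    hi = (rank + 1) * n_cells // size

    local_rows = {cell: jacobian_row(cell) for cell in range(lo, hi)}

    # a parallel linear solve (e.g. via PETSc) would follow; here we only
    # confirm that the partition covers every cell exactly once
    counts = comm.gather(hi - lo, root=0)
    if rank == 0:
        assert sum(counts) == n_cells
        print("cells per rank:", counts)
    ```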

  1. Coding and decoding for code division multiple user communication systems

    Science.gov (United States)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  2. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.

    2014-12-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  3. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2014-01-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  4. Neurometaplasticity: Glucoallostasis control of plasticity of the neural networks of error commission, detection, and correction modulates neuroplasticity to influence task precision

    Science.gov (United States)

    Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.

    2017-12-01

    The term "metaplasticity" is a recent one, meaning plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlies many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural networks that control error commission, detection and correction. Here we review recent works which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.

  5. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment.

    Science.gov (United States)

    Brookings, Ted; Goeritz, Marie L; Marder, Eve

    2014-11-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.
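
    Since the control adjustment is described as closely related to a Luenberger observer, the minimal linear sketch below may help: the estimate follows the model dynamics plus a weak correction proportional to the output error. The system matrices are generic illustrations, not the neuron model.

    ```python
    import numpy as np

    # x' = A x + B u, y = C x: an illustrative 2-state system
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[2.0], [3.0]])   # observer gain: the "weak control adjustment"

    dt = 1e-3
    x = np.array([[1.0], [0.0]])   # true state
    x_hat = np.zeros((2, 1))       # observer starts from the wrong state
    u = np.array([[0.0]])
    for _ in range(5000):
        y = C @ x                  # measurement from the "biological" system
        x = x + dt * (A @ x + B @ u)
        # observer: model dynamics plus a correction toward the measured output
        x_hat = x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))
    print(float(np.abs(x - x_hat).max()))  # estimation error decays toward zero
    ```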

  6. Achievable Performance of Zero-Delay Variable-Rate Coding in Rate-Constrained Networked Control Systems with Channel Delay

    DEFF Research Database (Denmark)

    Barforooshan, Mohsen; Østergaard, Jan; Stavrou, Fotios

    2017-01-01

    This paper presents an upper bound on the minimum data rate required to achieve a prescribed closed-loop performance level in networked control systems (NCSs). The considered feedback loop includes a linear time-invariant (LTI) plant with a single measurement output and a single control input. Moreover ... in this NCS, a causal but otherwise unconstrained feedback system carries out zero-delay variable-rate coding and control. Between the encoder and decoder, data is exchanged over a rate-limited noiseless digital channel with a known constant time delay. Here we propose a linear source-coding scheme...

  7. DSOGI-PLL Based Power Control Method to Mitigate Control Errors Under Disturbances of Grid Connected Hybrid Renewable Power Systems

    Directory of Open Access Journals (Sweden)

    Mehmet Emin Meral

    2018-01-01

    The control of power converter devices is one of the main research lines for interfaced renewable energy sources such as solar cells and wind turbines. Suitable control algorithms should therefore be designed to regulate power or current properly and attain good power quality under disturbances such as voltage sag/swell, voltage unbalance and fluctuations, long interruptions, and harmonics. Various synchronisation-technique-based control strategies have been implemented for hybrid power system applications under unbalanced conditions in the literature. In this paper, a synchronisation-algorithm-based Proportional-Resonant (PR) power/current controller is applied to the hybrid power system (solar cell + wind turbine + grid), and a Dual Second Order Generalized Integrator Phase-Locked Loop (DSOGI-PLL) based PR controller in the stationary reference frame provides a solution to overcome these problems. The influence of various cases, such as unbalance and harmonic conditions, is examined, analysed and compared for the PR controllers based on DSOGI-PLL and SRF-PLL. The results verify the effectiveness and correctness of the proposed DSOGI-PLL based power control method.
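
    The abstract does not restate the controller; for reference, a commonly used damped (non-ideal) form of the proportional-resonant transfer function, tuned at the fundamental grid frequency ω_0 with proportional gain K_p, resonant gain K_r and bandwidth ω_c, is the textbook expression

    ```latex
    G_{PR}(s) = K_p + \frac{2 K_r \omega_c s}{s^2 + 2 \omega_c s + \omega_0^2}.
    ```

    The ideal form, K_p + K_r s / (s^2 + ω_0^2), gives infinite gain exactly at ω_0; the damped form trades a finite peak for robustness to grid-frequency drift.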

  8. Impact of Glucose Meter Error on Glycemic Variability and Time in Target Range During Glycemic Control After Cardiovascular Surgery.

    Science.gov (United States)

    Karon, Brad S; Meeusen, Jeffrey W; Bryant, Sandra C

    2015-08-25

    We retrospectively studied the impact of glucose meter error on the efficacy of glycemic control after cardiovascular surgery. Adult patients undergoing intravenous insulin glycemic control therapy after cardiovascular surgery, with 12-24 consecutive glucose meter measurements used to make insulin dosing decisions, had glucose values analyzed to determine glycemic variability by both standard deviation (SD) and continuous overall net glycemic action (CONGA), and percentage of glucose values in the target glucose range (110-150 mg/dL). Information was recorded for 70 patients during each of 2 periods, with different glucose meters used to measure glucose and dose insulin during each period but no other changes to the glycemic control protocol. Accuracy and precision of each meter were also compared using whole blood specimens from ICU patients. Glucose meter 1 (GM1) had a median bias of 11 mg/dL compared to a laboratory reference method, while glucose meter 2 (GM2) had a median bias of 1 mg/dL. GM1 and GM2 differed little in precision (CV = 2.0% and 2.7%, respectively). Compared to the period when GM1 was used to make insulin dosing decisions, patients whose insulin dose was managed by GM2 demonstrated reduced glycemic variability as measured by SD (13.7 vs 21.6 mg/dL, P [...]). Reduced glucose meter error (bias) was associated with decreased glycemic variability and increased percentage of values in the target glucose range for patients placed on intravenous insulin therapy following cardiovascular surgery. © 2015 Diabetes Technology Society.
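
    For reference, glycemic variability metrics like those reported can be computed as below; CONGA-n is taken here as the standard deviation of differences between readings n hours apart, which is the usual definition but should be checked against the study's analysis plan. All values are illustrative.

    ```python
    import numpy as np

    def glycemic_metrics(glucose, n=1, samples_per_hour=1):
        """SD and CONGA-n for a glucose series (mg/dL). CONGA-n is taken as
        the SD of differences between readings n hours apart (assumed
        definition; verify against the study's analysis plan)."""
        g = np.asarray(glucose, dtype=float)
        lag = n * samples_per_hour
        diffs = g[lag:] - g[:-lag]
        return g.std(ddof=1), diffs.std(ddof=1)

    glucose = [132, 140, 128, 150, 143, 129, 137, 148, 135, 126]  # illustrative
    sd, conga1 = glycemic_metrics(glucose)
    print(f"SD = {sd:.1f} mg/dL, CONGA-1 = {conga1:.1f} mg/dL")
    ```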

  9. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. The 2D DWT can easily be extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to temporal splitting of the sequence. In fact, 3D block-based video coders produce jerks, which appear at temporal block borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding that combines the advantages of wavelet coding (performance, scalability) with acceptably reduced memory requirements, no additional CPU complexity, and no jerks. We also propose an efficient quality allocation procedure to ensure constant quality over time.
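
    A minimal illustration of why a temporal scan-based transform limits memory: a one-level temporal Haar DWT only ever needs two consecutive frames at a time, instead of buffering a whole 3D block. This toy transform is not the paper's filter bank.

    ```python
    import numpy as np

    def temporal_haar_step(f0, f1):
        """One step of the temporal Haar DWT on two consecutive frames:
        returns a low-pass (average) and a high-pass (detail) subband frame."""
        return (f0 + f1) / np.sqrt(2), (f0 - f1) / np.sqrt(2)

    rng = np.random.default_rng(1)
    frames = rng.integers(0, 256, size=(8, 4, 4)).astype(float)  # tiny 8-frame clip

    # scan-based: consume frames two at a time instead of buffering a whole block
    subbands = [temporal_haar_step(frames[t], frames[t + 1]) for t in range(0, 8, 2)]
    print(len(subbands), subbands[0][0].shape)  # 4 pairs of 4x4 subband frames
    ```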

  10. ANALYSIS OF EXISTING AND PROSPECTIVE TECHNICAL CONTROL SYSTEMS OF NUMERIC CODES AUTOMATIC BLOCKING

    Directory of Open Access Journals (Sweden)

    A. M. Beznarytnyy

    2013-09-01

    Purpose. To identify the characteristic features of existing engineering control systems for numeric-code automatic blocking, to identify their advantages and disadvantages, and to analyze the possibility of using them for diagnosing the status of automatic blocking devices and for setting targets for the development of new diagnostic systems. Methodology. To achieve these targets, the theoretical-analytical method and the method of functional analysis have been used. Findings. The analysis of existing and future facilities for the remote control and diagnostics of automatic blocking devices showed that the existing diagnostic systems are not sufficiently informative, being designed primarily to monitor discrete parameters, which in turn does not allow a decision-support subsystem to be built on top of them. For the development of new technical diagnostic systems, it is proposed to use the principle of centralized distributed processing of diagnostic data and to include a decision-support subsystem in the diagnostic system; this will reduce the amount of work required to maintain the blocking devices and reduce the recovery time after a failure occurs. Originality. The currently existing engineering control facilities for automatic blocking cannot provide a full assessment of the state of block signalling and interlocking. Criteria for the development of new technical diagnostic systems with increased amounts of diagnostic information and its automatic analysis are proposed. Practical value. These results of the analysis can be used in practice to select the technical controls for automatic blocking devices, as well as for the further development of automatic blocking diagnostic systems, allowing a gradual transition from a planned preventive maintenance model to service based on the actual state of the monitored devices.

  11. Berkson error adjustment and other exposure surrogates in occupational case-control studies, with application to the Canadian INTEROCC study.

    Science.gov (United States)

    Oraby, Tamer; Sivaganesan, Siva; Bowman, Joseph D; Kincl, Laurel; Richardson, Lesley; McBride, Mary; Siemiatycki, Jack; Cardis, Elisabeth; Krewski, Daniel

    2018-05-01

    Many epidemiological studies assessing the relationship between exposure and disease are carried out without data on individual exposures. When this barrier is encountered in occupational studies, the subject exposures are often evaluated with a job-exposure matrix (JEM), which consists of mean exposure for occupational categories measured on a comparable group of workers. One of the objectives of the seven-country case-control study of occupational exposure and brain cancer risk, INTEROCC, was to investigate the relationship of occupational exposure to electromagnetic fields (EMF) in different frequency ranges and brain cancer risk. In this paper, we use the Canadian data from INTEROCC to estimate the odds of developing brain tumours due to occupational exposure to EMF. The first step was to find the best EMF exposure surrogate among the arithmetic mean, the geometric mean, and the mean of log-normal exposure distribution for each occupation in the JEM, in comparison to Berkson error adjustments via numerical approximation of the likelihood function. Contrary to previous studies of Berkson errors in JEMs, we found that the geometric mean was the best exposure surrogate. This analysis provided no evidence that cumulative lifetime exposure to extremely low frequency magnetic fields increases brain cancer risk, a finding consistent with other recent epidemiological studies.
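
    The three candidate surrogates compared above can be computed from a job category's exposure measurements as below; the data are synthetic lognormal draws, and the "mean of the log-normal exposure distribution" is taken as exp(μ + σ²/2) of the fitted log-scale parameters (an assumption about the paper's definition).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    exposures = rng.lognormal(mean=0.5, sigma=0.8, size=200)  # synthetic job group

    arithmetic = exposures.mean()
    geometric = np.exp(np.log(exposures).mean())
    mu, sigma = np.log(exposures).mean(), np.log(exposures).std(ddof=1)
    lognormal_mean = np.exp(mu + sigma ** 2 / 2)  # mean of the fitted lognormal

    print(f"AM={arithmetic:.2f}  GM={geometric:.2f}  LN mean={lognormal_mean:.2f}")
    ```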

  12. A Study on Large Display Panel Design for the Countermeasures against Team Errors within the Main Control Room of APR-1400

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The personal aspect of human error has mainly been addressed through education and training. In the system aspect, however, the education and training system needs to be reconsidered for more effective reduction of human errors arising from various system hazards. Traditionally, education and training systems focus not on team skills, such as communication, situational awareness, and coordination, but on individual knowledge, skill, and attitude. The team factor, however, is one of the crucial issues for reducing human errors in most industries. In this study, we identify the emerging types of team errors, especially in digitalized control rooms of nuclear power plants such as the APR-1400 main control room. Most work in the nuclear industry is performed by a team of more than two persons. Even though individual errors can be detected and recovered by qualified others and/or a well-trained team, it is rather seldom that errors made by the team can be easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who appropriately interact with each other, and the team is a dependent aggregate that accomplishes a valuable goal. Team error is one of the typical organizational errors that may occur while performing operations in nuclear power plants. The large display panel is a representative feature of the digitalized control room. As a group-view display, the large display panel provides a plant overview to the operators. However, in terms of team performance and team errors, the large display panel is still under discussion, because it was designed merely as a passive display. In this study, we propose a revised large display panel that is integrated with several alternative interfaces to counter feasible team errors.

  13. A Study on Large Display Panel Design for the Countermeasures against Team Errors within the Main Control Room of APR-1400

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Lee, Yong Hee

    2015-01-01

    The personal aspect of human error has mainly been addressed through education and training. In the system aspect, however, the education and training system needs to be reconsidered for more effective reduction of human errors arising from various system hazards. Traditionally, education and training systems focus not on team skills, such as communication, situational awareness, and coordination, but on individual knowledge, skill, and attitude. The team factor, however, is one of the crucial issues for reducing human errors in most industries. In this study, we identify the emerging types of team errors, especially in digitalized control rooms of nuclear power plants such as the APR-1400 main control room. Most work in the nuclear industry is performed by a team of more than two persons. Even though individual errors can be detected and recovered by qualified others and/or a well-trained team, it is rather seldom that errors made by the team can be easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who appropriately interact with each other, and the team is a dependent aggregate that accomplishes a valuable goal. Team error is one of the typical organizational errors that may occur while performing operations in nuclear power plants. The large display panel is a representative feature of the digitalized control room. As a group-view display, the large display panel provides a plant overview to the operators. However, in terms of team performance and team errors, the large display panel is still under discussion, because it was designed merely as a passive display. In this study, we propose a revised large display panel that is integrated with several alternative interfaces to counter feasible team errors.

  14. Simulation and verification of the EBR-II automatic control rod drive system with continuous system modeling codes

    International Nuclear Information System (INIS)

    Larson, H.A.; Dean, E.M.

    1985-01-01

    The two computer programs successfully model the EBR-II ACRDS; in fact, this is very convenient for pre-sampling the consequences of a desired power movement. The ACRDS is to be modified so that the error signal is a comparison between demand position and measured position. The purpose of this change is to permit pseudo-random binary types of reactivity transfer function experiments at EBR-II. Questions about the computer software and hardware needed to accommodate this change can be quickly answered with either of the verified codes discussed here.

  15. Coherence protection by random coding

    International Nuclear Information System (INIS)

    Brion, E; Akulin, V M; Dumer, I; Harel, G; Kurizki, G

    2005-01-01

    We show that the multidimensional Zeno effect combined with non-holonomic control allows one to efficiently protect quantum systems from decoherence by a method similar to classical random coding. The method is applicable to arbitrary error-inducing Hamiltonians and general quantum systems. The quantum encoding approaches the Hamming upper bound for large dimension increases. Applicability of the method is demonstrated with a seven-qubit toy computer

  16. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  17. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  18. Project of decree relative to the licensing and statement system of nuclear activities and to their control and bearing various modifications of the public health code and working code

    International Nuclear Information System (INIS)

    2005-01-01

    This decree concerns the control of high-level sealed radioactive sources and orphan sources. Its objectives are to introduce administrative simplification, especially in the licensing and statement system for radiation sources, to reinforce the control measures provided for by the public health code and by the employment code, and to bring precision and complements to the wording of several already existing arrangements. (N.C.)

  19. Improved read disturb and write error rates in voltage-control spintronics memory (VoCSM) by controlling energy barrier height

    Science.gov (United States)

    Inokuchi, T.; Yoda, H.; Kato, Y.; Shimizu, M.; Shirotori, S.; Shimomura, N.; Koi, K.; Kamiguchi, Y.; Sugiyama, H.; Oikawa, S.; Ikegami, K.; Ishikawa, M.; Altansargai, B.; Tiwari, A.; Ohsawa, Y.; Saito, Y.; Kurobe, A.

    2017-06-01

    A hybrid writing scheme that combines the spin Hall effect and voltage-controlled magnetic-anisotropy effect is investigated in Ta/CoFeB/MgO/CoFeB/Ru/CoFe/IrMn junctions. The write current and control voltage are applied to Ta and CoFeB/MgO/CoFeB junctions, respectively. The critical current density required for switching the magnetization in CoFeB was modulated 3.6-fold by changing the control voltage from -1.0 V to +1.0 V. This modulation of the write current density is explained by the change in the surface anisotropy of the free layer from 1.7 mJ/m2 to 1.6 mJ/m2, which is caused by the electric field applied to the junction. The read disturb rate and write error rate, which are important performance parameters for memory applications, are drastically improved, and no error was detected in 5 × 108 cycles by controlling read and write sequences.

  20. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  1. Comparison benchmark between tokamak simulation code and TokSys for Chinese Fusion Engineering Test Reactor vertical displacement control design

    International Nuclear Information System (INIS)

    Qiu Qing-Lai; Xiao Bing-Jia; Guo Yong; Liu Lei; Wang Yue-Hang

    2017-01-01

    Vertical displacement events (VDEs) are a big challenge to existing tokamak equipment and to that being designed. As a Chinese next-step tokamak, the Chinese Fusion Engineering Test Reactor (CFETR) has to pay attention to VDE studies with full-fledged numerical codes during its conceptual design. The tokamak simulation code (TSC) is a free-boundary, time-dependent, axisymmetric tokamak simulation code developed at PPPL, which advances the MHD equations describing the evolution of the plasma in a rectangular domain. The electromagnetic interactions between the surrounding conductor circuits and the plasma are solved self-consistently. The TokSys code is a generic modeling and simulation environment developed at GA. Its RZIP model treats the plasma as a fixed spatial distribution of currents which couple with the surrounding conductors through circuit equations. Both codes have been individually used for VDE studies on many tokamak devices, such as JT-60U, EAST, NSTX, DIII-D, and ITER. Considering the model differences, benchmark work is needed to answer whether they reproduce each other's results correctly. In this paper, the TSC and TokSys codes are used for analyzing the CFETR vertical instability passive and active control designs simultaneously. It is shown that, with the same inputs, the results from these two codes agree with each other. (paper)

  2. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in the control forces. Engineering different methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections, on open quantum systems and on quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction comprises chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite-program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve the computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  3. A Distributed Flow Rate Control Algorithm for Networked Agent System with Multiple Coding Rates to Optimize Multimedia Data Transmission

    Directory of Open Access Journals (Sweden)

    Shuai Zeng

    2013-01-01

    With the development of wireless technologies, mobile communication is applied more and more extensively in various walks of life. The social network of both fixed and mobile users can be seen as a networked agent system. At present, many kinds of devices and access network technologies are widely used, and different users in this networked agent system may need multimedia data at different coding rates due to their heterogeneous demands. This paper proposes a distributed flow rate control algorithm to optimize multimedia data transmission in a networked agent system where various coding rates coexist. In the proposed algorithm, the transmission paths and upload bandwidth for data of different coding rates between the source node and the fixed and mobile nodes are appropriately arranged and controlled. On the one hand, the algorithm can provide user nodes with data at differentiated coding rates and the corresponding flow rates. On the other hand, it networks the different coding rate data and user nodes, which realizes the sharing of the upload bandwidth of user nodes that require different coding rate data. The study conducts mathematical modeling of the proposed algorithm and compares a system that adopts the proposed algorithm with the existing system, based on simulation experiments and mathematical analysis. The results show that the system adopting the proposed algorithm achieves higher upload bandwidth utilization of user nodes and lower upload bandwidth consumption of the source node.

  4. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  5. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  6. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed

    2016-06-27

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works have aimed to balance the effects of these two important IDNC metrics, but none of them studied the further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to accurately anticipate the decoding delay increase at each user. The paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time than the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the channel erasure probability increases.
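
    A minimal sketch of the general shape of such a heuristic (the data structures and criticality weights are illustrative assumptions, not the paper's layered algorithm): serve the most completion-critical users first, and grow the XOR combination only while the coded packet stays instantly decodable for every user already served.

        def instantly_decodable(combo, has, wants):
            # A packet XORing 'combo' is instantly decodable for a user iff the
            # user wants exactly one packet of the combo and already has the rest.
            missing = [p for p in combo if p not in has]
            return len(missing) == 1 and missing[0] in wants

        def greedy_coded_packet(users):
            # users: dicts with 'has'/'wants' packet-id sets and a criticality
            # 'weight' (e.g. reflecting accumulated decoding delay), served in
            # decreasing order of criticality.
            combo, served = set(), []
            for u in sorted(users, key=lambda v: -v["weight"]):
                if combo and instantly_decodable(combo, u["has"], u["wants"]):
                    served.append(u)
                    continue
                for p in u["wants"]:
                    cand = combo | {p}
                    if all(instantly_decodable(cand, v["has"], v["wants"])
                           for v in served + [u]):
                        combo, served = cand, served + [u]
                        break
            return combo

        users = [
            {"has": {1, 2}, "wants": {3}, "weight": 3},  # most critical user
            {"has": {2, 3}, "wants": {1}, "weight": 2},
            {"has": {1, 3}, "wants": {2}, "weight": 1},
        ]
        print(greedy_coded_packet(users))  # {1, 2, 3}: one XOR serves all three users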

  7. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works have aimed to balance the effects of these two important IDNC metrics, but none of them studied the further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to accurately anticipate the decoding delay increase at each user. The paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time than the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the channel erasure probability increases.

  8. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks.

    Science.gov (United States)

    Dai, Lengshi; Shinn-Cunningham, Barbara G

    2016-01-01

    Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics.
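
    The "factoring out" step described here is essentially a partial-correlation analysis. A minimal sketch with synthetic data (the variable names and the data-generating model are illustrative assumptions, not the authors' pipeline):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 40
        efr = rng.normal(size=n)   # subcortical coding strength (EFR-derived)
        att = rng.normal(size=n)   # attentional modulation of cortical ERPs
        behavior = 0.6 * efr + 0.5 * att + 0.3 * rng.normal(size=n)  # synthetic scores

        def residualize(y, x):
            # Remove the least-squares contribution of x (plus intercept) from y.
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return y - X @ beta

        r_raw = np.corrcoef(behavior, att)[0, 1]
        r_part = np.corrcoef(residualize(behavior, efr), residualize(att, efr))[0, 1]
        print(f"raw r = {r_raw:.2f}; r after factoring out EFR strength = {r_part:.2f}")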

  9. Contributions of sensory coding and attentional control to individual differences in performance in spatial auditory selective attention tasks

    Directory of Open Access Journals (Sweden)

    Lengshi Dai

    2016-10-01

    Full Text Available Listeners with normal hearing thresholds differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials from the scalp (ERPs, reflecting cortical responses to sound), and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with normal hearing thresholds can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics.

  10. Optical 'dampening' of the refractive error to axial length ratio: implications for outcome measures in myopia control studies.

    Science.gov (United States)

    Cruickshank, Fiona E; Logan, Nicola S

    2018-05-01

    To gauge the extent to which differences in the refractive error/axial length relationship predicted by geometrical optics are observed in actual refractive/biometric data. This study is a retrospective analysis of existing data. Right eye refractive error [RX] and axial length [AXL] data were collected on 343 6-to-7-year-old children [mean 7.18 years (S.D. 0.35)], 294 12-to-13-year-old children [mean 13.12 years (S.D. 0.32)] and 123 young adults aged 18-to-25 years [mean 20.56 years (S.D. 1.91)]. Distance RX was measured with the Shin-Nippon NVision-K 5001 infrared open-field autorefractor. Child participants were cyclopleged prior to data collection (1% cyclopentolate hydrochloride). Myopia was defined as a mean spherical equivalent [MSE] ≤ -0.50 D. Axial length was measured using the Zeiss IOLMaster 500. Optical modelling was based on ray tracing and manipulation of the parameters of a Gullstrand reduced model eye. There was a myopic shift in mean MSE with age (6-7 years +0.87 D, 12-13 years -0.06 D and 18-25 years -1.41 D), associated with an increase in mean AXL (6-7 years 22.70 mm, 12-13 years 23.49 mm and 18-25 years 23.98 mm). There was a significant negative correlation between MSE and AXL for all age groups. Geometrical optics theory predicts a reduction in the RX:AXL ratio for longer eyes. The participant data, although adhering to this theory, show a reduced effect, with longer eyes having a lower refractive error to axial length ratio than predicted by model eye calculations. We propose that, when comparing efficacy in myopia control intervention studies, consideration should be given to the dampening effect seen with a longer eye. © 2018 The Authors. Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of College of Optometrists.
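
    The predicted shrinkage of the RX:AXL ratio in longer eyes falls directly out of a reduced model eye of the kind used for the optical modelling here. A brief sketch assuming the standard reduced-eye constants (60 D power, image-space index 4/3) rather than the paper's fitted parameters:

        # Reduced model eye: one refracting surface of power F (diopters) with
        # image-space index n'. For axial length L, the ocular refraction K
        # satisfies K + F = n'/L, hence K = n'/L - F.
        N_PRIME = 4.0 / 3.0
        F_EYE = 60.0  # D

        def refractive_error(axial_length_mm):
            return N_PRIME / (axial_length_mm / 1000.0) - F_EYE

        for L in (22.22, 24.0, 26.0):
            k = refractive_error(L)
            slope = (refractive_error(L + 0.1) - k) / 0.1  # local D-per-mm ratio
            print(f"L = {L:5.2f} mm: RX = {k:+6.2f} D, ratio = {slope:5.2f} D/mm")

    The ratio is about -2.7 D per mm of axial elongation near emmetropia and flattens toward -2 D per mm in longer eyes, which is the geometrical-optics trend against which the participant data are compared.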

  11. Discrete neurochemical coding of distinguishable motivational processes: insights from nucleus accumbens control of feeding.

    Science.gov (United States)

    Baldo, Brian A; Kelley, Ann E

    2007-04-01

    The idea that nucleus accumbens (Acb) dopamine transmission contributes to the neural mediation of reward, at least in a general sense, has achieved wide acceptance. Nevertheless, debate remains over the precise nature of dopamine's role in reward and even over the nature of reward itself. In the present article, evidence is reviewed from studies of food intake, feeding microstructure, instrumental responding for food reinforcement, and dopamine efflux associated with feeding, which suggests that reward processing in the Acb is best understood as an interaction among distinct processes coded by discrete neurotransmitter systems. In agreement with several theories of Acb dopamine function, it is proposed here that allocation of motor effort in seeking food or food-associated conditioned stimuli can be dissociated from computations relevant to the hedonic evaluation of food during the consummatory act. The former appears to depend upon Acb dopamine transmission and the latter upon striatal opioid peptide release. Moreover, dopamine transmission may play a role in 'stamping in' associations between motor acts and goal attainment and perhaps also neural representations corresponding to rewarding outcomes. Finally, evidence is reviewed that amino acid transmission specifically in the Acb shell acts as a central 'circuit breaker' to flexibly enable or terminate the consummatory act, via descending connections to hypothalamic feeding control systems. The heuristic framework outlined above may help explain why dopamine-compromising manipulations that strongly diminish instrumental goal-seeking behaviors leave consummatory activity relatively unaffected.

  12. Russian Army Mat as a Code System Controlling Behaviour in the Russian army

    Directory of Open Access Journals (Sweden)

    Vadim Mikhailin

    2004-09-01

    Full Text Available This text is a shortened and restructured version of my article "Russkii mat kak muzhskoi obstsennyi kod: problema proiskhozhdeniia i evoliutsiia statusa", published in no. 43 of Novoe Literaturnoe Obozrenie, resting on a somewhat different factual foundation. Tracing the genesis of mat to the specific modes of behaviour peculiar to archaic male warrior bands, I show that the military milieu (and some other social strata structurally close to it) has always been, and remains, a wholly natural setting for speaking mat. Moreover, within these strata mat has always carried a rather specific function connected with the creation of one's identity as a soldier, and its use offers various, and sometimes the only possible, means of influencing one's equals or subordinates (or even superiors). Mat is, in fact, the basis for a whole code system controlling different practices of military behaviour. Particular attention is given to the problems of new recruits' adaptation and of national specificities in the late Soviet and modern Russian army.

  13. Using game theory for perceptual tuned rate control algorithm in video coding

    Science.gov (United States)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game-theoretic rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of the resource (bits) and to optimize their quantization scales while considering the perceptual properties of the Human Visual System. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under the global objective of achieving the best quality within the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (the macroblocks). The simulation results confirm the algorithm's ability to achieve accurate bit rates with good perceptual quality and to maintain a stable buffer level.
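
    As a rough numerical illustration of the macroblock-level bargaining game, the sketch below computes a Nash bargaining bit allocation for a toy utility model (the log utility, the complexity values, and the bit budget are assumptions made for illustration; the paper's utilities additionally incorporate perceptual weighting):

        import numpy as np
        from scipy.optimize import minimize

        c = np.array([1.0, 2.0, 4.0])  # macroblock coding complexities (hypothetical)
        B = 3000.0                     # frame bit budget

        # The Nash Bargaining Solution maximizes the product of the players'
        # utilities, i.e. the sum of log-utilities, under the shared bit budget.
        # Toy utility of macroblock i: u_i(b_i) = log(1 + b_i / c_i).
        def neg_nash_product(b):
            return -np.sum(np.log(np.log1p(b / c)))

        res = minimize(
            neg_nash_product,
            x0=np.full(len(c), B / len(c)),
            method="SLSQP",
            bounds=[(1e-6, B)] * len(c),
            constraints=[{"type": "eq", "fun": lambda b: np.sum(b) - B}],
        )
        print(np.round(res.x))  # more complex macroblocks receive more bits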

  14. RETRAN code analysis of Tsuruga-2 plant chemical volume control system (CVCS) reactor coolant leakage incident

    International Nuclear Information System (INIS)

    Kawai, Hiroshi

    2002-01-01

    In the Chemical Volume Control System (CVCS) reactor primary coolant leakage incident, which occurred in Tsuruga-2 (4-loop PWR, 3,423 MWt, 1,160 MWe) on July 12, 1999, it took about 14 hours to isolate the leakage. The delayed isolation and the large amount of leakage became a social concern, and effective procedure modifications were studied. Three betterments were proposed, based on a qualitative analysis, to reduce the pressure and temperature of the primary loop as fast as possible with the existing plant facilities while maintaining sufficient subcooling of the primary loop. I analyzed the incident with the RETRAN code in order to quantitatively evaluate the leakage reduction achieved when these betterments are adopted. This work is novel in that it establishes a representative method for analyzing PWR plant behaviour during the plant shutdown procedure, which conventional RETRAN transient analyses have rarely dealt with; the event time is also very long. To carry out this analysis successfully, I devised new models, such as a Residual Heat Removal System (RHR) model, and simplified parts of the conventional model. Based on the analysis results, I confirmed that the leakage can be reduced by about 30% by adopting these betterments. The Japan Atomic Power Company (JAPC) has since modified the operational procedure for reactor primary coolant leakage events to adopt these betterments. (author)

  15. A positional code and anisotropic forces control tissue remodeling in Drosophila

    Science.gov (United States)

    Zallen, Jennifer

    A major challenge in developmental biology is to understand how tissue-scale changes in organism structure arise from events that occur on a cellular and molecular level. We are using cell biological, biophysical, and quantitative live-embryo imaging approaches to understand how genes encode the forces that shape tissues, and to identify the mechanisms that modulate cell behavior in response to local forces. In many animals, the elongated head-to-tail body axis is achieved by rapid and coordinated movements of hundreds of cells. We found that in the fruit fly, these cell movements are regulated by subcellular asymmetries in the localization of proteins that generate contractile and adhesive forces between cells. Asymmetries in the force-generating machinery are in turn controlled by a positional code of spatial information provided by an ancient family of Toll-related receptors that are widely used for pathogen recognition by the innate immune system. I will describe how this spatial system systematically orients local cell movements and collective rosette-like clusters in the Drosophila embryo. Rosettes have now also been shown to shape the body axis in chicks, frogs, and mice, demonstrating that rosette behaviors are a general mechanism linking cellular asymmetry to tissue reorganization.

  16. Rod behaviour under base load, load follow and frequency control operation: CYRANO 2 code predictions versus experimental results

    International Nuclear Information System (INIS)

    Gautier, B.; Raybaud, A.

    1984-01-01

    French PWR reactors now operate routinely under load follow and frequency control. In order to demonstrate that these operating conditions do not increase the fuel failure rate, fuel rod behaviour calculations have been performed by E.D.F. with the CYRANO 2 code. In parallel with these theoretical calculations, code predictions have been compared to experimental results. The paper presents some of the comparisons performed on 17x17 fuel irradiated in FESSENHEIM 2 up to 30 GWd/tU under base load operation and in the CAP reactor under load follow and frequency control conditions. It is shown that the experimental results can be predicted with reasonable accuracy by the CYRANO 2 code. The experimental work was carried out under joint R and D programs by EDF, FRAGEMA, CEA and WESTINGHOUSE (the CAP program by the French partners only). (author)

  17. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  18. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.
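
    A small illustration of why interleaving enables this kind of collaborative decoding: codewords are transmitted column by column, so a channel burst lands on the same symbol positions of every codeword, and the error locations can be estimated jointly. The sketch below uses toy data and performs no actual Reed-Solomon decoding:

        N, c = 15, 4  # codeword length, interleaving depth
        codewords = [[(i + j) % 16 for j in range(N)] for i in range(c)]

        # Interleave: transmit column by column (symbol j of every codeword).
        stream = [codewords[i][j] for j in range(N) for i in range(c)]

        # A burst wipes out 8 consecutive symbols of the stream.
        for t in range(20, 28):
            stream[t] = 0xFF  # sentinel value outside the data alphabet (0..15)

        # De-interleave and list the corrupted positions of each codeword.
        hits = {i: [j for j in range(N) if stream[j * c + i] == 0xFF] for i in range(c)}
        print(hits)  # every codeword is hit in the same two positions (5 and 6)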

  19. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load-proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and random dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  20. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    International Nuclear Information System (INIS)

    Parker, S

    2015-01-01

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two-dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both the calculated process control limits and the TG-142 recommended specification limits. The data were tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data were normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements, as it did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and with the selected array calibration; 6 MeV was the least sensitive to array calibration selection, while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normal, that the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors.
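
    For reference, the control-chart and capability quantities this abstract relies on take only a few lines to compute. A minimal sketch using an individuals/moving-range chart (the QA readings and the ±2% specification limits are placeholders, not TG-142 values for any particular machine):

        import numpy as np

        x = np.array([0.1, -0.2, 0.3, 0.0, 0.1, -0.1, 0.2, 0.0, -0.3, 0.1])  # % energy deviation
        LSL, USL = -2.0, 2.0  # specification limits (placeholder tolerance)

        mr_bar = np.mean(np.abs(np.diff(x)))  # average moving range
        sigma = mr_bar / 1.128                # d2 = 1.128 for subgroups of size 2
        ucl, lcl = x.mean() + 3 * sigma, x.mean() - 3 * sigma  # process control limits

        cp = (USL - LSL) / (6 * sigma)                           # process capability
        cpk = min(USL - x.mean(), x.mean() - LSL) / (3 * sigma)  # process acceptability
        print(f"control limits: [{lcl:.2f}, {ucl:.2f}], Cp = {cp:.1f}, Cpk = {cpk:.1f}")
        print("out-of-control points:", np.where((x > ucl) | (x < lcl))[0])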