WorldWideScience

Sample records for bit error rate

  1. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    This paper demonstrates the feasibility of performing bit error rate testing at 100 Gbps. In particular, we show how Bit Error Rate Testing (BERT) can be performed over an aggregated 100G Attachment Unit Interface (CAUI) by encapsulating the test data in Ethernet frames at line speed. Our results show that framed bit error rate testing can be integrated into equipment that provides functionality besides the bit error rate tester.
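
    As an aside, the framed-BERT idea is easy to prototype in software: generate a pseudo-random bit sequence (PRBS), encapsulate it in frames, pass it through an error-injecting channel, and count mismatches. The Python sketch below is a minimal illustration of that flow; the PRBS-7 polynomial, the 8-bit header, and the error probability are illustrative assumptions, not details of the paper's FPGA implementation.

    import random

    def prbs7(nbits, state=0x7F):
        # PRBS-7 generator (x^7 + x^6 + 1), a common BERT test pattern.
        out = []
        for _ in range(nbits):
            newbit = ((state >> 6) ^ (state >> 5)) & 1
            state = ((state << 1) | newbit) & 0x7F
            out.append(newbit)
        return out

    def frame(payload, header=(1, 0, 1, 0, 1, 0, 1, 0)):
        # Encapsulate the test data in a toy frame: fixed header + payload.
        return list(header) + payload

    def channel(bits, p_err, rng):
        # Binary symmetric channel: flip each bit with probability p_err.
        return [b ^ (rng.random() < p_err) for b in bits]

    rng = random.Random(42)
    tx_payload = prbs7(100_000)
    rx_frame = channel(frame(tx_payload), p_err=1e-3, rng=rng)
    rx_payload = rx_frame[8:]  # strip the 8-bit header
    errors = sum(a != b for a, b in zip(tx_payload, rx_payload))
    print(f"measured BER = {errors / len(tx_payload):.2e}")  # close to 1e-3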

  2. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the regeneration ...

  3. Bit Error Rate Minimizing Channel Shortening Equalizers for Single Carrier Cyclic Prefixed Systems

    National Research Council Canada - National Science Library

    Martin, Richard K; Vanbleu, Koen; Ysebaert, Geert

    2007-01-01

    Previous work on channel shortening has largely been in the context of digital subscriber lines, a wireline system that allows bit allocation; it has thus focused on maximizing the bit rate for a given bit error rate (BER) ...

  4. Frame, bit and chip error rate evaluation for a DSSS communication system

    Directory of Open Access Journals (Sweden)

    F.R. Castillo–Soria

    2008-07-01

    The relation between chip, bit and frame error rates in the Additive White Gaussian Noise (AWGN) channel for a Direct Sequence Spread Spectrum (DSSS) system under Multiple Access Interference (MAI) conditions is evaluated. A simple error-correction code (ECC) is used for the Frame Error Rate (FER) evaluation. 64-bit (chip) Pseudo Noise (PN) sequences are employed for the spread spectrum transmission. An iterative Monte Carlo (stochastic) simulation is used to evaluate how many chip errors are introduced by channel effects and how they are related to bit errors. It can be observed how bit errors may eventually cause a frame error, i.e. a codec or communication error. These results are useful for academics, engineers and professionals alike.
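
    A back-of-the-envelope version of the chip-to-bit-to-frame error chain studied above can be run in a few lines of Python. The sketch below spreads each bit over a 64-chip PN sequence, adds AWGN, and counts chip, bit and frame errors; the frame size, the Eb/N0 value, and the single-user uncoded setting are simplifying assumptions (the paper additionally models MAI and an ECC).

    import numpy as np

    rng = np.random.default_rng(1)
    N_CHIPS = 64          # chips per bit, as in the paper's PN sequences
    BITS_PER_FRAME = 128  # assumed frame size
    N_FRAMES = 500
    EbN0_dB = 4.0

    # Chip-level noise: with unit-energy chips, Eb = N_CHIPS, so the
    # per-chip noise std follows from Eb/N0 = N_CHIPS / (2 * sigma**2).
    EbN0 = 10 ** (EbN0_dB / 10)
    sigma = np.sqrt(N_CHIPS / (2 * EbN0))

    pn = rng.choice([-1.0, 1.0], size=N_CHIPS)  # one PN sequence, single user

    chip_err = bit_err = frame_err = 0
    for _ in range(N_FRAMES):
        bits = rng.integers(0, 2, size=BITS_PER_FRAME)
        frame_bad = False
        for b in bits:
            tx = (2 * int(b) - 1) * pn                 # spread one bit
            rx = tx + sigma * rng.standard_normal(N_CHIPS)
            chip_err += np.count_nonzero(np.sign(rx) != np.sign(tx))
            b_hat = int(np.dot(rx, pn) > 0)            # despread and decide
            if b_hat != b:
                bit_err += 1
                frame_bad = True                       # no ECC modeled here
        frame_err += frame_bad

    total_bits = N_FRAMES * BITS_PER_FRAME
    print(f"chip error rate  = {chip_err / (total_bits * N_CHIPS):.3e}")
    print(f"bit error rate   = {bit_err / total_bits:.3e}")
    print(f"frame error rate = {frame_err / N_FRAMES:.3e}")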

  5. Analytical expression for the bit error rate of cascaded all-optical regenerators

    DEFF Research Database (Denmark)

    Mørk, Jesper; Öhman, Filip; Bischoff, S.

    2003-01-01

    We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.

  6. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has accordingly revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  7. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    Science.gov (United States)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in poor channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, which aims to reduce the bit error rate of 2DPSK signals under coherent demodulation. According to the theory of SR, a nonlinear receiver model is established, which is used to receive 2DPSK signals under small signal-to-noise ratio (SNR) circumstances (between -15 dB and 5 dB), and compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the nonlinear system model based on SR shows a significant decline compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
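
    The stochastic-resonance mechanism invoked above can be caricatured with the classic overdamped bistable system dx/dt = a*x - b*x^3 + s(t) + n(t), in which a well-chosen amount of noise helps a weak input drive hops between the two wells. The Euler-Maruyama sketch below, with a sinusoid standing in for the 2DPSK waveform and all parameters assumed, only illustrates the effect; it is not the authors' receiver model.

    import numpy as np

    # Overdamped bistable SR system: dx/dt = a*x - b*x**3 + s(t) + n(t)
    a, b = 1.0, 1.0          # double-well parameters (assumed)
    dt, n_steps = 1e-3, 200_000
    f0, amp = 0.01, 0.3      # weak periodic input, sub-threshold on its own
    noise_rms = 0.5          # tune: too little or too much noise degrades output

    rng = np.random.default_rng(0)
    t = np.arange(n_steps) * dt
    s = amp * np.sin(2 * np.pi * f0 * t)

    x = np.empty(n_steps)
    x[0] = -1.0              # start in the left well
    for k in range(n_steps - 1):
        noise = noise_rms * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = x[k] + (a * x[k] - b * x[k] ** 3 + s[k]) * dt + noise

    # With suitable noise, x hops between wells in sympathy with s(t),
    # concentrating output power at f0, the effect the paper exploits.
    spectrum = np.abs(np.fft.rfft(x)) / n_steps
    freqs = np.fft.rfftfreq(n_steps, dt)
    print(f"output power at f0: {spectrum[np.argmin(np.abs(freqs - f0))]:.4f}")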

  8. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2012-05-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  9. FPGA-based Bit-Error-Rate Tester for SEU-hardened Optical Links

    CERN Document Server

    Detraz, S; Moreira, P; Papadopoulos, S; Papakonstantinou, I; Seif El Nasr, S; Sigaud, C; Soos, C; Stejskal, P; Troska, J; Versmissen, H

    2009-01-01

    The next generation of optical links for future High-Energy Physics experiments will require components qualified for use in radiation-hard environments. To cope with radiation induced single-event upsets, the physical layer protocol will include Forward Error Correction (FEC). Bit-Error-Rate (BER) testing is a widely used method to characterize digital transmission systems. In order to measure the BER with and without the proposed FEC, simultaneously on several devices, a multi-channel BER tester has been developed. This paper describes the architecture of the tester, its implementation in a Xilinx Virtex-5 FPGA device and discusses the experimental results.

  10. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2014-04-01

    The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of known linear detectors such as channel inversion, maximal ratio combining, biased maximum likelihood, and minimum mean square error detectors. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.

  11. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over general Malaga turbulence channels. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  12. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

    Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis contains crosstalk calculation of the mutual field variance for different numbers of users. It is shown that the crosstalk noise depends strongly on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results of the power fluctuation at the received channel due to the data modulation at the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  13. SITE project. Phase 1: Continuous data bit-error-rate testing

    Science.gov (United States)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-09-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  14. Performance analysis for the bit-error rate of SAC-OCDMA systems

    Science.gov (United States)

    Feng, Gang; Cheng, Wenqing; Chen, Fujun

    2015-09-01

    Under low power, Gaussian statistics invoked through the central limit theorem are feasible for predicting the upper bound in a spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) system. However, this approach severely underestimates the bit-error rate (BER) of the system under the high-power assumption. Fortunately, the exact negative binomial (NB) model is a perfect replacement for the Gaussian model in prediction and evaluation. Based on NB statistics, a more accurate closed-form expression is analyzed and derived for the SAC-OCDMA system. The experiment shows that the obtained expression provides a more precise prediction of the BER performance under both the low- and high-power assumptions.
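
    The Gaussian-versus-negative-binomial distinction drawn above is easy to examine numerically: matched in mean and variance, the two models still place very different probability mass in the decision-error tail. The parameters and threshold below are illustrative assumptions, not the paper's system values.

    from scipy import stats

    # Photocount model for a "1" bit: negative binomial vs Gaussian with
    # matched mean and variance (illustrative high-power numbers).
    mean, var = 400.0, 1200.0          # high power: var > mean
    p = mean / var                     # scipy nbinom: var = mean / p
    r = mean * p / (1 - p)             # number-of-successes parameter

    nb = stats.nbinom(r, p)
    gauss = stats.norm(mean, var ** 0.5)

    threshold = 300                    # assumed decision threshold
    print(f"NB tail    P(X <= {threshold}) = {nb.cdf(threshold):.3e}")
    print(f"Gauss tail P(X <= {threshold}) = {gauss.cdf(threshold):.3e}")
    # The mismatch between these tails is what makes the Gaussian bound
    # unreliable for BER prediction in the high-power regime.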

  15. Bit Error Rate Performance of a MIMO-CDMA System Employing Parity-Bit-Selected Spreading in Frequency Nonselective Rayleigh Fading

    Directory of Open Access Journals (Sweden)

    Claude D'Amours

    2011-01-01

    We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10log(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and provides insight into why the different techniques provide improvement over conventional MIMO-CDMA systems.

  16. Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems

    Directory of Open Access Journals (Sweden)

    Syed Imtiaz Husain

    2009-01-01

    Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented through a Rake. The aim of capturing enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening or a time domain equalizer (TEQ) can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser and multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.

  17. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    Science.gov (United States)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10^-12) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10^-3 BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10^-3 BER.
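
    The BER dependence of the format-complexity penalty can be reproduced from the textbook Gray-coded approximation BER ≈ 2(L-1)/(L·log2 L)·Q(q): at the same target BER, each PAM-L eye may run at a slightly smaller Q than PAM-2 requires, which shaves the conventional 10log10(L-1) penalty. The sketch below follows that approximation rather than the paper's exact derivation.

    import numpy as np
    from scipy.special import erfcinv

    def qinv(ber):
        # Inverse of the Gaussian tail Q(x) = 0.5 * erfc(x / sqrt(2)).
        return np.sqrt(2.0) * erfcinv(2.0 * ber)

    def pam_penalty_db(L, ber):
        # Power penalty of PAM-L vs PAM-2 at a target BER, using the
        # Gray-coded approximation BER ~ 2(L-1)/(L*log2(L)) * Q(q).
        q_pam2 = qinv(ber)
        q_pamL = qinv(ber * L * np.log2(L) / (2 * (L - 1)))
        return 10 * np.log10((L - 1) * q_pamL / q_pam2)

    for L in (4, 8):
        conventional = 10 * np.log10(L - 1)
        for ber in (1e-12, 1e-3):
            print(f"PAM-{L} @ BER={ber:g}: "
                  f"{pam_penalty_db(L, ber):.2f} dB "
                  f"(conventional {conventional:.2f} dB)")

    At a BER of 1×10^-3 this reproduces reductions of roughly 0.1 dB for PAM-4 and 0.2-0.25 dB for PAM-8 relative to the conventional formula, in line with the figures quoted above, while at 1×10^-12 the correction nearly vanishes.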

  18. The effect of narrow-band digital processing and bit error rate on the intelligibility of ICAO spelling alphabet words

    Science.gov (United States)

    Schmidt-Nielsen, Astrid

    1987-08-01

    The recognition of ICAO spelling alphabet words (ALFA, BRAVO, CHARLIE, etc.) is compared with diagnostic rhyme test (DRT) scores for the same conditions. The voice conditions include unprocessed speech; speech processed through the DOD standard linear-predictive-coding algorithm operating at 2400 bit/s with random error rates of 0, 2, 5, 8, and 12 percent; and speech processed through an 800-bit/s pattern-matching algorithm. The results suggest that, with distinctive vocabularies, word intelligibility can be expected to remain high even when DRT scores fall into the poor range. However, once the DRT scores fall below 75 percent, the intelligibility can be expected to fall off rapidly; at DRT scores below 50, the recognition of a distinctive vocabulary should also fall below 50 percent.

  19. Comparison of the bit error rate of Reed-Solomon codes and Bose-Chaudhuri-Hocquenghem codes using 32-FSK modulation

    Directory of Open Access Journals (Sweden)

    Eva Yovita Dwi Utami

    2016-11-01

    Reed-Solomon (RS) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes are error-correcting codes belonging to the class of cyclic block codes. Error-correcting codes are needed in communication systems to reduce errors in the transmitted information. This paper presents the BER performance of communication systems using RS codes, BCH codes, and no coding, with 32-FSK modulation over Additive White Gaussian Noise (AWGN), Rayleigh and Rician channels. The error-reduction capability is measured by the resulting Bit Error Rate (BER). The results show that, as the SNR increases, the RS code lowers the BER more steeply than the system with the BCH code, whereas the BCH code is superior at low SNR, giving a better BER than the system with the RS code.

  20. Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite

    Directory of Open Access Journals (Sweden)

    Wahyu Pamungkas

    2010-04-01

    One problem causing a reduction of energy in satellite communication systems is the misalignment of the earth station antenna pointing to the satellite. Pointing error affects the quality of the information signal and the energy per bit received at the earth station. In this research, pointing error occurred only at the receiving (Rx) antenna, while the transmitting (Tx) antenna pointed precisely to the satellite. The research was conducted on two satellites, namely TELKOM-1 and TELKOM-2. First, a measurement was made by directing the Tx antenna precisely to the satellite, resulting in an antenna pattern shown on a spectrum analyzer. The spectrum analyzer output is drawn to scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Drifting from the precise pointing influenced the received link budget, as indicated by the antenna pattern, which shows the reduction of the received power level caused by pointing misalignment. In conclusion, increasing misalignment of pointing to the satellite reduces the received signal link budget parameters of the down-link traffic.

  1. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    International Nuclear Information System (INIS)

    Chau, H.F.

    2002-01-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 1/2 − √5/10 ≈ 27.6%, thereby making it the most error-resistant scheme known to date.

  2. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

    In this letter, a new detector is proposed for an amplify-and-forward (AF) relaying system when communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
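
    A generalized Gaussian kernel of the kind proposed in this letter can be written K(u) = β/(2αΓ(1/β))·exp(−|u/α|^β), which reduces to a Gaussian at β = 2 and approaches a uniform window as β grows. Below is a minimal kernel-density sketch along those lines; the window width and data are illustrative, not the letter's optimized choices.

    import numpy as np
    from scipy.special import gamma

    def gg_kernel(u, alpha=1.0, beta=2.0):
        # Generalized Gaussian kernel: Gaussian at beta=2, ~uniform for large beta.
        norm = beta / (2 * alpha * gamma(1.0 / beta))
        return norm * np.exp(-np.abs(u / alpha) ** beta)

    def kde(x_eval, samples, h, beta=2.0):
        # Kernel density estimate of a pdf from observed samples.
        u = (x_eval[:, None] - samples[None, :]) / h
        return gg_kernel(u, beta=beta).mean(axis=1) / h

    rng = np.random.default_rng(3)
    noise = rng.standard_normal(500)          # stand-in for post-relay samples
    xs = np.linspace(-4, 4, 9)
    for beta in (2.0, 8.0):                   # Gaussian vs near-uniform kernel
        pdf_hat = kde(xs, noise, h=0.3, beta=beta)
        print(f"beta={beta}:", np.round(pdf_hat, 3))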

  3. Low bit rate video coding

    African Journals Online (AJOL)

    eobe

    Variable length bit rate (VLBR) broadly encompasses video coding which mandates a temporal frequency of 10 frames per ...

  4. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Matteo Berioli

    2007-05-01

    The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER in the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.

  5. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Berioli Matteo

    2007-01-01

    The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER in the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
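
    The Markov-chain modeling of ModCod transitions mentioned in this record can be prototyped directly: count transitions in an observed ModCod index sequence, row-normalize into a transition matrix, and extract the stationary distribution. The three-ModCod sequence below is invented for illustration.

    import numpy as np

    # Observed ModCod indices over time (assumed three ModCods: 0, 1, 2).
    seq = [0, 0, 1, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 1, 0, 1, 1, 2, 1, 1]

    n = max(seq) + 1
    counts = np.zeros((n, n))
    for a, b in zip(seq, seq[1:]):            # count observed transitions
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)   # row-normalize

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    print("transition matrix:\n", np.round(P, 2))
    print("stationary ModCod usage:", np.round(pi, 3))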

  6. Bit error rate estimation for galvanic-type intra-body communication using experimental eye-diagram and jitter characteristics.

    Science.gov (United States)

    Li, Jia Wen; Chen, Xi Mei; Pun, Sio Hang; Mak, Peng Un; Gao, Yue Ming; Vai, Mang I; Du, Min

    2013-01-01

    Bit error rate (BER), which indicates the reliability of a communication channel, is one of the most important values in all kinds of communication systems, including intra-body communication (IBC). In order to learn more about the IBC channel, this paper presents a new method of BER estimation for galvanic-type IBC using experimental eye-diagram and jitter characteristics. To lay the foundation for our methodology, the fundamental relationships between eye diagram, jitter and BER are first reviewed. Then experiments based on human lower-arm IBC are carried out using a quadrature phase shift keying (QPSK) modulation scheme and a 500 kHz carrier frequency. In our IBC experiments, the symbol rate ranges from 10 ksps to 100 ksps, with two transmitted power settings, 0 dBm and -5 dBm. Finally, the BER results were obtained from the experimental data through the relationships among eye diagram, jitter and BER. These results are then compared with theoretical values and show good agreement, especially when the SNR is between 6 dB and 11 dB. Additionally, these results demonstrate that treating the noise of the galvanic-type IBC channel as Additive White Gaussian Noise (AWGN), as assumed in previous studies, is applicable.
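
    The amplitude-domain part of the eye-diagram-to-BER relationship reviewed in this paper is the standard Gaussian-noise estimate Q = (μ1 − μ0)/(σ1 + σ0) with BER = ½·erfc(Q/√2). A minimal sketch with made-up eye statistics (not the measured IBC values):

    import numpy as np
    from scipy.special import erfc

    def ber_from_eye(mu1, sigma1, mu0, sigma0):
        # Classic estimate: Q factor from eye-diagram level statistics,
        # then BER = 0.5 * erfc(Q / sqrt(2)) under Gaussian noise.
        q = (mu1 - mu0) / (sigma1 + sigma0)
        return q, 0.5 * erfc(q / np.sqrt(2))

    # Illustrative eye-diagram statistics (hypothetical values).
    q, ber = ber_from_eye(mu1=1.0, sigma1=0.12, mu0=0.0, sigma0=0.10)
    print(f"Q = {q:.2f}, estimated BER = {ber:.2e}")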

  7. A closed-form solution of the bit-error rate for optical wireless communication systems over atmospheric turbulence channels.

    Science.gov (United States)

    Dang, Anhong

    2011-02-14

    Atmospheric turbulence is a major limiting factor in an optical wireless communication (OWC) link. The turbulence distorts the phase of the propagating optical fields and limits the focusing capabilities of the telescope antennas. Hence, a detector array is required to capture the widespread signal energy in the focal plane. This paper addresses the bit-error rate (BER) performance of OWC systems employing a detector array in the presence of turbulence. Here, considering the gamma-gamma turbulence model, we propose a blind estimation scheme that provides a closed-form expression of the BER by exploiting the information of the data output of each pixel, based on the singular value decomposition of the sample matrix of the received signals after the code-matched filter. Instead of assuming spatially white additive noise, we consider the case where the noise spatial covariance matrix is unknown. The new method can be applied to either the single-transmitter or the multi-transmitter case. Simulation results for different Rytov variances are presented, which conform closely to the results of the proposed model.

  8. Novel ultra-wideband photonic signal generation and transmission featuring digital signal processing bit error rate measurements

    DEFF Research Database (Denmark)

    Gibbon, Timothy Braidwood; Yu, Xianbin; Tafur Monroy, Idelfonso

    2009-01-01

    We propose the novel generation of photonic ultra-wideband signals using an uncooled DFB laser. For the first time we experimentally demonstrate bit-for-bit DSP BER measurements for transmission of a 781.25 Mbit/s photonic UWB signal.

  9. Bit-error-rate performance analysis of self-heterodyne detected radio-over-fiber links using phase and intensity modulation

    DEFF Research Database (Denmark)

    Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso

    2010-01-01

    We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature-biased intensity modulation (IM), in terms of bit-error rate (BER) and optical signal-to-noise ratio (OSNR). In both links, self-heterodyne receivers perform down-conversion of the radio frequency (RF) subcarrier signal. A theoretical model including noise analysis is constructed to calculate the Q factor and estimate the BER performance. Furthermore, we experimentally validate our predictions from the theoretical modeling. Both the experimental ...

  10. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    Science.gov (United States)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, the binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For independently and identically distributed and independently and non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function for maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques are derived, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.

  11. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak-to-moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.

  12. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

    In this paper, the performance of underwater wireless optical communication (UWOC) links, which is made up of the partially coherent flat-topped (PCFT) array laser beam, has been investigated in detail. Providing high power, array laser beams are employed to increase the range of UWOC links. For characterization of the effects of oceanic turbulence on the propagation behavior of the considered beam, using the extended Huygens-Fresnel principle, an analytical expression for cross-spectral density matrix elements and a semi-analytical one for fourth-order statistical moment have been derived. Then, based on these expressions, the on-axis scintillation index of the mentioned beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of some source factors and turbulent ocean parameters on the propagation behavior of the scintillation index and the BER have been studied in detail. The results of this investigation indicate that in comparison with the Gaussian array beam, when the source size of beamlets is larger than the first Fresnel zone, the PCFT array laser beam with the higher flatness order is found to have a lower scintillation index and hence lower BER. Specifically, in the sense of scintillation index reduction, using the PCFT array laser beams has a considerable benefit in comparison with the single PCFT or Gaussian laser beams and also Gaussian array beams. All the simulation results of this paper have been shown by graphs and they have been analyzed in detail.

  13. Ultra low bit-rate speech coding

    CERN Document Server

    Ramasubramanian, V

    2015-01-01

    "Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit-rates of 1 Kbits/sec and less, particularly at the lower ends of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit-rates to be viable and provide a comprehensive overview of various techniques and systems in literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.

  14. Very low bit rate video coding of moving targets

    Science.gov (United States)

    Garcia, Jose A.; Rodriguez-Sanchez, Rosa; Fdez-Valdivia, Joaquin; Martinez-Baena, Javier

    2006-03-01

    We propose a video coding scheme to improve moving-target detection at very low bit rate, based on two key features: energy-based quantizer formation, and optimized interquantizer and intraquantizer prioritization. Rational Embedded Wavelet Video Coding (REVIC) is a fully implemented software video codec of low complexity and without motion compensated filtering to provide additional simplicity, adaptivity, and error resilience. It is shown to be quite effective in video coding of moving targets (e.g., military vehicles) at very low bit rates, while retaining the attributes of complete embeddedness for progressive transmission and scalability by fidelity and resolution. The proposed coding technique improves the explanatory power of decoded sequences (to achieve maximum target detection versus bit-rate performance) for a video compression system. The explanatory power of compressed sequences is important in surveillance applications, where trained video analysts may utilize decoded sequences to support decision processes in strategic, operational, and tactical tasks.

  15. Error Correcting Coding of Telemetry Information for Channel with Random Bit Inversions and Deletions

    Directory of Open Access Journals (Sweden)

    M. A. Elshafey

    2014-01-01

    This paper presents a method of error-correcting coding of digital information. A feature of this method is its treatment of bit inversions and bit deletions caused by loss of synchronization between the receiving and transmitting devices or by other factors. The article gives a brief overview of the features, characteristics and modern construction methods of LDPC and convolutional codes, and considers a general model of the communication channel that takes into account the probabilities of bit inversion, deletion and insertion. The proposed coding scheme is based on a combination of LDPC coding and convolutional coding. A comparative analysis of the proposed combined coding scheme and a coding scheme containing only an LDPC coder is performed; both schemes have the same coding rate. Experiments were carried out on two models of communication channels at different probability values of bit inversion and deletion. The first model allows only random bit inversion, while the other allows both random bit inversion and deletion. The experiments also analyze the decoding delay of the convolutional coder, and the results demonstrate the ability of the proposed coding scheme to improve the efficiency of data recovery over a communication channel with noise that causes random bit inversion and deletion, without decreasing the coding rate.
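
    The channel model underlying this work, independent random bit inversions and deletions, is simple to emulate; the sketch below applies assumed flip and deletion probabilities to a bit stream. A single deletion desynchronizes all subsequent positions, which is why plain block FEC fails on its own and the paper layers convolutional structure on top of the LDPC code.

    import random

    def inversion_deletion_channel(bits, p_flip, p_del, rng):
        # Channel with independent random bit inversions and deletions,
        # in the spirit of the paper's general channel model.
        out = []
        for b in bits:
            if rng.random() < p_del:
                continue              # deletion: every later bit shifts position
            out.append(b ^ (rng.random() < p_flip))
        return out

    rng = random.Random(7)
    tx = [rng.randint(0, 1) for _ in range(10_000)]
    rx = inversion_deletion_channel(tx, p_flip=0.01, p_del=0.005, rng=rng)
    # After the first deletion, positional comparison of tx and rx is
    # meaningless, so a decoder must recover synchronization as well as data.
    print(f"sent {len(tx)} bits, received {len(rx)} bits")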

  16. Digital sound: Subjective tests on low bit-rate codecs

    Science.gov (United States)

    Gilchrist, N. H. C.

    At the beginning of 1990, BBC Research Department tested four experimental high-quality low bit-rate audio codecs which were under development as part of the Eureka 147 Digital Audio Broadcasting project. The work involved preliminary listening tests to identify critical test material, followed by formal subjective tests to determine audio quality and error performance. The listeners could detect some loss of audio quality with all of the codecs using the most critical material. There were also indications that one of the codecs did not always reproduce the phantom sound sources in their correct position.

  17. Digital Signal Processing For Low Bit Rate TV Image Codecs

    Science.gov (United States)

    Rao, K. R.

    1987-06-01

    In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real-time full motion color video are under various stages of development. Some companies have already brought the codecs into the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freeze-frame, split video, text overlay, etc. To transmit the original color video on a 56 KBPS network requires bit rate reduction of the order of 1400:1. Such a large-scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.

  18. A fixed/variable bit-rate data compression architecture

    Science.gov (United States)

    Zweigle, Gregary C.; Venbrux, Jack; Yeh, Pen-Shu

    1993-01-01

    A VLSI architecture for an adaptive data compression encoder capable of sustaining fixed or variable bit-rate output has been developed. There are three modes of operation: lossless with variable bit-rate, lossy with fixed bit-rate and lossy with variable bit-rate. For lossless encoding, the implementation is identical to the USES chip designed for Landsat 7. Obtaining a fixed bit-rate is achieved with a lossy DPCM algorithm using adaptive, nonuniform scalar quantization. In lossy mode, variable bit-rate coding uses the lossless sections of the encoder for post-DPCM entropy coding. The encoder shows excellent compression performance in comparison to other current data compression techniques. No external tables or memory are required for operation.

  19. Improved Bit Rate Control for Real-Time MPEG Watermarking

    Directory of Open Access Journals (Sweden)

    Pranata Sugiri

    2004-01-01

    The alteration of a compressed video bitstream due to the embedding of a digital watermark tends to produce unpredictable video bit rate variations, which may in turn lead to video playback buffer overflow/underflow or transmission bandwidth violation problems. This paper presents a novel bit rate control technique for real-time MPEG watermarking applications. In our experiments, spread spectrum watermarks are embedded in the quantized DCT domain without requantization and motion re-estimation to achieve fast watermarking. The proposed bit rate control scheme evaluates the combined bit lengths of a set of multiple watermarked VLC codewords, and successively replaces the watermarked VLC codewords having the largest increase in bit length with their corresponding unmarked VLC codewords until a target bit length is achieved. The proposed method offers flexibility and scalability, which are neglected by similar works reported in the literature. Experimental results show that the proposed bit rate control scheme is effective in meeting the bit rate targets and capable of improving the watermark detection robustness for different video contents compressed at different bit rates.
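
    The control loop described above, successively reverting the watermarked VLC codewords with the largest bit-length increase until the target is met, amounts to a greedy selection. A sketch with hypothetical codeword-length pairs (not values from the paper):

    def rate_control(pairs, target_len):
        # Greedily revert watermarked VLC codewords to their unmarked
        # versions, largest bit-length increase first, until the total
        # bit length fits the target.
        # pairs: list of (marked_len, unmarked_len) per codeword.
        keep = [True] * len(pairs)
        total = sum(m for m, _ in pairs)
        order = sorted(range(len(pairs)),
                       key=lambda i: pairs[i][0] - pairs[i][1], reverse=True)
        for i in order:
            if total <= target_len:
                break
            total -= pairs[i][0] - pairs[i][1]   # revert this codeword
            keep[i] = False
        return keep, total

    pairs = [(14, 12), (9, 9), (11, 8), (7, 7), (13, 10)]  # illustrative lengths
    keep, total = rate_control(pairs, target_len=48)
    print(keep, total)   # which watermarked codewords survive, final length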

  20. Circuit and interconnect design for high bit-rate applications

    NARCIS (Netherlands)

    Veenstra, H.

    2006-01-01

    This thesis presents circuit and interconnect design techniques and design flows that address the most difficult and ill-defined aspects of the design of ICs for high bit-rate applications. Bottlenecks in interconnect design, circuit design and on-chip signal distribution for high bit-rate applications ...

  1. Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology

    Directory of Open Access Journals (Sweden)

    Qiuqiu WEN

    2017-06-01

    A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, a real-time two-channel beam pointing error (BPE) compensation method of PARS for LOS rate extraction is designed. The discrete beam motion principle of PARS is analyzed, and the mathematical model of beam scanning control is established. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of the BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.

  2. Optimal multitone bit allocation for fixed-rate video transmission over ADSL

    Science.gov (United States)

    Antonini, Marc; Moureaux, Jean-Marie; Lecuire, Vincent

    2002-01-01

    In this paper we propose a novel approach for the bit allocation performed in an ADSL modulator. This new method is based on the observation that the transmission speed using ADSL strongly depends on the distance between the central office and the subscriber's side, and does not permit real-time transmission of high bit-rate video over long distances. The algorithm we develop takes into account the characteristics of a video sequence and distributes the channel error according to visual sensitivity. This method involves a variable transmission bit error rate.

  3. Low Bit Rate Motion Video Coder/Decoder For Teleconferencing

    Science.gov (United States)

    Koga, T.; Niwa, K.; Iijima, Y.; Iinuma, K.

    1987-07-01

    This paper describes motion video compression and transmission for teleconferencing at a subprimary rate, i.e., at 384 kbits/s including the audio signal, through the integrated services digital network (ISDN) H0 channel. A subprimary-rate video coder/decoder (codec), NETEC-XV, is available commercially that can operate at any bit rate (in multiples of 64 kbits/s) from 384 to 2048 kbits/s. In this paper, new algorithms are described that have been very useful in lowering the bit rate to 384 kbits/s. These algorithms are (1) separation of moving and still parts, followed by encoding of the two parts using different sets of parameters, and (2) scene change detection and its application to encoding parameter control. According to a brief subjective evaluation, the codec provides good picture quality even at a transmission bit rate of 384 kbits/s.

  4. Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor

    2005-01-01

    We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.

  5. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio coding optimization problem is a ...

  6. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  7. Comodulation masking release in bit-rate reduction systems

    DEFF Research Database (Denmark)

    Vestergaard, Martin David; Rasmussen, Karsten Bo; Poulsen, Torben

    1999-01-01

    It has been suggested that the level dependence of the upper masking slope be utilized in perceptual models in bit-rate reduction systems. However, comodulation masking release (CMR) phenomena lead to a reduction of the masking effect when a masker and a probe signal are amplitude modulated with the same frequency. In bit-rate reduction systems the masker would be the audio signal and the probe signal would represent the quantization noise. Masking curves have been determined for sinusoids and 1-Bark-wide noise maskers in order to investigate the risk of CMR when quantizing depths are fixed in accordance with psycho-acoustical principles. Masker frequencies of 500 Hz, 1 kHz, and 2 kHz have been investigated, and the masking of pure tone probes has been determined in the first four 1/3 octaves above the masker. Modulation frequencies between 6 and 20 Hz were used with a modulation depth of 0.75.

  8. Comodulation masking release in bit-rate reduction systems

    DEFF Research Database (Denmark)

    Vestergaard, Martin D.; Rasmussen, Karsten Bo; Poulsen, Torben

    1999-01-01

    It has been suggested that the level dependence of the upper masking slope be utilised in perceptual models in bit-rate reduction systems. However, comodulation masking release (CMR) phenomena lead to a reduction of the masking effect when a masker and a probe signal are amplitude modulated with the same frequency. In bit-rate reduction systems the masker would be the audio signal and the probe signal would represent the quantization noise. Masking curves have been determined for sinusoids and 1-Bark-wide noise maskers in order to investigate the risk of CMR, when quantizing depths are fixed in accordance with psycho-acoustical principles. Masker frequencies of 500 Hz, 1 kHz and 2 kHz have been investigated, and the masking of pure tone probes has been determined in the first four 1/3 octaves above the masker. Modulation frequencies between 6 Hz and 20 Hz were used with a modulation depth of 0.75. CMR of up ...

  9. Narrowband (LPC-10) Vocoder Performance under Combined Effects of Random Bit Errors and Jet Aircraft Cabin Noise.

    Science.gov (United States)

    1983-12-01

    In both conditions, the feature "sibilation" obtained the highest scores, and the features "graveness" and "sustention" received the poorest scores, but were under much greater impairment in the noise environment. Details of the variations in scores for sustention are shown in Figure 34 (comparison of regression lines estimating scores for the sustention intelligibility feature vs bit error rate for the DOD LPC-10 vocoder), and, for ...

  10. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
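
    Stated in symbols, the high-SNR approximation quoted above is

        P_b \approx \frac{d_H}{N}\, P_s ,

    so a hypothetical code with N = 63 and d_H = 10 would give P_b ≈ 0.16·P_s; these parameters are chosen only to show the scaling, not taken from the paper.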

  11. Biometric Quantization through Detection Rate Optimized Bit Allocation

    Directory of Open Access Journals (Sweden)

    C. Chen

    2009-01-01

    Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has been focusing on the design of optimal quantization and coding for each single feature component, yet the binary string—concatenation of all coded feature components—is not optimal. In this paper, we present a detection rate optimized bit allocation (DROBA) principle, which assigns more bits to discriminative features and fewer bits to nondiscriminative features. We further propose a dynamic programming (DP) approach and a greedy search (GS) approach to achieve DROBA. Experiments of DROBA on the FVC2000 fingerprint database and the FRGC face database show good performances. As a universal method, DROBA is applicable to arbitrary biometric modalities, such as fingerprint texture, iris, signature, and face. DROBA will bring significant benefits not only to the template protection systems but also to the systems with fast matching requirements or constrained storage capability.

  12. Biometric Quantization through Detection Rate Optimized Bit Allocation

    Science.gov (United States)

    Chen, C.; Veldhuis, R. N. J.; Kevenaar, T. A. M.; Akkermans, A. H. M.

    2009-12-01

    Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has been focusing on the design of optimal quantization and coding for each single feature component, yet the binary string—concatenation of all coded feature components—is not optimal. In this paper, we present a detection rate optimized bit allocation (DROBA) principle, which assigns more bits to discriminative features and fewer bits to nondiscriminative features. We further propose a dynamic programming (DP) approach and a greedy search (GS) approach to achieve DROBA. Experiments of DROBA on the FVC2000 fingerprint database and the FRGC face database show good performances. As a universal method, DROBA is applicable to arbitrary biometric modalities, such as fingerprint texture, iris, signature, and face. DROBA will bring significant benefits not only to the template protection systems but also to the systems with fast matching requirements or constrained storage capability.
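
    The greedy-search (GS) flavor of DROBA described in these two records can be sketched compactly: model the overall detection rate as the product of per-feature detection rates, then repeatedly grant one more bit to the feature whose rate is cheapest to spend, i.e. whose multiplicative loss is smallest. The per-feature rate tables below are invented for illustration.

    # detection_rate[i][b]: detection rate of feature i quantized with b bits
    # (decreasing in b; a discriminative feature decays slowly). Invented values.
    detection_rate = [
        [1.00, 0.98, 0.90, 0.70],   # discriminative: extra bits are cheap
        [1.00, 0.85, 0.55, 0.30],   # weak: extra bits cost detection rate fast
        [1.00, 0.95, 0.80, 0.50],
    ]

    def droba_greedy(rates, budget):
        # Overall detection rate = product over features; add the bit with
        # the smallest multiplicative loss at each step.
        alloc = [0] * len(rates)
        for _ in range(budget):
            best, best_ratio = None, -1.0
            for i, r in enumerate(rates):
                if alloc[i] + 1 < len(r):
                    ratio = r[alloc[i] + 1] / r[alloc[i]]  # loss of one more bit
                    if ratio > best_ratio:
                        best, best_ratio = i, ratio
            if best is None:
                break
            alloc[best] += 1
        return alloc

    print(droba_greedy(detection_rate, budget=5))   # -> [2, 1, 2]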

  13. Influence of transmission bit rate on performance of optical fibre communication systems with direct modulation of laser diodes

    International Nuclear Information System (INIS)

    Ahmed, Moustafa F

    2009-01-01

    This paper reports on the influence of the transmission bit rate on the performance of optical fibre communication systems employing laser diodes subjected to high-speed direct modulation. The performance is evaluated in terms of the bit error rate (BER) and the power penalty associated with increasing the transmission bit rate while keeping the transmission distance fixed. The study is based on numerical analysis of the stochastic rate equations of the laser diode and takes into account noise mechanisms in the receiver. The correlation between the BER and the Q-parameter of the received signal is presented. The relative contributions of the transmitter noise and the circuit and shot noises of the receiver to the BER are quantified as functions of the transmission bit rate. The results show that the power penalty at BER = 10^-9 required to keep the transmission distance increases moderately with the increase in bit rate near 1 Gbps and at high bias currents. In this regime, the shot noise is the main contributor to the BER. At higher bit rates and lower bias currents, the power penalty increases remarkably, mainly due to laser noise induced by the pseudorandom bit-pattern effect.

  14. Optical Switching and Bit Rates of 40 Gbit/s and above

    DEFF Research Database (Denmark)

    Ackaert, A.; Demester, P.; O'Mahony, M.

    2003-01-01

    Optical switching in WDM networks introduces additional aspects to the choice of single-channel bit rates compared to WDM transmission systems. The mutual impact of optical switching and bit rates of 40 Gbps and above is discussed.

  15. A Novel Rate Control Scheme for Constant Bit Rate Video Streaming

    Directory of Open Access Journals (Sweden)

    Venkata Phani Kumar M

    2015-08-01

    In this paper, a novel rate control mechanism is proposed for constant bit rate video streaming. The initial quantization parameter used for encoding a video sequence is determined using the average spatio-temporal complexity of the sequence, its resolution and the target bit rate. Simple linear estimation models are then used to predict the number of bits that would be necessary to encode a frame for a given complexity and quantization parameter. The experimental results demonstrate that our proposed rate control mechanism significantly outperforms the existing rate control scheme in the Joint Model (JM) reference software in terms of Peak Signal to Noise Ratio (PSNR) and consistent perceptual visual quality while achieving the target bit rate. Furthermore, the proposed scheme is validated through implementation on a miniature test-bed.
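
    A toy version of the two ingredients described above, an initial quantization parameter derived from complexity and target rate plus a simple linear bits-prediction model updated per frame, might look as follows; the model form and all constants are assumptions, not the paper's.

    def initial_qp(complexity, bits_per_frame):
        # Heuristic starting QP from spatio-temporal complexity and bit budget.
        qp = 30 + 6 * (complexity / bits_per_frame - 1.0)
        return max(10, min(50, round(qp)))

    class LinearBitModel:
        # Assumed linear estimation model: bits ~ a * complexity / qp.
        def __init__(self, a=1.0):
            self.a = a
        def predict(self, complexity, qp):
            return self.a * complexity / qp
        def update(self, complexity, qp, actual_bits, lr=0.3):
            # Nudge the slope toward the value explaining the last frame.
            self.a += lr * (actual_bits * qp / complexity - self.a)

    model = LinearBitModel()
    target = 40_000                      # assumed per-frame bit budget (CBR)
    qp = initial_qp(complexity=60_000, bits_per_frame=target)
    for complexity, actual in [(60_000, 47_000), (58_000, 41_000), (62_000, 39_500)]:
        model.update(complexity, qp, actual)
        # Choose the next QP so the model predicts the target budget.
        qp = max(10, min(50, round(model.a * complexity / target)))
        print(f"updated slope a={model.a:.2f}, next QP={qp}")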

  16. Detecting bit-flip errors in a logical qubit using stabilizer measurements

    Science.gov (United States)

    Ristè, D.; Poletto, S.; Huang, M.-Z.; Bruno, A.; Vesterinen, V.; Saira, O.-P.; DiCarlo, L.

    2015-01-01

    Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements. PMID:25923318
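
    A classical caricature of the three-qubit bit-flip code makes the stabilizer idea concrete: the two parity checks Z1Z2 and Z2Z3 locate any single flipped qubit without ever reading the encoded value. The sketch below simulates only the classical error and syndrome bookkeeping, not the quantum hardware.

    import random

    def parities(q):
        # The two stabilizer outcomes of the 3-bit repetition code: Z1Z2, Z2Z3.
        return (q[0] ^ q[1], q[1] ^ q[2])

    # Syndrome table: which qubit flipped, given the two parity outcomes.
    SYNDROME = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

    rng = random.Random(5)
    logical = rng.randint(0, 1)
    qubits = [logical] * 3                    # encode: 000 or 111

    flip = rng.randrange(3)                   # inject one physical bit-flip
    qubits[flip] ^= 1

    syndrome = parities(qubits)               # "measure" the stabilizers
    suspect = SYNDROME[syndrome]
    if suspect is not None:
        qubits[suspect] ^= 1                  # correct the located flip

    assert qubits == [logical] * 3            # logical information survives
    print(f"flipped qubit {flip}, syndrome {syndrome}, corrected")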

  17. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  18. Video Synchronization With Bit-Rate Signals and Correntropy Function

    Directory of Open Access Journals (Sweden)

    Igor Pereira

    2017-09-01

    Full Text Available We propose an approach for the synchronization of video streams using correntropy. Essentially, the time offset is calculated on the basis of the instantaneous transfer rates of the video streams that are extracted in the form of a univariate signal known as variable bit-rate (VBR. The state-of-the-art approach uses a window segmentation strategy that is based on consensual zero-mean normalized cross-correlation (ZNCC. This strategy has an elevated computational complexity, making its application to synchronizing online data streaming difficult. Hence, our proposal uses a different window strategy that, together with the correntropy function, allows the synchronization to be performed for online applications. This provides equivalent synchronization scores with a rapid offset determination as the streams come into the system. The efficiency of our approach has been verified through experiments that demonstrate its viability with values that are as precise as those obtained by ZNCC. The proposed approach scored 81 % in time reference classification against the equivalent 81 % of the state-of-the-art approach, requiring much less computational power.
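
    For readers unfamiliar with the correntropy function used here, a sample estimator with a Gaussian kernel, and the sliding-offset search it enables, can be sketched as follows; the kernel width and the windowing are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def correntropy(x, y, sigma=1.0):
        """Sample correntropy V(x, y) = E[k_sigma(x - y)], Gaussian kernel."""
        d = np.asarray(x, float) - np.asarray(y, float)
        return float(np.mean(np.exp(-d ** 2 / (2 * sigma ** 2))))

    def estimate_offset(vbr_a, vbr_b, max_lag, sigma=1.0):
        """Slide vbr_b along vbr_a; the correntropy peak marks the time
        offset (vbr_a is assumed to cover len(vbr_b) + max_lag samples)."""
        scores = [correntropy(vbr_a[lag:lag + len(vbr_b)], vbr_b, sigma)
                  for lag in range(max_lag + 1)]
        return int(np.argmax(scores))
    ```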

  19. Video Synchronization With Bit-Rate Signals and Correntropy Function.

    Science.gov (United States)

    Pereira, Igor; Silveira, Luiz F; Gonçalves, Luiz

    2017-09-04

    We propose an approach for the synchronization of video streams using correntropy. Essentially, the time offset is calculated on the basis of the instantaneous transfer rates of the video streams that are extracted in the form of a univariate signal known as variable bit-rate (VBR). The state-of-the-art approach uses a window segmentation strategy that is based on consensual zero-mean normalized cross-correlation (ZNCC). This strategy has an elevated computational complexity, making its application to synchronizing online data streaming difficult. Hence, our proposal uses a different window strategy that, together with the correntropy function, allows the synchronization to be performed for online applications. This provides equivalent synchronization scores with a rapid offset determination as the streams come into the system. The efficiency of our approach has been verified through experiments that demonstrate its viability with values that are as precise as those obtained by ZNCC. The proposed approach scored 81 % in time reference classification against the equivalent 81 % of the state-of-the-art approach, requiring much less computational power.

  20. 50 nm AlxOy resistive random access memory array program bit error reduction and high temperature operation

    Science.gov (United States)

    Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2014-01-01

    In order to decrease program bit error rate (BER) of array-level operation in AlxOy resistive random access memory (ReRAM), program BERs are compared by using 4 × 4 basic set and reset with verify methods on multiple 1024-bit-pages in 50 nm, mega-bit class ReRAM arrays. Further, by using an optimized reset method, 8.5% total BER reduction is obtained after 104 write cycles due to avoiding under-reset or weak reset and ameliorating over-reset caused wear-out. Then, under-set and over-set are analyzed by tuning the set word line voltage (VWL) of ±0.1 V. Moderate set current shows the best total BER. Finally, 2000 write cycles are applied at 125 and 25 °C, respectively. Reset BER increases 28.5% at 125 °C whereas set BER has little difference, by using the optimized reset method. By applying write cycles over a 25 to 125 to 25 °C temperature variation, immediate reset BER change can be found after the temperature transition.

  1. Enhanced bit rate-distance product impulse radio ultra-wideband over fiber link

    DEFF Research Database (Denmark)

    Rodes Lopez, Roberto; Jensen, Jesper Bevensee; Caballero Jambrina, Antonio

    2010-01-01

    We report on a record distance and bit-rate wireless impulse radio (IR) ultra-wideband (UWB) link with combined transmission over a 20 km long fiber link. We are able to improve the compliance with the regulated frequency emission mask and achieve bit rate-distance products as high as 16 Gbit/s·m....

  2. Application of time-hopping UWB range-bit rate performance in the UWB sensor networks

    NARCIS (Netherlands)

    Nascimento, J.R.V. do; Nikookar, H.

    2008-01-01

    In this paper, the achievable range-bit rate performance is evaluated for Time-Hopping (TH) UWB networks complying with the FCC outdoor emission limits in the presence of Multiple Access Interference (MAI). Application of TH-UWB range-bit rate performance is presented for UWB sensor networks.

  3. Up to 20 Gbit/s bit-rate transparent integrated interferometric wavelength converter

    DEFF Research Database (Denmark)

    Jørgensen, Carsten; Danielsen, Søren Lykke; Hansen, Peter Bukhave

    1996-01-01

    We present a compact and optimised multiquantum-well based, integrated all-active Michelson interferometer for 20 Gbit/s optical wavelength conversion. Bit-rate transparent operation is demonstrated with a conversion penalty well below 0.5 dB at bit-rates ranging from 622 Mbit/s to 20 Gbit/s....

  4. Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling

    Directory of Open Access Journals (Sweden)

    Ertürk Sarp

    2007-01-01

    Full Text Available This paper presents warped discrete cosine transform (WDCT)-based low bit-rate block coding using image downsampling. While WDCT aims to improve the performance of conventional DCT by frequency warping, the WDCT has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding, followed by upsampling after the decoding process, has been proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that a superior performance can be achieved if WDCT is used in conjunction with image downsampling-based block coding for low bit-rate applications.

  5. On Bit Error Probability and Power Optimization in Multihop Millimeter Wave Relay Systems

    KAUST Repository

    Chelli, Ali

    2018-01-15

    5G networks are expected to provide gigabit data rates to users via millimeter-wave (mmWave) communication technology. One of the major problems faced by mmWaves is that they cannot penetrate buildings. In this paper, we utilize multihop relaying to overcome the signal blockage problem in the mmWave band. The multihop relay network comprises a source device, several relay devices and a destination device and uses device-to-device communication. Relay devices redirect the source signal to avoid the obstacles existing in the propagation environment. Each device amplifies and forwards the signal to the next device, such that a multihop link ensures the connectivity between the source device and the destination device. We consider that the relay devices and the destination device are affected by external interference and investigate the bit error probability (BEP) of this multihop mmWave system. Note that the study of the BEP allows quantifying the quality of communication and identifying the impact of different parameters on the system reliability. In this way, the system parameters, such as the powers allocated to different devices, can be tuned to maximize the link reliability. We derive exact expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM) and M-ary phase-shift keying (M-PSK) in terms of multivariate Meijer's G-function. Due to the complicated expression of the exact BEP, a tight lower-bound expression for the BEP is derived using a novel Mellin-approach. Moreover, an asymptotic expression for the BEP at the high-SIR regime is derived and used to determine the diversity and the coding gain of the system. Additionally, we optimize the power allocation at different devices subject to a sum power constraint such that the BEP is minimized. Our analysis reveals that optimal power allocation allows achieving more than 3 dB gain compared to equal power allocation. This research work can serve as a framework for designing and optimizing mmWave multihop
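
    The closed-form analysis above relies on heavy Meijer-G machinery; as a quick plausibility check, one can Monte Carlo the end-to-end bit errors of an amplify-and-forward chain. The sketch below assumes BPSK, coherent detection, i.i.d. Rayleigh hops and a crude average-power relay gain, which is far simpler than the paper's interference-limited M-QAM/M-PSK setting.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def multihop_bpsk_ber(n_hops=3, snr_db=15.0, n_bits=200_000):
        """Monte Carlo BER of BPSK over a chain of amplify-and-forward hops
        with i.i.d. Rayleigh fading amplitudes and AWGN at every device."""
        snr = 10 ** (snr_db / 10)
        bits = rng.integers(0, 2, n_bits)
        x = 2.0 * bits - 1.0                                   # unit-power BPSK
        for _ in range(n_hops):
            h = rng.rayleigh(scale=np.sqrt(0.5), size=n_bits)  # E[h^2] = 1
            x = h * x + rng.normal(scale=np.sqrt(0.5 / snr), size=n_bits)
            x /= np.sqrt(np.mean(x ** 2))                      # relay re-amplifies
        return float(np.mean((x > 0) != (bits == 1)))

    print(multihop_bpsk_ber())
    ```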

  6. An Alternative Method to Compute the Bit Error Probability of Modulation Schemes Subject to Nakagami-m Fading

    Directory of Open Access Journals (Sweden)

    Madeiro Francisco

    2010-01-01

    Full Text Available Abstract This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative density function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (R-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
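
    The ratio-of-variables viewpoint in this abstract is easy to exercise numerically: for BPSK, a '+1' symbol is in error exactly when the Gaussian-to-Nakagami noise ratio drops below -1. The Monte Carlo sketch below uses an assumed parameterization (unit-power fading, SNR per bit) purely to illustrate that reading, not the paper's closed forms.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def bep_bpsk_nakagami(m=2.0, snr_db=10.0, n=1_000_000):
        """BEP of BPSK under Nakagami-m fading via the equivalent additive
        noise formed by a Gaussian divided by a Nakagami-m amplitude."""
        snr = 10 ** (snr_db / 10)
        g = rng.normal(scale=np.sqrt(0.5 / snr), size=n)         # Gaussian noise
        a = np.sqrt(rng.gamma(shape=m, scale=1.0 / m, size=n))   # Nakagami-m, E[a^2]=1
        return float(np.mean(g / a < -1.0))   # a '+1' symbol flips when ratio < -1

    print(bep_bpsk_nakagami())
    ```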

  7. Low dose rate gamma ray induced loss and data error rate of multimode silica fibre links

    International Nuclear Information System (INIS)

    Breuze, G.; Fanet, H.; Serre, J.

    1993-01-01

    Fiber optics data transmission from numerous multiplexed sensors is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady-state gamma ray exposure is studied as a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber

  8. Towards the generation of random bits at terahertz rates based on a chaotic semiconductor laser

    International Nuclear Information System (INIS)

    Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael

    2010-01-01

    Random bit generators (RBGs) are important in many aspects of statistical physics and crucial in Monte-Carlo simulations, stochastic modeling and quantum cryptography. The quality of a RBG is measured by the unpredictability of the bit string it produces and the speed at which the truly random bits can be generated. Deterministic algorithms generate pseudo-random numbers at high data rates as they are only limited by electronic hardware speed, but their unpredictability is limited by the very nature of their deterministic origin. It is widely accepted that the core of any true RBG must be an intrinsically non-deterministic physical process, e.g. measuring thermal noise from a resistor. Owing to low signal levels, such systems are highly susceptible to bias, introduced by amplification, and to small nonrandom external perturbations resulting in a limited generation rate, typically less than 100 Mbit/s. We present a physical random bit generator, based on a chaotic semiconductor laser, having delayed optical feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.

  9. Towards the generation of random bits at terahertz rates based on a chaotic semiconductor laser

    Science.gov (United States)

    Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael

    2010-06-01

    Random bit generators (RBGs) are important in many aspects of statistical physics and crucial in Monte-Carlo simulations, stochastic modeling and quantum cryptography. The quality of a RBG is measured by the unpredictability of the bit string it produces and the speed at which the truly random bits can be generated. Deterministic algorithms generate pseudo-random numbers at high data rates as they are only limited by electronic hardware speed, but their unpredictability is limited by the very nature of their deterministic origin. It is widely accepted that the core of any true RBG must be an intrinsically non-deterministic physical process, e.g. measuring thermal noise from a resistor. Owing to low signal levels, such systems are highly susceptible to bias, introduced by amplification, and to small nonrandom external perturbations resulting in a limited generation rate, typically less than 100 Mbit/s. We present a physical random bit generator, based on a chaotic semiconductor laser, having delayed optical feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.

  10. A multi-bit rate interframe movement compensated multimode coder for video conferencing

    Science.gov (United States)

    1982-04-01

    This report describes a multi-bit rate video coder for DARPA video conferencing applications. The coder can operate at any preselected transmission bit rate ranging from 1.5 Mb/s down to 64 kb/s. The proposed National Command Authority Teleconferencing System (NCATS) is designed to connect several conferencing sites. The system provides shared audio, video and graphic spaces. The video conferencing system communicates dynamic images of participants to different conferencing sites. The system is designed to operate under different bandwidth constraints. Under emergency situations the communications bandwidth can be drastically reduced, allowing only 64 kb/s to carry the video conferencing service. Under normal conditions larger channel capacity is available for this service. In order to accommodate the above requirements, a video codec that can operate at different transmission bit rates is needed. This allows for upgrading of picture quality when there is sufficient bandwidth and a graceful reduction of picture quality under severe bandwidth limitations. The NTSC colour video signal, sampled at 14.3 MHz (4 times the colour subcarrier frequency) and uniformly quantized to 8 bits per picture element, requires a transmission bit rate of 114 Mb/s. Such a high bit rate is economically prohibitive, especially for video conferencing applications. In order to reduce the transmission bit rate, redundant information in the signal has to be removed and the specific video conferencing environment has to be exploited.
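
    The 114 Mb/s figure follows directly from the sampling parameters quoted above; a one-line check (the exact subcarrier value gives roughly 114.5 Mb/s, rounded down in the abstract):

    ```python
    fs = 4 * 3.579545e6            # 4x the NTSC colour subcarrier, ~14.3 MHz
    print(f"{fs * 8 / 1e6:.1f} Mb/s uncompressed")   # ~114.5 Mb/s at 8 bits/sample
    ```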

  11. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    Science.gov (United States)

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge. PMID:24678274

  12. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    Directory of Open Access Journals (Sweden)

    Sara Teodoro

    2014-01-01

    Full Text Available Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge.

  13. Digital PSK to BiO-L demodulator for 2^n × (bit rate) carrier

    Science.gov (United States)

    Shull, T. A.

    1979-01-01

    A phase shift key (PSK) to BiO-L demodulator which uses standard digital integrated circuits is discussed. The demodulator produces NRZ-L, bit clock, and BiO-L outputs from digital PSK input signals for which the carrier is a 2^N multiple of the bit rate. The various bit and carrier rates that can be accommodated by changing component values within the demodulator are described. The use of the unit for sinusoidal inputs as well as digital inputs is discussed.

  14. Re-use of Low Bandwidth Equipment for High Bit Rate Transmission Using Signal Slicing Technique

    DEFF Research Database (Denmark)

    Wagner, Christoph; Spolitis, S.; Vegas Olmos, Juan José

    Massive fiber-to-the-home network deployment requires never-ending equipment upgrades operating at higher bandwidth. We show an effective signal slicing method, which can reuse low-bandwidth opto-electronic components for optical communications at higher bit rates.

  15. Increasing the bit rate in OCDMA systems using pulse position modulation techniques.

    Science.gov (United States)

    Arbab, Vahid R; Saghari, Poorya; Haghi, Mahta; Ebrahimi, Paniz; Willner, Alan E

    2007-09-17

    We have experimentally demonstrated two novel pulse position modulation techniques, namely Double Pulse Position Modulation (2-PPM) and Differential Pulse Position Modulation (DPPM), in Time-Wavelength OCDMA systems that operate at a higher bit rate compared to traditional OOK-OCDMA systems with the same bandwidth. With the 2-PPM technique, the number of active users is greater than with DPPM, while their bit rates are almost the same. Both techniques provide variable quality of service in OCDMA networks.

  16. Modeling of alpha-particle-induced soft error rate in DRAM

    International Nuclear Information System (INIS)

    Shin, H.

    1999-01-01

    Alpha-particle-induced soft error in 256M DRAM was numerically investigated. A unified model for alpha-particle-induced charge collection and a soft-error-rate simulator (SERS) were developed. The author investigated the soft error rate of 256M DRAM and identified the bit-bar mode as one of the dominant modes for soft error. In addition, for the first time, it was found that trench-oxide depth has a significant influence on soft error rate, and it should be determined by the tradeoff between soft error rate and cell-to-cell isolation characteristics

  17. High bit rate optical transmission using midspan spectral inversion ...

    African Journals Online (AJOL)

    The noise arising from nonlinear phase effects and chromatic dispersion limits the transmission distance and bit rate of phase-shift-keying modulation formats. In this article, we study the compensation of linear and nonlinear effects by means of midspan optical phase conjugation (OPC). First, we show the effects of chromatic dispersion in an OD8PSK (optical differential 8-level phase-shift keying) system.

  18. Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.

    Science.gov (United States)

    Huang, Shih-Chia; Chen, Bo-Hao

    2013-12-01

    Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffer from either network congestion or unstable bandwidth. Evidence supporting these problems abounds in publications about wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which the bit rate is allocated to match the available network bandwidth is required. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach that is based on the cerebellar model articulation controller (CMAC), realized through artificial neural networks, to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches in variable bit-rate video streams over real-world limited-bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76
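
    The paper's PBG module is CMAC-specific, but the underlying idea of an unsupervised, per-pixel probabilistic background can be sketched generically; the running-Gaussian update below is a stand-in illustration under assumed parameters, not the authors' network.

    ```python
    import numpy as np

    def update_background(mean, var, frame, alpha=0.05):
        """One unsupervised update of a per-pixel Gaussian background model."""
        mean = (1 - alpha) * mean + alpha * frame
        var = (1 - alpha) * var + alpha * (frame - mean) ** 2
        return mean, var

    def foreground_mask(mean, var, frame, k=2.5):
        """Flag pixels deviating more than k sigmas from the background."""
        return np.abs(frame - mean) > k * np.sqrt(var + 1e-8)
    ```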

  19. High-bit rate ultra-compact light routing with mode-selective on-chip nanoantennas.

    Science.gov (United States)

    Guo, Rui; Decker, Manuel; Setzpfandt, Frank; Gai, Xin; Choi, Duk-Yong; Kiselev, Roman; Chipouline, Arkadi; Staude, Isabelle; Pertsch, Thomas; Neshev, Dragomir N; Kivshar, Yuri S

    2017-07-01

    Optical nanoantennas provide a promising pathway toward advanced manipulation of light waves, such as directional scattering, polarization conversion, and fluorescence enhancement. Although these functionalities were mainly studied for nanoantennas in free space or on homogeneous substrates, their integration with optical waveguides offers an important "wired" connection to other functional optical components. Taking advantage of the nanoantenna's versatility and unrivaled compactness, their imprinting onto optical waveguides would enable a marked enhancement of design freedom and integration density for optical on-chip devices. Several examples of this concept have been demonstrated recently. However, the important question of whether nanoantennas can fulfill functionalities for high-bit rate signal transmission without degradation, which is the core purpose of many integrated optical applications, has not yet been experimentally investigated. We introduce and investigate directional, polarization-selective, and mode-selective on-chip nanoantennas integrated with a silicon rib waveguide. We demonstrate that these nanoantennas can separate optical signals with different polarizations by coupling the different polarizations of light vertically to different waveguide modes propagating into opposite directions. As the central result of this work, we show the suitability of this concept for the control of optical signals with ASK (amplitude-shift keying) NRZ (nonreturn to zero) modulation [10 Gigabit/s (Gb/s)] without significant bit error rate impairments. Our results demonstrate that waveguide-integrated nanoantennas have the potential to be used as ultra-compact polarization-demultiplexing on-chip devices for high-bit rate telecommunication applications.

  20. Adaptive Bit Rate Video Streaming Through an RF/Free Space Optical Laser Link

    Directory of Open Access Journals (Sweden)

    A. Akbulut

    2010-06-01

    Full Text Available This paper presents a channel-adaptive video streaming scheme which adjusts video bit rate according to channel conditions and transmits video through a hybrid RF/free space optical (FSO laser communication system. The design criteria of the FSO link for video transmission to 2.9 km distance have been given and adaptive bit rate video streaming according to the varying channel state over this link has been studied. It has been shown that the proposed structure is suitable for uninterrupted transmission of videos over the hybrid wireless network with reduced packet delays and losses even when the received power is decreased due to weather conditions.

  1. Low bit rate coding of Earth science images

    Science.gov (United States)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  2. Estimation of entropy rate in a fast physical random-bit generator using a chaotic semiconductor laser with intrinsic noise.

    Science.gov (United States)

    Mikami, Takuya; Kanno, Kazutaka; Aoyama, Kota; Uchida, Atsushi; Ikeguchi, Tohru; Harayama, Takahisa; Sunada, Satoshi; Arai, Ken-ichi; Yoshimura, Kazuyuki; Davis, Peter

    2012-01-01

    We analyze the time for growth of bit entropy when generating nondeterministic bits using a chaotic semiconductor laser model. The mechanism for generating nondeterministic bits is modeled as a 1-bit sampling of the intensity of light output. Microscopic noise results in an ensemble of trajectories whose bit entropy increases with time. The time for the growth of bit entropy, called the memory time, depends on both noise strength and laser dynamics. It is shown that the average memory time decreases logarithmically with increase in noise strength. It is argued that the ratio of change in average memory time with change in logarithm of noise strength can be used to estimate the intrinsic dynamical entropy rate for this method of random bit generation. It is also shown that in this model the entropy rate corresponds to the maximum Lyapunov exponent.

  3. SpecBit, DecayBit and PrecisionBit: GAMBIT modules for computing mass spectra, particle decay rates and precision observables

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin

    2018-01-01

    We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.

  4. Scalable In-Band Optical Notch-Filter Labeling for Ultrahigh Bit Rate Optical Packet Switching

    DEFF Research Database (Denmark)

    Medhin, Ashenafi Kiros; Galili, Michael; Oxenløwe, Leif Katsuo

    2014-01-01

    We propose a scalable in-band optical notch-filter labeling scheme for optical packet switching of high-bit-rate data packets. A detailed characterization of the notch-filter labeling scheme and its effect on the quality of the data packet is carried out in simulation and verified by experimental...

  5. Power consumption analysis of constant bit rate video transmission over 3G networks

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Evgeny; Wang, Le

    2012-01-01

    This paper presents an analysis of the power consumption of video data transmission with constant bit rate over 3G mobile wireless networks. The work includes the description of the radio resource control transition state machine in 3G networks, followed by a detailed power consumption analysis...

  6. Power consumption analysis of constant bit rate data transmission over 3G mobile wireless networks

    DEFF Research Database (Denmark)

    Wang, Le; Ukhanova, Ann; Belyaev, Evgeny

    2011-01-01

    This paper presents the analysis of the power consumption of data transmission with constant bit rate over 3G mobile wireless networks. Our work includes the description of the transition state machine in 3G networks, followed by the detailed energy consumption analysis and measurement results...

  7. Low bit rates image compression via adaptive block downsampling and super resolution

    Science.gov (United States)

    Chen, Honggang; He, Xiaohai; Ma, Minglang; Qing, Linbo; Teng, Qizhi

    2016-01-01

    A low bit-rate image compression framework based on adaptive block downsampling and super resolution (SR) is presented. At the encoder side, the downsampling mode and quantization mode of each 16×16 macroblock are determined adaptively using a rate-distortion optimization method; the downsampled macroblocks are then compressed by standard JPEG. At the decoder side, a sparse-representation-based SR algorithm is applied to recover full-resolution macroblocks from the decoded blocks. The experimental results show that the proposed framework outperforms standard JPEG and the state-of-the-art downsampling-based compression methods in terms of both subjective and objective comparisons. Specifically, the peak signal-to-noise ratio gain of the proposed framework over JPEG reaches up to 2 to 4 dB at low bit rates, and the critical bit rate relative to JPEG is raised to about 2.3 bits per pixel. Moreover, the proposed framework can be extended to other block-based compression schemes.

  8. Research on bit synchronization based on GNSS

    Science.gov (United States)

    Yu, Huanran; Liu, Yi-jun

    2017-05-01

    The signals transmitted by GPS satellites comprise three components: the carrier, the pseudocode and the data code. The stages of signal processing are acquisition, tracking, bit synchronization, frame synchronization, navigation message extraction, observation extraction and position/velocity calculation, among which bit synchronization is of the greatest importance. Accurate bit synchronization and a shorter bit synchronization time help the receiver realize positioning and recover the information carried by the satellite signals more reliably. How to improve bit synchronization performance, even under weak-signal conditions, is therefore the question we investigate. We adopt a polymorphic energy-accumulation minima method to find the bit synchronization point, and computer simulations show that even at extremely weak signal power this method retains superior synchronization performance, achieving a high bit-edge detection rate and a near-optimal bit error rate.
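
    A hedged reading of the energy-accumulation idea: with GPS L1 C/A, each 20 ms data bit spans 20 one-millisecond coherent correlations, so one can sum candidate 20-period blocks at each of the 20 possible alignments and compare accumulated energies; blocks that straddle a bit edge partially cancel. The sketch below follows the minima criterion named above, with the caveat that in this particular block construction the deepest minimum sits half a bit away from the true edge.

    ```python
    import numpy as np

    def find_bit_edge(prompt, periods_per_bit=20):
        """Energy-accumulation bit-edge search over candidate offsets.

        `prompt` holds 1-ms coherent prompt correlations (+/- amplitudes).
        Blocks straddling a data-bit transition partially cancel, so their
        accumulated |sum|^2 is minimal; the edge then lies half a bit from
        that minimum under this block construction."""
        prompt = np.asarray(prompt, float)
        n_bits = len(prompt) // periods_per_bit - 1
        energy = np.empty(periods_per_bit)
        for k in range(periods_per_bit):
            blocks = prompt[k:k + n_bits * periods_per_bit]
            blocks = blocks.reshape(n_bits, periods_per_bit)
            energy[k] = np.sum(np.abs(blocks.sum(axis=1)) ** 2)
        k_min = int(np.argmin(energy))
        return (k_min + periods_per_bit // 2) % periods_per_bit
    ```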

  9. Average bit error probability of binary coherent signaling over generalized fading channels subject to additive generalized gaussian noise

    KAUST Repository

    Soury, Hamza

    2012-06-01

    This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed-form expression in terms of Fox's H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer-based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.

  10. 3D video bit rate adaptation decision taking using ambient illumination context

    Directory of Open Access Journals (Sweden)

    G. Nur Yilmaz

    2014-09-01

    Full Text Available 3-Dimensional (3D) video adaptation decision taking is an open field in which not many researchers have carried out investigations yet, compared to 3D video display, coding, etc. Moreover, utilizing ambient illumination as an environmental context for 3D video adaptation decision taking has particularly not been studied in the literature to date. In this paper, a user perception model, which is based on determining the perception characteristics of a user for a 3D video content viewed under a particular ambient illumination condition, is proposed. Using the proposed model, a 3D video bit rate adaptation decision taking technique is developed to determine the adapted bit rate for the 3D video content to maintain 3D video quality perception by considering the ambient illumination condition changes. Experimental results demonstrate that the proposed technique is capable of exploiting the changes in ambient illumination level to use network resources more efficiently without sacrificing the 3D video quality perception.

  11. Bit rate maximization for multicast LP-OFDM systems in PLC context

    OpenAIRE

    Maiga , Ali; Baudais , Jean-Yves; Hélard , Jean-François

    2009-01-01

    ISBN: 978-88-900984-8-2.; International audience; In this paper, we propose a new resource allocation algorithm based on linear precoding technique for multicast OFDM systems. Linear precoding technique applied to OFDM systems has already proved its ability to significantly increase the system throughput in a powerline communication (PLC) context. Simulations through PLC channels show that this algorithm outperforms the classical multicast method (up to 7.3% bit rate gain) and gives better pe...

  12. Temporal Masking for Bit-rate Reduction in Audio Codec Based on Frequency Domain Linear Prediction

    OpenAIRE

    Ganapathy, Sriram; Motlicek, Petr; Hermansky, Hynek; Garudadri, Harinath

    2008-01-01

    Audio coding based on Frequency Domain Linear Prediction (FDLP) uses auto-regressive model to approximate Hilbert envelopes in frequency sub-bands for relatively long temporal segments. Although the basic technique achieves good quality of the reconstructed signal, there is a need for improving the coding efficiency. In this paper, we present a novel method for the application of temporal masking to reduce the bit-rate in a FDLP based codec. Temporal masking refers to the hearing phenomenon, ...

  13. 16-bit error detection and correction (EDAC) controller design using FPGA for critical memory applications

    International Nuclear Information System (INIS)

    Misra, M.K.; Sridhar, N.; Krishnakumar, B.; Ilango Sambasivan, S.

    2002-01-01

    Full text: Complex electronic systems require the utmost reliability; especially when the storage and retrieval of critical data demand faultless operation, the system designer must strive for the highest reliability possible. Extra effort must be expended to achieve this reliability. Fortunately, not all systems must operate with these ultra-reliability requirements. The majority of systems operate in an area where system failure is not hazardous. But applications like nuclear reactors, medical devices and avionics are areas where system failure may prove to have harsh consequences. High-density memories generate errors in their stored data due to external disturbances like power supply surges, system noise, natural radiation, etc. These errors are called soft errors or transient errors, since they don't cause permanent damage to the memory cell. Hard errors may also occur on system memory boards. These hard errors occur if one RAM component or RAM cell fails and is stuck at either 0 or 1. Although less frequent, hard errors may cause a complete system failure. These are the major problems associated with memories
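
    The record is truncated, but the standard machinery behind a 16-bit EDAC controller is a Hamming SEC-DED code: five check bits correct any single-bit error and an overall parity bit flags double errors. Below is a compact software sketch of that generic construction, not the FPGA design from the record.

    ```python
    # Hamming(21,16) plus an overall parity bit => SEC-DED over 16 data bits.
    MASKS = {p: sum(1 << i for i in range(1, 22) if i & p) for p in (1, 2, 4, 8, 16)}
    DATA_POS = [i for i in range(1, 22) if i & (i - 1)]   # non-power-of-two slots

    def encode(data16: int) -> int:
        word = 0
        for j, pos in enumerate(DATA_POS):                # place the 16 data bits
            word |= ((data16 >> j) & 1) << pos
        for p, m in MASKS.items():                        # fill the 5 check bits
            word |= (bin(word & m).count("1") & 1) << p
        return word | ((bin(word).count("1") & 1) << 22)  # overall parity (DED)

    def decode(word: int) -> int:
        syn = sum(p for p, m in MASKS.items() if bin(word & m).count("1") & 1)
        if syn:                     # nonzero syndrome = position of a single flip
            word ^= 1 << syn        # (a real controller also checks bit 22 to
        data = 0                    #  flag uncorrectable double errors)
        for j, pos in enumerate(DATA_POS):
            data |= ((word >> pos) & 1) << j
        return data

    flipped = encode(0xBEEF) ^ (1 << 7)       # inject a single soft error
    assert decode(flipped) == 0xBEEF
    ```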

  14. Targeted employee training lowers registration error rate.

    Science.gov (United States)

    2005-05-01

    The registration error rate at the University of Alabama Health Services Foundation was running about 30%, and that was having a negative impact on both billing and collections operations. The health system created a process to identify the employees committing the most errors, and then individual workers were provided with additional information and training to help improve their accuracy. The result was a dramatic performance improvement.

  15. Extending the lifetime of a quantum bit with error correction in superconducting circuits

    Science.gov (United States)

    Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.

    2016-08-01

    Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0>f and |1>f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.

  16. Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo

    2009-01-01

    We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER...

  17. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop, in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture with Rayleigh fading channels. © 2011 IEEE.

  18. On the average capacity and bit error probability of wireless communication systems

    KAUST Repository

    Yilmaz, Ferkan

    2011-12-01

    Analyses of the average binary error probabilities and the average capacity of wireless communications systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probabilities and the average capacity of single and multiple link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper can be easily calculated and is applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.
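
    To make the MGF approach concrete: for BPSK with L-branch MRC over i.i.d. Rayleigh fading, the average BEP reduces to a single finite integral of the SNR moment generating function. A minimal numeric sketch under exactly those assumptions (not the generalized Gamma case treated in the paper):

    ```python
    import numpy as np

    def avg_bep_bpsk_mrc_rayleigh(snr_db: float, branches: int = 1, n: int = 4000):
        """MGF approach: Pb = (1/pi) * Int_0^{pi/2} M_gamma(-1/sin^2 t)^L dt,
        with M_gamma(s) = 1/(1 - s*snr) per i.i.d. Rayleigh branch."""
        snr = 10 ** (snr_db / 10)
        t = (np.arange(n) + 0.5) * (np.pi / 2) / n          # midpoint rule
        integrand = (1.0 / (1.0 + snr / np.sin(t) ** 2)) ** branches
        return float(integrand.mean() * 0.5)                # (1/pi)*(pi/2)*mean

    # Single branch agrees with the closed form 0.5*(1 - sqrt(snr/(1+snr))).
    snr = 10.0
    print(avg_bep_bpsk_mrc_rayleigh(10.0), 0.5 * (1 - np.sqrt(snr / (1 + snr))))
    ```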

  19. Influence of the FEC Channel Coding on Error Rates and Picture Quality in DVB Baseband Transmission

    Directory of Open Access Journals (Sweden)

    T. Kratochvil

    2006-09-01

    Full Text Available The paper deals with the component analysis of DTV (Digital Television) and DVB (Digital Video Broadcasting) baseband channel coding. The principles of the FEC (Forward Error Correction) error-protection codes used are briefly outlined, and the simulation model implemented in Matlab is presented. Results of the achieved bit and symbol error rates and the corresponding picture quality evaluation analysis are presented, including the evaluation of the influence of the channel coding on transmitted RGB images and their noise rates related to MOS (Mean Opinion Score). The conclusion of the paper compares the efficiency of the DVB channel codes.

  20. Novel MGF-based expressions for the average bit error probability of binary signalling over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2014-04-01

    The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed form - by means of using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2], and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.

  1. Efficient Region-of-Interest Scalable Video Coding with Adaptive Bit-Rate Control

    Directory of Open Access Journals (Sweden)

    Dan Grois

    2013-01-01

    Full Text Available This work relates to the region-of-interest (ROI) coding that is a desirable feature in future applications based on scalable video coding, which is an extension of the H.264/MPEG-4 AVC standard. Due to the dramatic technological progress, there is a plurality of heterogeneous devices, which can be used for viewing a variety of video content. Devices such as smartphones and tablets are mostly resource-limited devices, which makes it difficult to display high-quality content. Usually, the displayed video content contains one or more ROI(s), which should be adaptively selected from the pre-encoded scalable video bitstream. Thus, an efficient scalable ROI video coding scheme is proposed in this work, thereby enabling the extraction of the desired regions-of-interest and the adaptive setting of the desirable ROI location, size, and resolution. In addition, an adaptive bit-rate control is provided for the region-of-interest scalable video coding. The performance of the presented techniques is demonstrated and compared with the joint scalable video model reference software (JSVM 9.19), thereby showing significant bit-rate savings as a tradeoff for a relatively low PSNR degradation.

  2. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  3. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
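
    Below is a hedged sketch of the estimation step described above, using a generic Gaussian-process regressor on synthetic past error-rate data; scikit-learn stands in for whatever the authors implemented, and the kernel choice and data are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(2)
    t = np.arange(50, dtype=float).reshape(-1, 1)            # past QEC rounds
    rate = 0.010 + 0.002 * np.sin(t.ravel() / 8) + 0.0005 * rng.standard_normal(50)

    gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1e-7))
    gp.fit(t, rate)                                          # learn the drift
    pred, std = gp.predict([[50.0]], return_std=True)        # forecast next round
    print(f"predicted error rate: {pred[0]:.4f} +/- {std[0]:.4f}")
    ```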

  4. A forward error correction technique using a high-speed, high-rate single chip codec

    Science.gov (United States)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    1989-01-01

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.

  5. A low bit rate FSK technique for SCPC satellite communication systems

    Science.gov (United States)

    Shpilka, Vladimir

    This paper describes and analyzes an application of an FSK (frequency shift keying) communication method with which it is possible to eliminate the degrading effects of ground-station as well as satellite-contributed phase noise on very low bit rate communication systems. Typical transmitter and receiver block diagrams are provided. In situations where the speed of information transmission is not of the greatest importance, but the availability of DC power for the radio frequency transmitter is at a premium, the above-mentioned FSK technique would yield very low power communication systems that could be used with the proposed MSAT satellite. Potential applications include the development of handheld pocket-sized messaging communicators and solar-powered environmental data collection platforms. This class of earth terminals would operate at L-Band and would fall into the category of mobile earth terminals within the context of the MSAT system.

  6. Bit-rate-transparent optical RZ-to-NRZ format conversion based on linear spectral phase filtering

    DEFF Research Database (Denmark)

    Maram, Reza; Da Ros, Francesco; Guan, Pengyu

    2017-01-01

    We propose a novel and strikingly simple design for all-optical bit-rate-transparent RZ-to-NRZ conversion based on optical phase filtering. The proposed concept is experimentally validated through format conversion of a 640 Gbit/s coherent RZ signal to NRZ signal.

  7. Bit rate and pulse width dependence of four-wave mixing of short optical pulses in semiconductor optical amplifiers

    DEFF Research Database (Denmark)

    Diez, S.; Mecozzi, A.; Mørk, Jesper

    1999-01-01

    We investigate the saturation properties of four-wave mixing of short optical pulses in a semiconductor optical amplifier. By varying the gain of the optical amplifier, we find a strong dependence of both conversion efficiency and signal-to-background ratio on pulse width and bit rate. In particu...

  8. 100G Flexible IM-DD 850 nm VCSEL Transceiver with Fractional Bit Rate Using Eight-Dimensional PAM

    DEFF Research Database (Denmark)

    Lu, Xiaofeng; Lyubopytov, Vladimir; Chorchos, Łukasz

    2017-01-01

    We demonstrate a novel optical transceiver scheme with a net flexible bit rate up to 100Gbit/s with 5 Gbit/s granularity, using an eight-dimensional modulation format family, and investigate its performance on capacity, reach, and power tolerance.

  9. A 1 GHz sample rate, 256-channel, 1-bit quantization, CMOS, digital correlator chip

    Science.gov (United States)

    Timoc, C.; Tran, T.; Wongso, J.

    1992-01-01

    This paper describes the development of a digital correlator chip with the following features: 1 Giga-sample/second; 256 channels; 1-bit quantization; 32-bit counters providing up to 4 seconds integration time at 1 GHz; and very low power dissipation per channel. The improvements in the performance-to-cost ratio of the digital correlator chip are achieved with a combination of systolic architecture, novel pipelined differential logic circuits, and standard 1.0 micron CMOS process.
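
    In software terms, 1-bit correlation is just sign quantization followed by XNOR and a popcount accumulator per lag, mirroring the chip's per-channel 32-bit counters; a small sketch (equal-length inputs assumed):

    ```python
    import numpy as np

    def one_bit_correlate(x, y, max_lag):
        """1-bit (hard-clipped) cross-correlation: quantize both inputs to
        sign bits, XNOR them, and count agreements per lag, the software
        analogue of the chip's per-channel counters."""
        bx = (np.asarray(x) >= 0).astype(np.uint8)
        by = (np.asarray(y) >= 0).astype(np.uint8)
        n = len(bx)
        return [int(np.count_nonzero(bx[:n - lag] ^ by[lag:n] ^ 1))  # XNOR popcount
                for lag in range(max_lag + 1)]
    ```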

  10. Pulse shaping for all-optical signal processing of ultra-high bit rate serial data signals

    DEFF Research Database (Denmark)

    Palushani, Evarist

    The following thesis concerns pulse shaping and optical waveform manipulation for all-optical signal processing of ultra-high bit rate serial data signals, including generation of optical pulses in the femtosecond regime, serial-to-parallel conversion and terabaud coherent optical time division...

  11. 100G Flexible IM-DD 850 nm VCSEL Transceiver with Fractional Bit Rate Using Eight-Dimensional PAM

    DEFF Research Database (Denmark)

    Lu, Xiaofeng; Lyubopytov, Vladimir; Chorchos, Łukasz

    2017-01-01

    We demonstrate a novel optical transceiver scheme with a net flexible bit rate up to 100Gbit/s with 5 Gbit/s granularity, using an eight-dimensional modulation format family, and investigate its performance on capacity, reach, and power tolerance.

  12. Reconfigurable Digital Coherent Receiver for Metro-Access Networks Supporting Mixed Modulation Formats and Bit-rates

    DEFF Research Database (Denmark)

    Caballero Jambrina, Antonio; Guerrero Gonzalez, Neil; Arlunno, Valeria

    2013-01-01

    A single, reconfigurable, digital coherent receiver is proposed and experimentally demonstrated for converged wireless and optical fiber transport. The capacity of reconstructing the full transmitted optical field allows for the demodulation of mixed modulation formats and bit-rates. We performed...

  13. A bit-rate flexible and power efficient all-optical demultiplexer realised by monolithically integrated Michelson interferometer

    DEFF Research Database (Denmark)

    Vaa, Michael; Mikkelsen, Benny; Jepsen, Kim Stokholm

    1996-01-01

    A novel bit-rate flexible and very power efficient all-optical demultiplexer using differential optical control of a monolithically integrated Michelson interferometer with MQW SOAs is demonstrated at 40 to 10 Gbit/s. Gain switched DFB lasers provide ultra stable data and control signals....

  14. BETTER FINGERPRINT IMAGE COMPRESSION AT LOWER BIT-RATES: AN APPROACH USING MULTIWAVELETS WITH OPTIMISED PREFILTER COEFFICIENTS

    Directory of Open Access Journals (Sweden)

    N R Rema

    2017-08-01

    In this paper, a multiwavelet-based fingerprint compression technique using the set partitioning in hierarchical trees (SPIHT) algorithm with optimised prefilter coefficients is proposed. While wavelet-based progressive compression techniques give a blurred image at lower bit rates due to lack of high-frequency information, multiwavelets can be used efficiently to represent high-frequency information. The SA4 (symmetric antisymmetric) multiwavelet, when combined with SPIHT, reduces the number of nodes during initialization to a quarter of that for SPIHT with a wavelet. This reduction in nodes leads to an improvement in PSNR at lower bit rates. The PSNR can be further improved by optimizing the prefilter coefficients; in this work a genetic algorithm (GA) is used for the optimization. Using the proposed technique, there is a considerable improvement in PSNR at lower bit rates compared to existing techniques in the literature: an overall average improvement of 4.23 dB and 2.52 dB, for bit rates between 0.01 and 1, has been achieved for the images in the databases FVC 2000 DB1 and FVC 2002 DB3 respectively. The quality of the reconstructed image is better even at higher compression ratios such as 80:1 and 100:1, and the level of decomposition required for a multiwavelet is lower than for a wavelet.
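
    For reference, the two axes of the results quoted above are computed as follows; this is a generic sketch of the metrics (PSNR and bits per pixel), not of the SPIHT/multiwavelet codec itself:

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak**2 / mse))

def bits_per_pixel(compressed_bytes: int, height: int, width: int) -> float:
    """The 'bit rate' axis: total coded bits divided by image pixels."""
    return 8.0 * compressed_bytes / (height * width)
```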

  15. Time-dependent characteristic of negative feedback optical amplifier at bit rates 10-Gbit/s based on an optical triode

    Science.gov (United States)

    Harada, Yuki; Azmi, Mohamad Syafiq; Azizan, Siti Aisyah; Matsutani, Takaomi; Maeda, Yoshinobu

    2015-01-01

    We proposed and demonstrated an all-optical triode based on a tandem wavelength converter using cross-gain modulation (XGM) in semiconductor optical amplifiers (SOAs). A negative-feedback optical amplification scheme, which has the key advantages of reducing the bit error rate and reshaping the waveform at the output, was employed in this optical triode. The scheme feeds an input signal together with a negative feedback signal (a signal with intensity inverse to the input) into the optical amplifier. Manipulating the intensity of the negative feedback signal enabled the noise-suppression effect to be optimized, and the outputs showed improvements in bit error rate (BER) as well as waveform reshaping, as seen in the eye pattern. Because of the XGM mechanism, however, there is a setback in that both signals cannot be fed simultaneously. Therefore, using an optical delay, the negative feedback timing was manipulated, and we investigated the timing characteristics of the negative-feedback optical amplifier with BER measurements and eye-pattern waveforms at 10 Gb/s.

  16. Medication error identification rates by pharmacy, medical, and nursing students.

    Science.gov (United States)

    Warholak, Terri L; Queiruga, Caryn; Roush, Rebecca; Phan, Hanna

    2011-03-10

    To assess and compare prescribing-error identification rates by health professional students. Medical, pharmacy, and nursing students were asked to complete a questionnaire on which they evaluated the accuracy of 3 prescriptions and indicated the type of error found, if any. The number of correctly identified prescribing errors and the number of correct error types identified were compared, and error-identification rates for each group were calculated. One hundred seventy-five questionnaires were returned (87% response rate). Pharmacy students had a significantly higher error-identification rate than medical and nursing students, whereas the rates for medical and nursing students did not differ (p = 0.88). Compared to medical students, pharmacy students more often were able to identify correctly the error type for each prescription. Pharmacy students' higher error-identification rate may be associated with the greater number of pharmacology and pharmacotherapeutics course hours that pharmacy students complete.

  17. Design of pseudo-symmetric high bit rate, bend insensitive optical fiber applicable for high speed FTTH

    Science.gov (United States)

    Makouei, Somayeh; Koozekanani, Z. D.

    2014-12-01

    In this paper, with a sophisticated modification of the modal-field distribution and a new design procedure, a single-mode fiber with ultra-low bending loss and pseudo-symmetric high bit rates for uplink and downlink, appropriate for fiber-to-the-home (FTTH) operation, is presented. The bending-loss reduction and dispersion management are done by means of a genetic algorithm. The remarkable feature of this methodology is the design of a bend-insensitive fiber without reduction of the core radius and MFD. Simulation results show a bending loss of 1.27×10⁻² dB/turn at 1.55 μm for a 5 mm curvature radius. The MFD and Aeff are 9.03 μm and 59.11 μm², respectively. Moreover, the upstream and downstream bit rates are approximately 2.38 Gbit/s·km and 3.05 Gbit/s·km.

  18. 45 CFR 98.100 - Error Rate Report.

    Science.gov (United States)

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... the total dollar amount of payments made in the sample); the average amount of improper payment; and... not received. (e) Costs of Preparing the Error Rate Report—Provided the error rate calculations and...

  19. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)]

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique.

  20. FODA/IBEA satellite access scheme for MIXED traffic at variable bit and coding rates system description

    OpenAIRE

    Celandroni, Nedo; Ferro, Erina; Mihal, Vlado; Potortì, Francesco

    1992-01-01

    This report describes the FODA system working at variable coding and bit rates (FODA/IBEA-TDMA). FODA/IBEA is the natural evolution of the FODA-TDMA satellite access scheme, which worked at a 2 Mbit/s fixed rate with data 1/2-coded or uncoded. FODA-TDMA was used in the European SATINE-II experiment [8]. We note that the term FODA/IBEA system encompasses both the FODA/IBEA-TDMA (1) satellite access scheme and the hardware prototype realised by Marconi R.C. (U.K.). Both of them come fro...

  1. Single Event Test Methodologies and System Error Rate Analysis for Triple Modular Redundant Field Programmable Gate Arrays

    Science.gov (United States)

    Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael

    2010-01-01

    We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upsets (MBUs) are also discussed.
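
    The arithmetic at the heart of any TMR system error-rate model: a triplicated, voted module produces a wrong output only when at least two of its three replicas are upset within the same repair (scrub) interval. A generic sketch of that calculation follows; it is the standard TMR combinatorial model, not necessarily the exact model presented in the paper:

```python
def tmr_failure_prob(p: float) -> float:
    """Probability a voted triplet fails, given each replica is independently
    upset with probability p within one scrub interval: two or three upsets."""
    return 3.0 * p**2 * (1.0 - p) + p**3

p = 1e-6                      # hypothetical per-replica upset probability
print(tmr_failure_prob(p))    # ~3e-12: single upsets are suppressed quadratically
```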

  2. The Rate of Physicochemical Incompatibilities, Administration Errors. Factors Correlating with Nurses' Errors.

    Science.gov (United States)

    Fahimi, Fanak; Sefidani Forough, Aida; Taghikhani, Sepideh; Saliminejad, Leila

    2015-01-01

    Medication errors are commonly encountered in hospital settings. Intravenous medications pose particular risks because of their greater complexity and the multiple steps required in their preparation, administration and monitoring. We aimed to determine the rate of errors during the preparation and administration phase of intravenous medications and the correlation of these errors with the demographics of the nurses involved in the process. One hundred patients who were receiving IV medications were monitored by a trained pharmacist. The researcher accompanied the nurses during the preparation and administration process of IV medications. Collected data were compared with the acceptable guidelines. A checklist was filled for each IV medication, and demographic data of the nurses were collected as well. A total of 454 IV medications were recorded. Inappropriate administration constituted a large proportion of the errors in our study (35.3%). No significant or life-threatening drug interaction was recorded during the study. Evaluating the impact of the nurses' demographic characteristics on the incidence of medication errors showed a direct correlation between nurses' employment status and the rate of medication errors, while other characteristics did not show a significant impact on the rate of administration errors. Administration errors were significantly higher in the temporary one-year contract group than in other groups (p-value < 0.0001). The results show that there should be more vigilance over the administration of IV medications to prevent negative consequences, especially by pharmacists. Optimizing the working conditions of nurses may play a crucial role.

  3. Transmission modulation system for mobile phone reduces battery consumption without increasing bit error rate

    NARCIS (Netherlands)

    Moretti, M.; Janssen, G.J.M.

    2000-01-01

    The transmission modulation system minimizes the wasted 'out of band' power. The digital data (1) to be transmitted is fed via a pulse response filter (2) to a mixer (4) where it modulates a carrier wave (4). The digital data is also fed via a delay circuit (5) and identical filter (6) to a second

  4. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2011-06-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
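
    The building block of such closed-form analyses is the average of the conditional BPSK error probability Q(√(2γ)) over the per-hop SNR distribution; for a Rayleigh-faded hop with exponentially distributed SNR this average has the well-known closed form ½(1 − √(γ̄/(1+γ̄))). Below is a sketch verifying that single-hop formula numerically with SciPy; the paper's end-to-end relaying expression is more involved and is not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def bpsk_ber_rayleigh(avg_snr: float) -> float:
    """Closed-form average BPSK BER over one Rayleigh-faded hop."""
    return 0.5 * (1.0 - np.sqrt(avg_snr / (1.0 + avg_snr)))

def bpsk_ber_rayleigh_mc(avg_snr: float, n: int = 1_000_000) -> float:
    """Monte Carlo check: average Q(sqrt(2*gamma)) over exponential SNR."""
    gamma = np.random.default_rng(0).exponential(avg_snr, n)
    return float(np.mean(norm.sf(np.sqrt(2.0 * gamma))))  # Q(x) = norm.sf(x)

avg_snr = 10.0 ** (10.0 / 10.0)  # 10 dB average SNR
print(bpsk_ber_rayleigh(avg_snr), bpsk_ber_rayleigh_mc(avg_snr))
```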

  5. All-optical wavelength conversion at bit rates above 10 Gb/s using semiconductor optical amplifiers

    DEFF Research Database (Denmark)

    Jørgensen, Carsten; Danielsen, Søren Lykke; Stubkjær, Kristian

    1997-01-01

    This work assesses the prospects for high-speed all-optical wavelength conversion using the simple optical interaction with the gain in semiconductor optical amplifiers (SOAs) via the interband carrier recombination. Operation and design guidelines for conversion speeds above 10 Gb/s are described and the various tradeoffs are discussed. Experiments at bit rates up to 40 Gb/s are presented for both cross-gain modulation (XGM) and cross-phase modulation (XPM) in SOAs, demonstrating the high-speed capability of these techniques.

  6. Development of a DMILL radhard multiplexer for the ATLAS Glink optical link and radiation test with a custom Bit ERror Tester

    CERN Document Server

    Dzahini, D

    2001-01-01

    A high-speed digital optical data link has been developed for the front-end readout of the ATLAS electromagnetic calorimeter. It is based on a commercial serialiser commonly known as Glink and a vertical-cavity surface-emitting laser. To be compatible with the data interface requirements, the Glink must be coupled to a radhard multiplexer, which has been designed in DMILL technology to reduce the impact of neutron and gamma radiation on the link performance. This multiplexer features very severe timing constraints related both to the front-end board output data and to the Glink control and input signals. The full link has been successfully neutron- and proton-radiation tested by means of a custom bit error tester. (7 refs).

  7. Performance evaluations of hybrid modulation with different optical labels over PDQ in high bit-rate OLS network systems.

    Science.gov (United States)

    Xu, M; Li, Y; Kang, T Z; Zhang, T S; Ji, J H; Yang, S W

    2016-11-14

    Two orthogonal-modulation optical label switching (OLS) schemes, in which a payload of polarization-multiplexed differential quadrature phase shift keying (POLMUX-DQPSK, or PDQ) is modulated with identifications of a duobinary (DB) label and a pulse position modulation (PPM) label, are studied in high bit-rate OLS network systems. The BER performance of hybrid modulation with payload and label signals is discussed and evaluated in theory and simulation. Theoretical BER expressions for PDQ, PDQ-DB and PDQ-PPM are given using an analysis method for hybrid-modulation encoding at different bit-rate ratios of payload and label. The theoretical derivations show that the payload of hybrid modulation has a certain receiver-sensitivity gain over a payload without a label, and that the size of the payload BER gain obtained from hybrid modulation depends on the type of label. The simulation results are consistent with the theoretical conclusions. The extinction-ratio (ER) conflict between the intensity- and phase-type components of hybrid encoding can be balanced and optimized in an OLS system with hybrid modulation. The BER analysis method for hybrid-modulation encoding in OLS systems can be applied to other n-ary hybrid or combined modulation systems.

  8. A robust compression system for low bit rate telemetry: Test results with lunar data

    Science.gov (United States)

    Sayood, Khalid; Rost, Martin C.

    1989-01-01

    A robust noiseless encoding scheme is presented for encoding the gamma ray spectroscopy data. The encoding algorithm is simple to implement and has minimal buffering requirements. The decoder contains error correcting capability in the form of a MAP receiver. While the MAP receiver adds some complexity, this is limited to the decoder. Nothing additional is needed at the encoder side for its functioning.

  9. Bit-padding information guided channel hopping

    KAUST Repository

    Yang, Yuli

    2011-02-01

    In the context of multiple-input multiple-output (MIMO) communications, we propose a bit-padding information guided channel hopping (BP-IGCH) scheme which, building on the IGCH concept, breaks the limitation that the number of transmit antennas has to be a power of two. The proposed scheme prescribes different bit-lengths to be mapped onto the indices of the transmit antennas and then uses a padding technique to avoid error propagation. Numerical results and comparisons, on both the capacity and the bit error rate performances, are provided and show the advantage of the proposed scheme. The BP-IGCH scheme not only offers lower complexity to realize the design flexibility, but also achieves better performance. © 2011 IEEE.

  10. A novel unified expression for the capacity and bit error probability of wireless communication systems over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-07-01

    Analyses of the average binary error probability (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels have been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated, and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of the authors' newly derived results. © 1972-2012 IEEE.

  11. Individual Differences and Rating Errors in First Impressions of Psychopathy

    Directory of Open Access Journals (Sweden)

    Christopher T. A. Gillen

    2016-10-01

    The current study is the first to investigate whether individual differences in personality are related to improved first impression accuracy when appraising psychopathy in female offenders from thin-slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and when neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.

  12. OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS & HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION

    Energy Technology Data Exchange (ETDEWEB)

    Alan Black; Arnis Judzis

    2004-10-01

    The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility; thus the program was delayed further to accommodate the full testing program.

  13. Evaluation of soft errors rate in a commercial memory EEPROM

    International Nuclear Information System (INIS)

    Claro, Luiz H.; Silva, A.A.; Santos, Jose A.

    2011-01-01

    Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along its track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. The soft-error susceptibility differs between memory technologies, hence experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented together with the results for the SER of a commercial electrically erasable and programmable read-only memory (EEPROM). (author)

  14. Efficient Hybrid Watermarking Scheme for Security and Transmission Bit Rate Enhancement of 3D Color-Plus-Depth Video Communication

    Science.gov (United States)

    El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.

    2018-03-01

    Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Therefore, there is a great need for efficient compression to transmit and store the 3DV + D content in compressed form, to meet future resource bounds while preserving acceptable reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyrighted content. This paper proposes an efficient hybrid watermarking scheme for securing 3DV + D transmission, based on the homomorphic transform and Singular Value Decomposition (SVD) in the Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks by embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves transmission bit rate and consequently enhances channel bandwidth efficiency. The performance of the proposed watermarking scheme is compared with those of state-of-the-art hybrid watermarking schemes. The comparisons depend on both subjective visual results and objective results: the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames. Extensive simulation results on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust in the presence of attacks: it achieves not only very good perceptual quality, with high PSNR values and savings in the transmission bit rate, but also high correlation coefficient values in the presence of attacks compared to the existing hybrid watermarking schemes.

  15. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Fisher proposed the linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop a diagnostic logic separating normal from abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four years of research yielded results inferior to the decision-tree logic developed by a medical doctor. After this experience, we discriminated many datasets and found four problems with discriminant analysis. A revised optimal LDF by integer programming (Revised IP-OLDF), based on the minimum number of misclassifications (minimum NM) criterion, resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method, which offers a model-selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
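
    A minimal sketch of the repeated cross-validation idea on synthetic two-class data, with scikit-learn's Fisher-style linear discriminant standing in for the paper's Revised IP-OLDF; the empirical percentiles of the per-repetition error rates give an interval in the spirit of the proposed 95% CIs:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
# Synthetic two-class data standing in for the ECG example.
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(0.8, 1.0, (200, 5))])
y = np.repeat([0, 1], 200)

error_rates = []
for rep in range(100):                      # repeated 10-fold cross-validation
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=rep)
    errs = [
        np.mean(LinearDiscriminantAnalysis().fit(X[tr], y[tr]).predict(X[te]) != y[te])
        for tr, te in folds.split(X, y)
    ]
    error_rates.append(np.mean(errs))

lo, hi = np.percentile(error_rates, [2.5, 97.5])
print(f"error rate 95% interval: [{lo:.3f}, {hi:.3f}]")
```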

  16. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  17. Topological quantum error correction with optimal encoding rate

    International Nuclear Information System (INIS)

    Bombin, H.; Martin-Delgado, M. A.

    2006-01-01

    We prove the existence of topological quantum error correcting codes with encoding rates k/n asymptotically approaching the maximum possible value. Explicit constructions of these topological codes are presented using surfaces of arbitrary genus. We find a class of regular toric codes that are optimal. For physical implementations, we present planar topological codes.

  18. Transmission Characteristics of an OFDM signal for Power Line Communication System with High Bit Rate

    Science.gov (United States)

    Mori, Akira; Watanabe, Yosuke; Tokuda, Masamitsu; Kawamoto, Koji

    In this paper, we measured how the transmission characteristics of electric power lines of various configurations affect the transmission characteristics of an OFDM (Orthogonal Frequency Division Multiplexing) signal passing through a PLC (power line communication) modem. We classified the power-line configurations found in a real environment into two basic elements: an outlet-type branch and a switch-type branch. Next, the PHY (physical-layer) rate was measured for each basic element connected to the PLC modem, and the transmission characteristics of the power line were simulated from the measured data. OFDM transmitting and receiving systems were modelled on a computer and the PHY rate was simulated. Comparing measured and calculated values reveals that the PHY rate of the PLC modem is affected most when the power-line transmission characteristics exhibit broadband, high-level attenuation and group-delay variation, and is not affected when the attenuation and group-delay variation are narrowband.

  19. Effect of the Bit Rate on the Pulses of the Laser Diodes | Ayadi ...

    African Journals Online (AJOL)

    The qualities required of laser diodes are spatial and temporal coherence and good modulation performance. This paper presents the effect of data rate on the optical pulses delivered by a laser diode, studied using the COMSIS software. Two types of modulation have been considered: direct modulation and external modulation.

  20. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  1. A Burst-Mode Photon-Counting Receiver with Automatic Channel Estimation and Bit Rate Detection

    Science.gov (United States)

    2016-02-24

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea...

  2. Digital sound: The selection of critical programme material and preparation of recordings for CCIR tests on low bit-rate codecs

    Science.gov (United States)

    Gilchrist, N. H. C.

    A draft of a new recommendation on low bit-rate digital audio coding for broadcasting is in preparation within CCIR Study Group 10. As part of this work, subjective tests are being conducted to determine the preferred coding systems to be used in the various applications, and at which bit rates they should be used. The BBC has been contributing to the work by conducting preliminary listening tests to select critical program material, and by preparing recordings using this material for use by the CCIR's testing centers.

  3. Forensic watermarking and bit-rate conversion of partially encrypted AAC bitstreams

    Science.gov (United States)

    Lemma, Aweke; Katzenbeisser, Stefan; Celik, Mehmet U.; Kirbiz, S.

    2008-02-01

    Electronic Music Distribution (EMD) is undergoing two fundamental shifts. The delivery over wired broadband networks to personal computers is being replaced by delivery over heterogeneous wired and wireless networks, e.g. 3G and Wi-Fi, to a range of devices such as mobile phones, game consoles and in-car players. Moreover, restrictive DRM models bound to a limited set of devices are being replaced by flexible standards-based DRM schemes and, increasingly, forensic tracking technologies based on watermarking. Success of these EMD services will partially depend on scalable, low-complexity and bandwidth-efficient content protection systems. In this context, we propose a new partial encryption scheme for Advanced Audio Coding (AAC) compressed audio which is particularly suitable for emerging EMD applications. The scheme encrypts only the scale-factor information in the AAC bitstream with an additive one-time-pad. This allows intermediate network nodes to transcode the bitstream to lower data rates without accessing the decryption keys, by increasing the scale-factor values and re-quantizing the corresponding spectral coefficients. Furthermore, the decryption key for each user is customized such that the decryption process imprints the audio with a unique forensic tracking watermark. This constitutes a secure, low-complexity watermark embedding process at the destination node, i.e. the player. As opposed to server-side embedding methods, the proposed scheme lowers the computational burden on servers and allows for network-level bandwidth-saving measures such as multi-casting and caching.
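
    The key property of the scheme is that an additive one-time pad on the scale factors commutes with the additive scale-factor adjustment used for transcoding, so an intermediate node can lower the bit rate without ever holding the key. A toy demonstration of that commutativity follows; the modulus and values are illustrative and do not follow the actual AAC bitstream syntax:

```python
import secrets

SF_MOD = 256  # illustrative scale-factor alphabet size

def encrypt_sf(sf, pad):
    """Additive one-time pad over the scale factors only."""
    return [(s + p) % SF_MOD for s, p in zip(sf, pad)]

def decrypt_sf(enc, pad):
    return [(s - p) % SF_MOD for s, p in zip(enc, pad)]

def transcode(enc, delta):
    """Keyless bit-rate reduction: raise every scale factor by `delta`
    (coarser quantization); addition commutes with the additive pad."""
    return [(s + delta) % SF_MOD for s in enc]

sf = [100, 102, 98, 105]                          # toy scale factors
pad = [secrets.randbelow(SF_MOD) for _ in sf]
transcoded = transcode(encrypt_sf(sf, pad), 4)    # done without the key
assert decrypt_sf(transcoded, pad) == [(s + 4) % SF_MOD for s in sf]
```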

  4. Does physiological hyperarousal enhance error rates among insomnia sufferers?

    Science.gov (United States)

    Edinger, Jack D; Means, Melanie K; Krystal, Andrew D

    2013-08-01

    To examine the association between physiological hyperarousal and response accuracy on reaction time tasks among individuals with insomnia. This study was conducted at affiliated Veterans Administration (VA) and academic medical centers using a matched-group, cross-sectional research design. Eighty-nine individuals (48 women) with primary insomnia, PI (MAge = 49.8 ± 17.2 y) and 95 individuals (48 women) who were well-screened normal sleepers, NS (MAge = 46.9 ± 17.0 y). Participants underwent 3 nights of polysomnography followed by daytime testing with a four-trial Multiple Sleep Latency Test (MSLT). Before each MSLT nap, they rated their sleepiness and completed computer-administered reaction time tasks. The mean number of correct and error responses made by each participant across testing trials served as dependent measures. The PI and NS groups were each subdivided into alert (e.g., MSLT mean onset latency > 8 min) and sleepy (e.g., MSLT mean onset latency ≤ 8 min) subgroups to allow for testing the main and interaction effects of participant type and level of alertness. Alert participants had longer MSLT latencies than sleepy participants (12.7 versus 5.4 min), yet both alert and sleepy individuals with PI reported greater sleepiness than NS. Alert participants also showed lower sleep efficiencies (83.5% versus 86.2%, P = 0.03), suggesting 24-h physiological hyperarousal particularly in the PI group. Individuals with PI had fewer correct responses on performance testing than did NS, whereas a significant group × alertness interaction (P = 0.0013) showed greater error rates among alert individuals with PI (mean = 4.5 ± 3.6 errors per trial) than among alert NS (mean = 2.6 ± 1.9 errors per trial). Physiological hyperarousal in insomnia may lead to more apparent daytime alertness yet dispose individuals with insomnia to higher error rates on tasks requiring their attention.

  5. Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation

    Science.gov (United States)

    Swift, G.

    2002-01-01

    JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.

  6. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  7. Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    Energy Technology Data Exchange (ETDEWEB)

    Alan Black; Arnis Judzis

    2003-10-01

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 through September 2003. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands, Texas, to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commence planning for Phase 1 testing--recommendations for bits and fluids.

  8. Controlling Rater Stringency Error in Clinical Performance Rating: Further Validation of a Performance Rating Theory.

    Science.gov (United States)

    Cason, Gerald J.; And Others

    Prior research in a single clinical training setting has shown Cason and Cason's (1981) simplified model of their performance rating theory can improve rating reliability and validity through statistical control of rater stringency error. Here, the model was applied to clinical performance ratings of 14 cohorts (about 250 students and 200 raters)…

  9. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Hoskinson; R C. Rope; L G. Blackwood; R D. Lee; R K. Fink

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences

  10. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar

    2012-12-29

    In this paper, we present an optimal resource allocation (ORA) scheme for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints are considered on the system. We consider the cases of both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with direct link only and uniform power allocation (UPA) in terms of minimizing the SER for all three cases of constraints. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).

  11. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    Science.gov (United States)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra-prediction process of the H.264 video coding standard is used to code the first frame of a video, i.e. the intra frame, and achieves good coding efficiency compared to previous video coding standards. A further benefit of intra-frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity and provides better rate-distortion performance. Intra frames are conventionally coded with the rate-distortion optimization (RDO) method, which increases computational complexity and bit rate and reduces picture quality, making it difficult to implement in real-time applications; many researchers have therefore developed fast mode-decision algorithms for intra-frame coding. Previous work on intra-frame coding in H.264 using fast mode-decision intra-prediction algorithms based on different techniques suffered from increased bit rate and degraded picture quality (PSNR) at different quantization parameters: many earlier fast mode-decision approaches achieved only a reduction in computational complexity (saving encoding time) at the cost of a higher bit rate and a loss of picture quality. To avoid the increase in bit rate and loss of picture quality, this paper develops a better approach, i.e. a Gaussian pulse, for intra-frame coding using the diagonal down-left intra-prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame's macroblocks before the quantization process. Multiplying each 4x4 block of integer-transformed coefficients by the Gaussian pulse at the macroblock level scales the information in the coefficients in a reversible manner. The frequency samples are scaled in a known and controllable manner without intermixing of coefficients, which avoids...
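
    A sketch of the core operation described above: element-wise multiplication of each 4x4 block of transform coefficients by a Gaussian surface before quantization, which is reversible at the decoder and never mixes coefficients. The pulse width and block values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gaussian_pulse_4x4(sigma: float = 2.0) -> np.ndarray:
    """A 4x4 Gaussian weighting surface over the transform frequencies."""
    u, v = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    return np.exp(-(u**2 + v**2) / (2.0 * sigma**2))

g = gaussian_pulse_4x4()
block = np.arange(16, dtype=float).reshape(4, 4)  # stand-in 4x4 transform block
scaled = block * g        # element-wise: no coefficient ever mixes with another
assert np.allclose(scaled / g, block)             # exactly reversible at decoder
```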

  12. Adaptive Long-Term Coding of LSF Parameters Trajectories for Large-Delay/Very- to Ultra-Low Bit-Rate Speech Coding

    Directory of Open Access Journals (Sweden)

    Laurent Girin

    2010-01-01

    This paper presents a model-based method for coding the LSF parameters of LPC speech coders on a "long-term" basis, that is, beyond the usual 20–30 ms frame duration. The objective is to provide efficient LSF quantization for a speech coder with large delay but a very- to ultra-low bit rate (i.e., below 1 kb/s). To do this, speech is first segmented into voiced/unvoiced segments. A discrete cosine model of the time trajectory of the LSF vectors is then applied to each segment to capture the LSF interframe correlation over the whole segment. Bi-directional transformation from the model coefficients to a reduced set of LSF vectors enables both efficient "sparse" coding (using here multistage vector quantizers) and the generation of interpolated LSF vectors at the decoder. The proposed method provides up to 50% gain in bit rate over frame-by-frame quantization while preserving signal quality, and competes favorably with 2D-transform coding for the lower range of tested bit rates. Moreover, the implicit time-interpolation nature of the long-term coding process gives this technique high potential for use in speech synthesis systems.
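
    A minimal sketch of the discrete cosine trajectory model using SciPy: fit a DCT to one LSF parameter's track over a segment, keep only a few low-order coefficients (the "sparse" representation to be quantized), and regenerate the interpolated trajectory at the decoder. The segment length, trajectory and coefficient count are hypothetical:

```python
import numpy as np
from scipy.fft import dct, idct

frames = np.arange(40)                                # one 40-frame voiced segment
traj = 0.30 + 0.05 * np.sin(np.pi * frames / 39)      # hypothetical LSF track

coef = dct(traj, norm="ortho")     # model the trajectory in the DCT domain
coef[4:] = 0.0                     # keep 4 coefficients: 40 values -> 4 to quantize
recon = idct(coef, norm="ortho")   # decoder regenerates interpolated LSF values

print(f"max reconstruction error: {np.max(np.abs(recon - traj)):.5f}")
```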

  13. A digital divider with extension bits for position-sensitive detectors

    International Nuclear Information System (INIS)

    Koike, Masaki; Hasegawa, Ken-ichi

    1988-01-01

    Digitizing errors produced in a digital divider for position-sensitive detectors have been reduced by adding extension bits to the data bits. A relation between the number of extension bits and data bits needed to obtain perfect position uniformity is also given. A digital divider employing 10-bit ADCs and 6-bit extension circuits has been constructed. (orig.)
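
    One plausible reading of the extension-bit technique, sketched below: random bits appended below the data bits dither the quantized ratio A/(A+B), evening out the channel-width pattern that plain integer division produces. The bit widths follow the record (10-bit data, 6 extension bits, here with an 8-bit position output), but the circuit details are assumptions:

```python
import random
from collections import Counter

def position_code(a: int, b: int, out_bits: int = 8, ext_bits: int = 0) -> int:
    """Charge-division position A/(A+B), scaled to 2**out_bits channels.
    Optional random extension bits are appended below the data bits."""
    if ext_bits:
        a = (a << ext_bits) | random.getrandbits(ext_bits)
        b = (b << ext_bits) | random.getrandbits(ext_bits)
    return (a << out_bits) // (a + b)

# With a fixed total charge, plain integer division packs unequal numbers of
# input codes into each position channel; dithering evens the widths out.
random.seed(1)
for ext in (0, 6):
    hist = Counter()
    for _ in range(50):
        for a in range(1, 1000):               # 10-bit-scale input codes
            hist[position_code(a, 1000 - a, ext_bits=ext)] += 1
    counts = [hist[c] for c in range(256)]
    print(f"ext_bits={ext}: channel counts range {min(counts)}..{max(counts)}")
```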

  14. Differential-phase-shift quantum key distribution experiment using fast physical random bit generator with chaotic semiconductor lasers.

    Science.gov (United States)

    Honjo, Toshimori; Uchida, Atsushi; Amano, Kazuya; Hirano, Kunihito; Someya, Hiroyuki; Okumura, Haruka; Yoshimura, Kazuyuki; Davis, Peter; Tokura, Yasuhiro

    2009-05-25

    A high speed physical random bit generator is applied for the first time to a gigahertz clocked quantum key distribution system. Random phase-modulation in a differential-phase-shift quantum key distribution (DPS-QKD) system is performed using a 1-Gbps random bit signal which is generated by a physical random bit generator with chaotic semiconductor lasers. Stable operation is demonstrated for over one hour, and sifted keys are successfully generated at a rate of 9.0 kbps with a quantum bit error rate of 3.2% after 25-km fiber transmission.

  15. Testing Theories of Transfer Using Error Rate Learning Curves.

    Science.gov (United States)

    Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I

    2016-07-01

    We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. Copyright © 2016 Cognitive Science Society, Inc.

  16. Bit and Power Loading Approach for Broadband Multi-Antenna OFDM System

    DEFF Research Database (Denmark)

    Rahman, Muhammad Imadur; Das, Suvra S.; Wang, Yuanye

    2007-01-01

    In this work, we have studied bit and power allocation strategies for multi-antenna assisted Orthogonal Frequency Division Multiplexing (OFDM) systems and investigated the impact of different rates of bit and power allocation on various multi-antenna diversity schemes. It is observed that, if we cannot find the exact Signal to Noise Ratio (SNR) thresholds due to different reasons, such as reduced Link Adaptation (LA) rate, Channel State Information (CSI) error, feedback delay etc., it is better to fix the transmit power across all sub-channels to guarantee the target Frame Error Rate (FER). Otherwise, it is possible to use adaptive power distribution to save power, which can be used for other purposes, or to increase the throughput of the system by transmitting a higher number of bits. We also observed that in some scenarios and in some system conditions, some form of simultaneous bit and power...

  17. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes, implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  18. Rates of computational errors for scoring the SIRS primary scales.

    Science.gov (United States)

    Tyner, Elizabeth A; Frederick, Richard I

    2013-12-01

    We entered item scores for the Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, & Dickens, 1991) into a spreadsheet and compared computed scores with those hand-tallied by examiners. We found that about 35% of the tests had at least 1 scoring error. Of SIRS scale scores tallied by examiners, about 8% were incorrectly summed. When the errors were corrected, only 1 SIRS classification was reclassified in the fourfold scheme used by the SIRS. We note that mistallied scores on psychological tests are common, and we review some strategies for reducing scale score errors on the SIRS. (c) 2013 APA, all rights reserved.

  19. Latency and bit-error-rate evaluation for radio-over-ethernet in optical fiber front-haul networks

    DEFF Research Database (Denmark)

    Sayadi, Mohammadjavad; Rodríguez, Sebastián; Olmos, Juan José Vegas

    2018-01-01

    evaluate this Ethernet packet as a case study for RoE applications. The packet is transmitted through different fiber spans, measuring the BER and latency in each case. The system achieves BER values below the FEC limit and a manageable latency. These results serve as a guideline and proof of concept...

  20. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    International Nuclear Information System (INIS)

    Wirthlin, M J; Harding, A; Takai, H

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM memory will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory.
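
    The per-device figures follow from the per-bit rates by multiplying by the number of susceptible bits; inverting the quoted numbers recovers the implied bit counts (an arithmetic check on the record, not data taken from the paper):

```python
# Per-device rates = per-bit rates x number of susceptible bits; inverting the
# quoted figures recovers the implied bit counts (a consistency check only).
cram_per_bit, cram_per_dev = 1.1e-10, 6.85e-3
bram_per_bit, bram_per_dev = 9.06e-11, 1.49e-3
print(f"implied configuration bits: {cram_per_dev / cram_per_bit:.2e}")  # ~6.2e7
print(f"implied block-RAM bits:     {bram_per_dev / bram_per_bit:.2e}")  # ~1.6e7
```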

  1. Efficient Bit-to-Symbol Likelihood Mappings

    Science.gov (United States)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
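
    The mapping that such an algorithm accelerates can be stated directly: with independently coded bits, a symbol's log-likelihood is the sum of the log-probabilities of its constituent bits. Below is a naive O(m·2^m) reference sketch of that mapping, not the efficient algorithm itself:

```python
import numpy as np

def bits_to_symbol_likelihoods(bit_llrs):
    """Naive bit-to-symbol likelihood mapping for a 2^m-ary constellation.

    bit_llrs: m log-likelihood ratios, LLR_i = log P(b_i=0)/P(b_i=1).
    Returns log-likelihoods for all 2^m symbols, assuming independent bits.
    """
    llrs = np.asarray(bit_llrs, dtype=float)
    m = llrs.size
    log_p0 = -np.logaddexp(0.0, -llrs)  # log P(b_i = 0) = -log(1 + e^{-LLR})
    log_p1 = -np.logaddexp(0.0, llrs)   # log P(b_i = 1) = -log(1 + e^{+LLR})
    loglik = np.zeros(2 ** m)
    for s in range(2 ** m):
        for i in range(m):
            bit = (s >> i) & 1  # bit i of symbol label s
            loglik[s] += log_p1[i] if bit else log_p0[i]
    return loglik
```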

  2. Time-domain effects on error rates of multilevel digital pulse interval modulation systems

    Science.gov (United States)

    Wei, Wei; Zhang, Xiaohui; Rao, Jionghui; Pan, Chen

    2011-10-01

A channel discretization was applied to investigate the time-domain effects imposed by water scattering on the error rates of Multilevel Digital Pulse Interval Modulation (MDPIM) underwater optical wireless communication systems. Taking time-domain dispersion into account, the packet error rates of MDPIM were analyzed. The deterioration of the packet error rate was computed at various link ranges and transmission rates. The theoretical model is in agreement with Monte Carlo simulation.

  3. Simultaneous control of error rates in fMRI data analysis.

    Science.gov (United States)

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-12-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. Copyright © 2015 Elsevier Inc. All rights reserved.
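
    The quantitative intuition behind the proposal, that a vanishing per-comparison Type I rate keeps the global rate small, can be seen from the standard independence approximation FWER = 1 − (1 − α)^m. A small illustration (the voxel count is illustrative, not from the paper):

```python
# Accumulation of Type I error over m independent comparisons:
# when per-voxel alpha is very small, the global rate stays small
# even for a brain-imaging-scale number of voxels.
m = 100_000  # number of voxels (illustrative)
for alpha in (0.05, 1e-6, 1e-8):
    fwer = 1.0 - (1.0 - alpha) ** m
    print(f"alpha={alpha:.0e}  ->  FWER ~= {fwer:.4g}")
```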

  4. 45 CFR 98.102 - Content of Error Rate Reports.

    Science.gov (United States)

    2010-10-01

    ... Discretionary Funds (which includes any funds transferred from the TANF Block Grant), Mandatory and Matching Funds and State Matching and Maintenance-of-Effort (MOE Funds): (1) Percentage of cases with an error... Funds (which includes any funds transferred from the TANF Block Grant), Mandatory and Matching Funds and...

  5. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets zero error (3.4 errors per million events). The five main principles of Six Sigma are defining, measuring, analyzing, improving, and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of the Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor, and the head of the department. Using the Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the pre-analytical, analytical, and post-analytical phases was analysed. Improvement strategies were reviewed in the monthly intradepartmental meetings, and the units with high error rates were placed under closer control. Fifty-six (52.4%) of the 107 recorded errors in total were at the pre-analytical phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates, mainly in the pre-analytical and analytical phases.
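
    For reference, the per-million figures used in Six Sigma work are defects per million opportunities (DPMO), and the associated sigma level follows from the normal quantile under the conventional 1.5-sigma shift. A minimal sketch, not the authors' code:

```python
from scipy.stats import norm

def dpmo(defects, opportunities):
    """Defects per million opportunities."""
    return defects / opportunities * 1_000_000

def sigma_level(dpmo_value):
    """Short-term sigma level under the conventional 1.5-sigma shift."""
    return norm.ppf(1.0 - dpmo_value / 1_000_000) + 1.5

# Sanity check: the Six Sigma target of 3.4 DPMO maps back to ~6.0 sigma.
print(sigma_level(3.4))  # ~= 6.0
```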

  6. Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices

    Science.gov (United States)

    Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-02-01

    Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.

  7. Interest rate behaviour and the Nigerian economy: an error ...

    African Journals Online (AJOL)

A.D. Iortyer, A Imoisi, A.I. Abuh. Abstract. A study of interest rate behaviour and the performance of the Nigerian economy was carried out to examine the impact of interest rate fluctuations, through regulated and deregulated interest rate regimes, on the economy of ... Keywords: Interest rate, credit, regulated, deregulated, performance ...

  8. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications in a way that takes into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  9. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
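
    The probability of undetected error referred to above follows from the code's weight distribution by the standard formula: an error pattern goes undetected on a binary symmetric channel exactly when it equals a nonzero codeword. A minimal sketch:

```python
def undetected_error_probability(weight_distribution, n, eps):
    """P_ud for a binary (n, k) code on a BSC with bit-error rate eps.

    weight_distribution maps weight i -> number of codewords A_i.
    P_ud = sum over i >= 1 of A_i * eps^i * (1 - eps)^(n - i).
    """
    return sum(a * (eps ** i) * ((1.0 - eps) ** (n - i))
               for i, a in weight_distribution.items() if i > 0)

# Example: the (7, 4) Hamming code has A_3 = 7, A_4 = 7, A_7 = 1.
hamming74 = {0: 1, 3: 7, 4: 7, 7: 1}
print(undetected_error_probability(hamming74, n=7, eps=1e-5))
```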

  10. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    Science.gov (United States)

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  11. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    Science.gov (United States)

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  12. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency.

    Science.gov (United States)

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed.

  13. 8 bit computer

    OpenAIRE

    Jankovskij, Robert

    2018-01-01

In this paper the author looks into the structure of an eight-bit computer and its components, covering their design, pros, and cons. An eight-bit computer which can execute basic instructions and arithmetic operations, such as addition and subtraction of eight-bit numbers, is built out of integrated circuits. Data transfers between computer components are monitored and reviewed.

  14. Estimating SEE Error Rates for Complex SoCs With ASERT

    Science.gov (United States)

    Cabanas-Holmen, Manuel; Cannon, Ethan H.; Amort, Tony; Ballast, Jon; Brees, Roger

    2015-08-01

    This paper describes the ASIC Single Event Effects (SEE) Error Rate Tool (ASERT) methodology to estimate the error rates of complex System-on-Chip (SoC) devices. ASERT consists of a top-down analysis to divide the SoC into sensitive cell groups. The SEE error rate is estimated with a bottom-up calculation summing the contribution of all sensitive cell groups, including derating and utilization factors to account for the probability that a cell-level error has a SoC-level impact. The sensitive cell SEE rates are evaluated using test data from specially designed test structures. Standard rate estimation tools are augmented with novel rate estimation approaches for direct proton upsets and for spatial redundancy.
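
    The bottom-up roll-up described in the abstract amounts to a weighted summation over sensitive cell groups. A minimal sketch of that calculation; the grouping and field names are illustrative assumptions, not ASERT's actual interface:

```python
def soc_see_rate(groups):
    """Bottom-up SoC error-rate roll-up in the spirit of ASERT.

    Each group is (cell_rate, n_cells, utilization, derating), where
    cell_rate is the per-cell SEE rate measured on test structures and
    the last two factors account for the probability that a cell-level
    upset has an SoC-level impact.
    """
    return sum(rate * n * util * derate
               for rate, n, util, derate in groups)

# Example with made-up numbers: two cell groups.
print(soc_see_rate([(1e-12, 5_000_000, 0.6, 0.3),
                    (4e-12, 200_000, 0.9, 0.5)]))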

  15. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    Science.gov (United States)

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.

  16. Double symbol error rates for differential detection of narrow-band FM

    Science.gov (United States)

    Simon, M. K.

    1985-01-01

    This paper evaluates the double symbol error rate (average probability of two consecutive symbol errors) in differentially detected narrow-band FM. Numerical results are presented for the special case of MSK with a Gaussian IF receive filter. It is shown that, not unlike similar results previously obtained for the single error probability of such systems, large inaccuracies in predicted performance can occur when intersymbol interference is ignored.

  17. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  18. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  19. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published.

  20. Error, power, and cluster separation rates of pairwise multiple testing procedures.

    Science.gov (United States)

    Shaffer, Juliet Popper; Kowalchuk, Rhonda K; Keselman, H J

    2013-09-01

    In comparing multiple treatments, 2 error rates that have been studied extensively are the familywise and false discovery rates. Different methods are used to control each of these rates. Yet, it is rare to find studies that compare the same methods on both of these rates, and also on the per-family error rate, the expected number of false rejections. Although the per-family error rate and the familywise error rate are similar in most applications when the latter is controlled at a conventional low level (e.g., .05), the 2 measures can diverge considerably with methods that control the false discovery rate at that same level. Furthermore, we shall consider both rejections of true hypotheses (Type I errors) and rejections of false hypotheses where the observed outcomes are in the incorrect direction (Type III errors). We point out that power estimates based on the number of correct rejections do not consider the pattern of those rejections, which is important in interpreting the total outcome. The present study introduces measures of interpretability based on the pattern of separation of treatments into nonoverlapping sets and compares methods on these measures. In general, range-based (configural) methods are more likely to obtain interpretable patterns based on treatment separation than individual p-value-based measures. Recommendations for practice based on these results are given in the article. Although the article is complex, these recommendations can be understood without the necessity for detailed perusal of the supporting material.

  1. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  2. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)

  3. Polymerase specific error rates and profiles identified by single molecule sequencing.

    Science.gov (United States)

    Hestand, Matthew S; Van Houdt, Jeroen; Cristofoli, Francesca; Vermeesch, Joris R

    2016-01-01

    DNA polymerases have an innate error rate which is polymerase and DNA context specific. Historically the mutational rate and profiles have been measured using a variety of methods, each with their own technical limitations. Here we used the unique properties of single molecule sequencing to evaluate the mutational rate and profiles of six DNA polymerases at the sequence level. In addition to accurately determining mutations in double strands, single molecule sequencing also captures direction specific transversions and transitions through the analysis of heteroduplexes. Not only did the error rates vary, but also the direction specific transitions differed among polymerases. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Transmission Transparency and Potential Convergence of Optical Network Solutions at the Physical Layer for Bit Rates from 2.5 Gbps to 256 Gbps

    Directory of Open Access Journals (Sweden)

    Rajdi Agalliu

    2017-01-01

In this paper, we investigate the optical network recommendations GPON and XG-PON with triple-play services in terms of physical reach, number of subscribers, transceiver design, modulation format and implementation cost. Despite trends to increase the bit rate from 2.5 Gbps to 10 Gbps and beyond, TDM-PONs cannot cope with the bandwidth requirements of future networks. TDM and WDM techniques can be combined, resulting in improved scalability. Longer physical reach can be achieved by deploying active network elements within the transmission path. We investigate these options by considering their potential coexistence at the physical layer. Subsequently, we analyse the upgrade of optical channels to 100 Gbps and 256 Gbps by using advanced modulation formats, which combine polarization division multiplexing with coherent detection and digital signal processing. We show that the PDM-QPSK format is suitable for 100 Gbps systems and that PDM-16QAM is more beneficial at 256 Gbps. Simulations are performed in the OptSim software environment.
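
    The channel rates quoted above follow directly from symbol rate × bits per symbol × number of polarizations. A small worked sketch; the baud rates chosen here are illustrative and not taken from the paper:

```python
from math import log2

def channel_bit_rate(baud, constellation_size, polarizations=2):
    """Raw line rate of a polarization-multiplexed coherent channel."""
    return baud * log2(constellation_size) * polarizations

# Illustrative configurations:
print(channel_bit_rate(28e9, 4))   # 28 Gbaud PDM-QPSK  -> 112e9 b/s (100G class)
print(channel_bit_rate(32e9, 16))  # 32 Gbaud PDM-16QAM -> 256e9 b/s
```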

  5. Stinger Enhanced Drill Bits For EGS

    Energy Technology Data Exchange (ETDEWEB)

    Durrand, Christopher J. [Novatek International, Inc., Provo, UT (United States); Skeem, Marcus R. [Novatek International, Inc., Provo, UT (United States); Crockett, Ron B. [Novatek International, Inc., Provo, UT (United States); Hall, David R. [Novatek International, Inc., Provo, UT (United States)

    2013-04-29

The project objectives were to design, engineer, test, and commercialize a drill bit suitable for drilling in the hard rock and high temperature environments (10,000 meters) likely to be encountered in drilling enhanced geothermal wells. The goal is to provide a drill bit that can aid in increasing the penetration rate threefold over conventional drilling. Novatek has sought to leverage its polycrystalline diamond technology and a new conical cutter shape, known as the Stinger®, for this purpose. Novatek has developed a fixed-bladed bit, known as the JackBit®, populated with both shear cutters and Stingers, that is currently being tested by major drilling companies for geothermal and oil and gas applications. The JackBit concept comprises a fixed-bladed bit with a center indenter, referred to as the Jack. The JackBit has been extensively tested in the lab and in the field. The JackBit has been transferred to a major bit manufacturer and oil service company. Except for the attached published reports, all other information is confidential.

  6. Near-miss transcription errors: a comparison of reporting rates between a novel error-reporting mechanism and a current formal reporting system.

    Science.gov (United States)

    South, David A; Skelley, Jessica W; Dang, Mary; Woolley, Thomas

    2015-02-01

The medication use process comprises several steps. In institutions without full implementation of computerized prescriber order entry (CPOE), transcription is a critical step in this process. As focus is increasingly placed on identifying near-miss errors, this study aimed to compare near-miss transcription error (NMTE) reporting rates between an institution's formal reporting system and an NMTE reporting mechanism. Two NMTE reporting mechanisms were assessed for 3 months. These mechanisms included the institution's formal error-reporting system and a specific transcription error queue within the institution's order imaging software. Date, patient-care unit, and type of transcription error were recorded for each order image in the transcription error queue and for each transcription error reported formally. Following data collection, reporting rates for both systems were compared. Data collection spanned 92 days and an estimated 460,000 medication orders. In total, 1,563 NMTEs were reported using the transcription error queue and 12 errors were reported via the formal reporting mechanism. Of the 1,563 errors identified via the transcription error queue, 325 (20.79%) were of an unknown type. Reporting rates (with unknown errors removed) were 0.27% and 0.0026% for the novel system and the formal reporting system, respectively; significantly more NMTEs were reported using the novel system than with the formal reporting system.

  7. Image Security with Watermarking Using a Development of the Least Significant Bit Algorithm

    Directory of Open Access Journals (Sweden)

    Kurniawan Kurniawan

    2015-01-01

Image security is a process for protecting digital images. One method of securing a digital image is watermarking using the Least Significant Bit (LSB) algorithm. The main concept of image security with the LSB algorithm is to replace bit values of the image at specific locations so that a pattern is created; the pattern resulting from replacing the bit values is the watermark. Embedding a watermark in a digital image with the plain LSB algorithm is conceptually simple, so the embedded information is easily lost under attacks such as noise or compression. A modification, a development of the LSB algorithm, is therefore needed to reduce the distortion of the watermark information under those attacks. This research is divided into six processes: color extraction of the cover image, busy-area search, watermark embedding, measuring the accuracy of the embedding, watermark extraction, and measuring the accuracy of the extraction. Color extraction obtains the blue color component of the cover image. The watermark information is embedded in a busy area, found by searching for the region of the cover image with the greatest number of elements. The watermark image is then embedded into the cover image to produce the watermarked image using several developments of the LSB algorithm, and the accuracy of the embedding is assessed by computing the Peak Signal to Noise Ratio value. Before the watermarked image is extracted, it is tested by adding noise and by compressing it into JPG format. The accuracy of the extraction result is assessed by computing the Bit Error Rate value.
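
    As a point of reference for the embedding step, here is a minimal numpy sketch of plain LSB embedding and extraction on a single channel. It omits the paper's busy-area search and PSNR/BER evaluation, and the function names are hypothetical:

```python
import numpy as np

def embed_lsb(cover, bits):
    """Embed a 0/1 bit sequence into the least significant bits of an
    image channel (e.g., the blue component, as in the scheme above)."""
    flat = cover.flatten()  # flatten() returns a copy, cover is untouched
    bits = np.asarray(list(bits), dtype=np.uint8)
    if bits.size > flat.size:
        raise ValueError("watermark does not fit in the cover image")
    # Clear each target pixel's LSB, then OR in the watermark bit.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits embedded by embed_lsb."""
    return stego.flatten()[:n_bits] & 1
```

    Comparing the extracted bits against the original watermark after a noise or compression attack gives the Bit Error Rate the abstract refers to.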

  8. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  9. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    Science.gov (United States)

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  10. A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems

    Directory of Open Access Journals (Sweden)

    Dong Shi-Wei

    2007-01-01

A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistaged bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.

  11. A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems

    Science.gov (United States)

    Zhu, Li-Ping; Yao, Yan; Zhou, Shi-Dong; Dong, Shi-Wei

    2007-12-01

A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistaged bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
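
    For context, the conventional greedy bit-loading baseline that both records above compare against adds one bit at a time to the subchannel with the smallest incremental power cost. A minimal sketch, with the SNR gap folded into the gain-to-noise ratios for simplicity:

```python
import heapq

def greedy_bit_loading(gain_to_noise, target_bits, max_bits_per_tone=15):
    """Conventional greedy bit loading for DMT.

    Supporting b bits on a tone with gain-to-noise ratio g costs
    P(b) = (2^b - 1) / g, so the incremental cost of one more bit is
    2^b / g. Repeatedly load the cheapest increment until target_bits.
    """
    bits = [0] * len(gain_to_noise)
    heap = [(1.0 / g, i) for i, g in enumerate(gain_to_noise)]  # cost of bit 1
    heapq.heapify(heap)
    total = 0
    while total < target_bits and heap:
        _, i = heapq.heappop(heap)
        bits[i] += 1
        total += 1
        if bits[i] < max_bits_per_tone:
            heapq.heappush(heap, ((2 ** bits[i]) / gain_to_noise[i], i))
    return bits

print(greedy_bit_loading([10.0, 40.0, 2.5, 100.0], target_bits=12))
```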

  12. Error rates in buccal-dental microwear quantification using scanning electron microscopy.

    Science.gov (United States)

    Galbany, J; Martínez, L M; López-Amor, H M; Espurz, V; Hiraldo, O; Romero, A; De Juan, J; Pérez-Pérez, A

    2005-01-01

    Dental microwear, usually analyzed using scanning electron microscopy (SEM) techniques, is a good indicator of the abrasive potential of past human population diets. Scanning electron microscopy secondary electrons provide excellent images of dental enamel relief for characterizing striation density, average length, and orientation. However, methodological standardization is required for interobserver comparisons since semiautomatic counting procedures are still used for micrograph characterization. The analysis of normally distributed variables allows the characterization of small interpopulation differences. However, the interobserver error rates associated with SEM experience and the degree of expertise in measuring striations are critical to population dietary interpretation. The interobserver comparisons made here clearly indicate that the precision of SEM buccal microwear measurements depends heavily on variable definition and the researcher's expertise. Moreover, error rates are not the only concern for dental microwear research. Low error rates do not guarantee that all researchers are measuring the same magnitudes of the variables considered. The results obtained show that researchers tend to maintain high intrapopulation homogeneity and low measurement error rates, whereas significant interobserver differences appear. Such differences are due to a differential interpretation of SEM microwear features and variable definitions that require detailed and precise agreement among researchers. The substitution of semiautomatic with fully automated procedures will completely avoid interobserver error rate differences.

  13. Logical error rates and resource overheads of non-transversal, magic-less gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

A non-transversal gate is required for a quantum error correcting code to perform universal computation. Gate teleportation using magic states is one way to perform the necessary operation, albeit with large overhead. Several constructions of logical gates have been proposed without magic states, but little work has been done to evaluate the logical error rates and resource overheads of these gates and compare them to magic states. In this work, we calculate the logical error rates of controlled-controlled-Z (CCZ) gates on the 5-qubit and 7-qubit codes, implemented with the recently proposed pieceably fault-tolerant construction, which uses neither magic states nor additional ancilla qubits other than those used for error correction. Alongside transversal gates on these codes, CCZ is enough for universal computation. We also calculate the error rate of performing CCZ by state injection. Despite being much more costly in terms of space and time, state injection is no less error-prone than pieceable constructions. Our result also serves as motivation to investigate choices of universal gate sets other than the conventional one, Clifford gates + T gate.

  14. Optimal alpha reduces error rates in gene expression studies: a meta-analysis approach.

    Science.gov (United States)

    Mudge, J F; Martyniuk, C J; Houlahan, J E

    2017-06-21

    Transcriptomic approaches (microarray and RNA-seq) have been a tremendous advance for molecular science in all disciplines, but they have made interpretation of hypothesis testing more difficult because of the large number of comparisons that are done within an experiment. The result has been a proliferation of techniques aimed at solving the multiple comparisons problem, techniques that have focused primarily on minimizing Type I error with little or no concern about concomitant increases in Type II errors. We have previously proposed a novel approach for setting statistical thresholds with applications for high throughput omics-data, optimal α, which minimizes the probability of making either error (i.e. Type I or II) and eliminates the need for post-hoc adjustments. A meta-analysis of 242 microarray studies extracted from the peer-reviewed literature found that current practices for setting statistical thresholds led to very high Type II error rates. Further, we demonstrate that applying the optimal α approach results in error rates as low or lower than error rates obtained when using (i) no post-hoc adjustment, (ii) a Bonferroni adjustment and (iii) a false discovery rate (FDR) adjustment which is widely used in transcriptome studies. We conclude that optimal α can reduce error rates associated with transcripts in both microarray and RNA-seq experiments, but point out that improved statistical techniques alone cannot solve the problems associated with high throughput datasets - these approaches need to be coupled with improved experimental design that considers larger sample sizes and/or greater study replication.
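
    The core of the optimal-alpha idea, choosing the threshold that minimizes a combination of Type I and Type II error rates, can be sketched with a simple grid search over alpha for a one-sided two-sample z-test. This is an illustration of the concept under stated assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.stats import norm

def optimal_alpha(effect_size, n_per_group, weights=(1.0, 1.0)):
    """Grid-search the alpha minimizing a weighted sum of Type I (alpha)
    and Type II (beta) error rates for a one-sided two-sample z-test
    with standardized effect size d and n observations per group."""
    alphas = np.linspace(1e-6, 0.5, 50_000)
    delta = effect_size * np.sqrt(n_per_group / 2.0)  # noncentrality
    beta = norm.cdf(norm.ppf(1.0 - alphas) - delta)   # Type II error rate
    w1, w2 = weights
    i = np.argmin(w1 * alphas + w2 * beta)
    return alphas[i], beta[i]

# Example: moderate effect (d = 0.8) with 10 samples per group.
alpha_opt, beta_opt = optimal_alpha(0.8, 10)
print(f"optimal alpha = {alpha_opt:.4f}, beta = {beta_opt:.4f}")
```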

  15. Changes realized from extended bit-depth and metal artifact reduction in CT

    Energy Technology Data Exchange (ETDEWEB)

    Glide-Hurst, C.; Chen, D.; Zhong, H.; Chetty, I. J. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 (United States)

    2013-06-15

Purpose: High-Z material in computed tomography (CT) yields metal artifacts that degrade image quality and may cause substantial errors in dose calculation. This study couples a metal artifact reduction (MAR) algorithm with enhanced 16-bit depth (vs standard 12-bit) to quantify potential gains in image quality and dosimetry. Methods: Extended CT to electron density (CT-ED) curves were derived from a tissue characterization phantom with titanium and stainless steel inserts scanned at 90-140 kVp for 12- and 16-bit reconstructions. MAR was applied to sinogram data (Brilliance BigBore CT scanner, Philips Healthcare, v.3.5). Monte Carlo simulation (MC-SIM) was performed on a simulated double hip prostheses case (Cerrobend rods embedded in a pelvic phantom) using BEAMnrc/Dosxyz (4 000 000 000 histories, 6X, 10 × 10 cm² beam traversing a Cerrobend rod). A phantom study was also conducted using a stainless steel rod embedded in solid water, and dosimetric verification was performed with Gafchromic film analysis (absolute difference and gamma analysis, 2% dose and 2 mm distance to agreement) for plans calculated with the Anisotropic Analytic Algorithm (AAA, Eclipse v11.0) to elucidate changes between 12- and 16-bit data. Three patients (bony metastases to the femur and humerus, and a prostate cancer case) with metal implants were reconstructed using both bit depths, with dose calculated using AAA and the derived CT-ED curves. Planar dose distributions were assessed via matrix analyses and using gamma criteria of 2%/2 mm. Results: For 12-bit images, CT numbers for titanium and stainless steel saturated at 3071 Hounsfield units (HU), whereas for 16-bit depth, mean CT numbers were much larger (e.g., titanium and stainless steel yielded HU values of 8066.5 ± 56.6 and 13 588.5 ± 198.8 for 16-bit uncorrected scans at 120 kVp, respectively). MC-SIM was well matched between 12- and 16-bit images except downstream of the Cerrobend rod, where the 16-bit dose was ≈6

  16. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.

  17. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    Directory of Open Access Journals (Sweden)

    Yao Wang

    2017-05-01

The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with a 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns in addition to the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with a Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with a low-density parity-check (LDPC) decoder. Then, using the estimated bit information from the main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to the 2D fixed equalizer.
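
    For intuition, 2D equalization across tracks amounts to 2D FIR filtering of the multi-track readback. A minimal fixed-coefficient sketch is below; the variable-equalizer scheme above would instead select a stored tap matrix per island based on the estimated ITI pattern. The layout (main track in the centre row) and function names are assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def equalize_2d(readback, coeffs):
    """Apply a 2D equalizer across tracks.

    readback: (n_tracks, n_samples) sampled readback for the main track
    and its neighbours; coeffs: (n_tracks, n_taps) tap matrix.
    Returns the equalized 1D sequence for the main (centre) track.
    """
    # Flip the taps so convolve2d performs correlation (FIR filtering).
    full = convolve2d(readback, coeffs[::-1, ::-1], mode="same")
    centre = readback.shape[0] // 2
    return full[centre]
```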

  18. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon the wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  19. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting.

    Science.gov (United States)

    Strahan, Rodney H; Schneider-Kolsky, Michal E

    2010-10-01

    Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Fifty MRI reports generated by VR and 50 finalized MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Forty-two % and 30% of the finalized VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR. © 2010 The Authors. Journal of Medical Imaging and Radiation Oncology © 2010 The Royal Australian and New Zealand College of Radiologists.

  20. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting

    International Nuclear Information System (INIS)

    Strahan, Rodney H.; Schneider-Kolsky, Michal E.

    2010-01-01

Purpose: Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Methods: Fifty MRI reports generated by VR and 50 finalised MRI reports generated by the transcriptionist, of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Results: Forty-two % and 30% of the finalised VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Conclusion: Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR.

  1. Bit corruption correlation and autocorrelation in a stochastic binary nano-bit system

    Science.gov (United States)

    Sa-nguansin, Suchittra

    2014-10-01

The corruption process of a binary nano-bit model resulting from an interaction with N stochastically-independent Brownian agents (BAs) is studied with the help of Monte-Carlo simulations and analytic continuum theory to investigate the data corruption process through the measurement of the spatial two-point correlation and the autocorrelation of bit corruption at the origin. By taking into account a more realistic correlation between bits, this work will contribute to the understanding of soft errors, i.e., the corruption of data stored in nano-scale devices.

  2. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed.

  3. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference

    KAUST Repository

    Soury, Hamza

    2017-07-31

This paper develops a mathematical paradigm to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). Particularly, we study the dominant intra-cell interferer problem that appears between HD users scheduled on the same FD channel. The distribution of the dominant interference is first characterized via its distribution function, which is derived in closed form. Assuming Nakagami-m fading, the probability of error for different modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function of the signal-to-interference ratio when compared to an idealized HD, interference- and noise-free BS operation.
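
    As a sanity-check companion to such closed-form results, symbol error rates under non-Gaussian interference are often verified by Monte Carlo simulation. The sketch below is a simplified stand-in assuming BPSK, additive Gaussian noise, no fading, and a single Laplacian interferer; it does not reproduce the paper's EGK analysis:

```python
import numpy as np

def bpsk_ser_laplacian(snr_db, sir_db, n=1_000_000, rng=None):
    """Monte Carlo symbol error rate for unit-energy BPSK corrupted by
    Gaussian noise plus one Laplacian-distributed interferer."""
    rng = rng or np.random.default_rng(0)
    noise_var = 10 ** (-snr_db / 10)
    interf_var = 10 ** (-sir_db / 10)
    symbols = rng.choice([-1.0, 1.0], size=n)
    noise = rng.normal(0.0, np.sqrt(noise_var), size=n)
    # A Laplace distribution with variance v has scale b = sqrt(v / 2).
    interference = rng.laplace(0.0, np.sqrt(interf_var / 2.0), size=n)
    decisions = np.sign(symbols + noise + interference)
    return np.mean(decisions != symbols)

print(bpsk_ser_laplacian(snr_db=8.0, sir_db=10.0))
```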

  4. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

Background: As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods: This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results: The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion: The low sensitivity of parameter

  5. 12 h shifts and rates of error among nurses: A systematic review.

    Science.gov (United States)

    Clendon, Jill; Gibbons, Veronique

    2015-07-01

To determine the effect of working 12 h or more on a single shift in an acute care hospital setting compared with working less than 12 h on rates of error among nurses. Systematic review. A three-step search strategy was utilised. An initial search of Cochrane, the Joanna Briggs Institute (JBI), MEDLINE and CINAHL was undertaken. A second search using all identified keywords and index terms was then undertaken across all included databases (Embase, Current Contents, Proquest Nursing and Allied Health Source, Proquest Theses and Dissertations, Dissertation Abstracts International). Thirdly, reference lists of identified reports and articles were searched for additional studies. Studies published in English before August 2014 were included. Following review of the title and abstract of 5429 publications, 26 studies were identified as meeting the inclusion criteria and selected for full retrieval and assessment for methodological quality. Of these, 13 were of sufficient quality to be included for review. Six studies reported higher rates of error for nurses working greater than 12 h on a single shift, four reported higher rates of error on shifts of up to 8 h, and three reported no difference. The six studies reporting significant rises in error rates among nurses working 12 h or more on a single shift comprised 89% of the total sample size (N = 60,780 of the total N = 67,967). The risk of making an error appears higher among nurses working 12 h or longer on a single shift in acute care hospitals. Hospitals and units currently operating 12 h shift systems should review this scheduling practice due to the potential negative impact on patient outcomes. Further research is required to consider factors that may mitigate the risk of error where 12 h shifts are scheduled and this cannot be changed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  7. Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.

    Science.gov (United States)

    Goodrich, Gregory L.; And Others

    1979-01-01

    A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)

  8. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection...

  9. Error-rate prediction for programmable circuits: methodology, tools and studied cases

    Science.gov (United States)

    Velazco, Raoul

    2013-05-01

    This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault-injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448 executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measures issued from radiation ground testing performed at the CYCLONE cyclotron of the Heavy Ion Facility (HIF) in Louvain-la-Neuve (Belgium).
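
    The arithmetic behind this prediction strategy can be sketched in a few lines. All numbers below are illustrative assumptions, not values from the paper: the in-flight application error rate is approximated as the product of the statically measured SEU cross-section, the environment's particle flux, and the fraction of injected faults that actually corrupt the application.

```python
# Sketch of the prediction arithmetic described above (all numbers are
# illustrative assumptions, not the paper's data): the in-flight application
# error rate is approximated as the product of the static SEU cross-section
# from ground testing, the environment's particle flux, and the fraction of
# injected faults that corrupt the application's output.
static_cross_section_cm2 = 1e-8    # per device, from radiation ground testing (assumed)
flux_particles_per_cm2_day = 1e5   # target orbital environment (assumed)
p_failure_given_upset = 0.12       # from the off-beam fault-injection campaign (assumed)

seu_rate_per_day = static_cross_section_cm2 * flux_particles_per_cm2_day
app_error_rate_per_day = seu_rate_per_day * p_failure_given_upset
print(f"predicted application error rate: {app_error_rate_per_day:.2e} errors/day")
```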

  10. Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for Sub-130 nm Technologies

    Science.gov (United States)

    Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Thomas M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.

    2010-01-01

    We use ray-tracing software to model various levels of spacecraft shielding complexity and energy-deposition pulse-height analysis to study how shielding affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.

  11. Worst-case residual clipping noise power model for bit loading in LACO-OFDM

    KAUST Repository

    Zhang, Zhenyu

    2018-03-19

    Layered ACO-OFDM enjoys better spectral efficiency than ACO-OFDM, but its performance is challenged by residual clipping noise (RCN). In this paper, the power of RCN of LACO-OFDM is analyzed and modeled. As RCN is data-dependent, the worst-case situation is considered. A worst-case indicator is defined for relating the power of RCN and the power of noise at the receiver, wherein a linear relation is shown to be a practical approximation. An LACO-OFDM bit-loading experiment is performed to examine the proposed RCN power model for data rates of 6 to 7 Gbps. The experiment's results show that accounting for RCN has two advantages. First, it leads to better bit loading and achieves up to 59% lower overall bit-error rate (BER) than when the RCN is ignored. Second, it balances the BER across layers, which is a desired property from a channel coding perspective.
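
    The practical consequence of such a linear RCN model is that bit loading can treat RCN as a scaling of the receiver noise floor. The sketch below illustrates the idea with the standard SNR-gap approximation; the scaling factor kappa, the gap value and all signal parameters are illustrative assumptions, not the paper's.

```python
import numpy as np

# Minimal sketch: fold a residual-clipping-noise (RCN) term into bit loading.
# Following the abstract's linear model, RCN power is taken proportional to
# the receiver noise power (factor kappa), so the effective noise is scaled
# before computing per-subcarrier SNRs. kappa, the SNR gap and all values
# here are illustrative assumptions.
def bits_per_subcarrier(gain2, tx_power, noise_power, kappa, gap_db=9.8):
    gap = 10 ** (gap_db / 10)                 # standard SNR-gap approximation
    eff_noise = noise_power * (1.0 + kappa)   # receiver noise plus modeled RCN
    snr = gain2 * tx_power / eff_noise
    return np.floor(np.log2(1.0 + snr / gap)).astype(int)

rng = np.random.default_rng(0)
gains2 = np.abs(rng.normal(size=16) + 1j * rng.normal(size=16)) ** 2
print(bits_per_subcarrier(gains2, tx_power=1.0, noise_power=0.05, kappa=0.3))
```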

  12. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results of algebraic-geometric codes' bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  13. Point-of-care blood glucose measurement errors overestimate hypoglycaemia rates in critically ill patients.

    Science.gov (United States)

    Nya-Ngatchou, Jean-Jacques; Corl, Dawn; Onstad, Susan; Yin, Tom; Tylee, Tracy; Suhr, Louise; Thompson, Rachel E; Wisse, Brent E

    2015-02-01

    Hypoglycaemia is associated with morbidity and mortality in critically ill patients, and many hospitals have programmes to minimize hypoglycaemia rates. Recent studies have established the hypoglycaemic patient-day as a key metric and have published benchmark inpatient hypoglycaemia rates on the basis of point-of-care blood glucose data even though these values are prone to measurement errors. A retrospective, cohort study including all patients admitted to Harborview Medical Center Intensive Care Units (ICUs) during 2010 and 2011 was conducted to evaluate a quality improvement programme to reduce inappropriate documentation of point-of-care blood glucose measurement errors. Laboratory Medicine point-of-care blood glucose data and patient charts were reviewed to evaluate all episodes of hypoglycaemia. A quality improvement intervention decreased measurement errors from 31% of hypoglycaemic episodes ... Measurement errors likely overestimate ICU hypoglycaemia rates and can be reduced by a quality improvement effort. The currently used hypoglycaemic patient-day metric does not evaluate recurrent or prolonged events that may be more likely to cause patient harm. The monitored patient-day as currently defined may not be the optimal denominator to determine inpatient hypoglycaemic risk. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    Science.gov (United States)

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  15. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    Full Text Available A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm^2), while the MTTF is approximately 110.7 h.

  16. Practical Relativistic Bit Commitment.

    Science.gov (United States)

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Wehner, S; Zbinden, H

    2015-07-17

    Bit commitment is a fundamental cryptographic primitive in which Alice wishes to commit a secret bit to Bob. Perfectly secure bit commitment between two mistrustful parties is impossible through an asynchronous exchange of quantum information. Perfect security is, however, possible when Alice and Bob each split into several agents exchanging classical information at times and locations suitably chosen to satisfy specific relativistic constraints. In this Letter we first revisit a previously proposed scheme [C. Crépeau et al., Lect. Notes Comput. Sci. 7073, 407 (2011)] that realizes bit commitment using only classical communication. We prove that the protocol is secure against quantum adversaries for a duration limited by the light-speed communication time between the locations of the agents. We then propose a novel multiround scheme based on finite-field arithmetic that extends the commitment time beyond this limit, and we prove its security against classical attacks. Finally, we present an implementation of these protocols using dedicated hardware and we demonstrate a 2 ms-long bit commitment over a distance of 131 km. By positioning the agents on antipodal points on the surface of Earth, the commitment time could possibly be extended to 212 ms.

  17. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
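
    The singleton-filtering idea translates directly into a modified Watterson estimator: under the infinite-sites model the expected number of variable sites in frequency class i is θ/i, so dropping the singleton class (i = 1) leaves an expectation of θ(a_n − 1), where a_n is the usual harmonic sum over i = 1, ..., n − 1. A minimal sketch of such an estimator (an illustration of the principle, not the authors' code):

```python
# Watterson-style estimator of theta that ignores singletons, illustrating
# the error-avoidance idea above (an Achaz-type estimator; this sketch is
# not the authors' code and assumes aligned, equal-length sequences under
# an infinite-sites model).
def theta_w_no_singletons(alignment):
    n = len(alignment)
    a_n = sum(1.0 / i for i in range(1, n))   # harmonic sum 1 + 1/2 + ... + 1/(n-1)
    s_shared = 0
    for column in zip(*alignment):
        counts = [column.count(b) for b in set(column)]
        if len(counts) > 1 and min(counts) >= 2:
            s_shared += 1                     # shared polymorphism: every base seen >= twice
    return s_shared / (a_n - 1.0)             # singleton class (i = 1) removed from a_n

seqs = ["ACGTACGT", "ACGTACGA", "ACCTACGA", "ACCTACGT"]
print(theta_w_no_singletons(seqs))            # 2 shared sites / 0.833... ~= 2.4
```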

  18. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods.

  19. Thermal-induced rate error of a fiber-optic gyroscope considering various defined factors

    Science.gov (United States)

    Zhang, Zhuo; Yu, Fei; Sun, Qian

    2017-09-01

    As a high-precision angular sensor, the interferometric fiber-optic gyroscope (FOG) usually shows high sensitivity to disturbances of the environmental temperature. Studying the factors that influence the thermal-induced rate error of an FOG is essential to enhancing its precision and environmental suitability. This paper starts from the factors neglected in past research and derives in detail the thermal-induced error model of a fiber coil, including the equivalent radius, the asymmetry of the fiber tails, cross-layer leaps, and other factors, and then expresses this error as the inner product of a penalty-factor matrix and a temperature-field matrix. Then, the mathematical model and the three-dimensional temperature-field model of a fiber coil with the quadrupolar winding pattern are built, including the optic core, coating, glue, packing paper, and accurate temperature boundary conditions. The penalty-factor matrix and the temperature-field matrix can be obtained from these models. Finally, the improvement offered by this revised thermal-induced rate error model is verified through simulation and experimental comparison.

  20. Symbol error rate performance evaluation of the LM37 multimegabit telemetry modulator-demodulator unit

    Science.gov (United States)

    Malek, H.

    1981-01-01

    The LM37 multimegabit telemetry modulator-demodulator unit was tested for evaluation of its symbol error rate (SER) performance. Using an automated test setup, the SER tests were carried out at various symbol rates and signal-to-noise ratios (SNR), ranging from +10 to -10 dB. With the aid of a specially designed error detector and a stabilized signal and noise summation unit, measurement of the SER at low SNR was possible. The results of the tests show that at symbol rates below 20 megasymbols per second (MS/s) and input SNR above -6 dB, the SER performance of the modem is within the specified 0.65 to 1.5 dB of the theoretical error curve. At symbol rates above 20 MS/s, the specification is met at SNRs down to -2 dB. The results of the SER tests are presented with the description of the test setup and the measurement procedure.

  1. The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded

    DEFF Research Database (Denmark)

    Hansen, Merete Kjær; Kulahci, Murat

    the type I error rate is greater than the nominal α of 0.05. Closed-form expressions based on scaled F-distributions using the Welch-Satterthwaite approximation are provided to show how the type I error rate is affected. With this study we hope to motivate researchers to be more precise regarding..., and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51 and for all combinations

  2. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derived an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered, with imperfect channel phase recovery. The results presented demonstrate the system performance under very realistic Nakagami-m fading and additive white Gaussian noise channel conditions. On the other hand, the accuracy of the obtained results is verified by running the simulation under a confidence interval reliability of 95%. We see that as the number of simulation runs N increases, the simulated error rate becomes closer to the actual one and the confidence interval difference reduces. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
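
    The verification logic described, running a Monte Carlo error-rate simulation and checking that the 95% confidence interval tightens with the number of runs, is easy to reproduce for a plain BPSK-over-AWGN baseline. The sketch below omits the paper's Nakagami-m fading and phase-recovery error and uses the standard normal-approximation interval:

```python
import numpy as np

# Monte Carlo BER of BPSK over AWGN with a normal-approximation 95%
# confidence interval; a simplified baseline of the verification procedure
# (the paper's Nakagami-m fading and imperfect phase recovery are omitted).
def simulate_ber(ebno_db, n_bits=1_000_000, seed=1):
    rng = np.random.default_rng(seed)
    ebno = 10 ** (ebno_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                        # BPSK: 0 -> +1, 1 -> -1
    received = symbols + rng.normal(0.0, np.sqrt(1 / (2 * ebno)), n_bits)
    errors = np.count_nonzero((received < 0) != bits.astype(bool))
    p = errors / n_bits
    half = 1.96 * np.sqrt(p * (1 - p) / n_bits)   # 95% CI half-width
    return p, (p - half, p + half)

ber, (lo, hi) = simulate_ber(6.0)
print(f"BER ~ {ber:.2e}, 95% CI [{lo:.2e}, {hi:.2e}]")
```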

  3. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  4. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
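
    The byte-plane decomposition the authors propose is simple to express; a minimal numpy sketch of the mapping and its lossless inverse follows (the actual 8-bit codec calls are omitted):

```python
import numpy as np

# The proposed mapping in miniature: split a 16-bit image into 8-bit MSB and
# LSB planes for two 8-bit codecs, then merge back. The actual codec calls
# (H.264/AVC, JPEG, ...) are omitted; the mapping itself is lossless.
def split_planes(img16):
    msb = (img16 >> 8).astype(np.uint8)    # most significant bytes
    lsb = (img16 & 0xFF).astype(np.uint8)  # least significant bytes
    return msb, lsb

def merge_planes(msb, lsb):
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

img = np.random.default_rng(0).integers(0, 2**16, size=(4, 4), dtype=np.uint16)
msb, lsb = split_planes(img)
assert np.array_equal(merge_planes(msb, lsb), img)  # round trip recovers the image
```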

  5. Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm

    Directory of Open Access Journals (Sweden)

    F Hermens

    2014-08-01

    Full Text Available In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly examined performance by measuring offset discrimination thresholds as a measure of performance, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task, similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance, but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.

  6. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Gui Bo

    2008-01-01

    Full Text Available Abstract We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.
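
    A classic greedy (Hughes-Hartogs-style) allocator illustrates the "minimize total power under a target rate" formulation used here. The sketch below is a generic single-link version under the SNR-gap approximation, not the authors' relay-aware or subchannel-permutation algorithm, and all parameter values are assumptions:

```python
import numpy as np

# Greedy margin-adaptive bit loading (Hughes-Hartogs style): repeatedly give
# the next bit to the subchannel where it costs the least extra power until
# the target rate is reached. With the SNR-gap approximation, carrying b bits
# on subchannel k costs P_k(b) = gap * (2**b - 1) / g_k, where g_k is the
# channel-gain-to-noise ratio. All values here are illustrative assumptions.
def greedy_bit_loading(g, target_bits, gap_db=9.8):
    gap = 10 ** (gap_db / 10)
    bits = np.zeros(g.size, dtype=int)
    total_power = 0.0
    for _ in range(target_bits):
        delta = gap * 2.0 ** bits / g      # incremental power for one more bit
        k = int(np.argmin(delta))
        bits[k] += 1
        total_power += delta[k]
    return bits, total_power

g = np.array([3.1, 0.4, 1.7, 2.2, 0.9])
bits, power = greedy_bit_loading(g, target_bits=12)
print(bits, f"total power = {power:.2f}")
```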

  7. High performance 14-bit pipelined redundant signed digit ADC

    Science.gov (United States)

    Narula, Swina; Pandey, Sujata

    2016-03-01

    A novel architecture of a pipelined redundant-signed-digit analog to digital converter (RSD-ADC) is presented featuring a high signal to noise ratio (SNR), spurious free dynamic range (SFDR) and signal to noise plus distortion (SNDR) with efficient background correction logic. The proposed ADC architecture shows high accuracy with a high speed circuit and efficient utilization of the hardware. This paper demonstrates the functionality of the digital correction logic of a 14-bit pipelined ADC with 1.5 bits per stage. This prototype of the ADC architecture accounts for capacitor mismatch, comparator offset and finite Op-Amp gain error in the MDAC (residue amplification circuit) stages. With the proposed architecture of the ADC, the SNDR obtained is 85.89 dB, the SNR is 85.9 dB and the SFDR obtained is 102.8 dB at a sample rate of 100 MHz. This novel architecture of digital correction logic is transparent to the overall system, which is demonstrated using the 14-bit pipelined ADC. After a latency of 14 clocks, the digital output is available at every clock pulse. To describe the circuit behavior of the ADC, VHDL and MATLAB programs are used. The proposed architecture is also capable of reducing the digital hardware, the silicon area and the complexity of the design.
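
    The redundancy that makes 1.5-bit stages tolerant of comparator offset is easiest to see behaviourally: each stage resolves a digit d in {-1, 0, +1}, forwards the residue 2V - d*Vref, and the backend realigns the overlapping stage codes with a one-bit shift per stage before summing. The toy model below (ideal amplifiers, comparator thresholds deliberately skewed within the +/-Vref/4 tolerance) illustrates that correction principle; it is not the paper's circuit:

```python
# Behavioural sketch of the 1.5-bit/stage redundancy correction: each stage
# resolves d in {-1, 0, +1} (stage code c = d + 1) and forwards the residue
# 2*V - d*Vref; the backend aligns the overlapping codes with a 1-bit shift
# per stage and sums them. Thresholds are deliberately skewed by `offset`
# (within the +/- Vref/4 tolerance) to show the correction absorbing
# comparator error. Illustration only, not the paper's circuit.
def pipeline_adc(v, n_stages=13, vref=1.0, offset=0.1):
    codes = []
    for _ in range(n_stages):
        d = -1 if v < -vref / 4 + offset else (1 if v >= vref / 4 + offset else 0)
        codes.append(d + 1)
        v = 2 * v - d * vref               # ideal residue amplification
    return sum(c << (n_stages - 1 - i) for i, c in enumerate(codes))

# Despite the skewed comparators, the reconstructed code stays monotone:
samples = [pipeline_adc(x / 100) for x in range(-90, 91, 10)]
assert samples == sorted(samples)
print(samples[:5])
```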

  8. High performance 14-bit pipelined redundant signed digit ADC

    International Nuclear Information System (INIS)

    Narula, Swina; Pandey, Sujata

    2016-01-01

    A novel architecture of a pipelined redundant-signed-digit analog to digital converter (RSD-ADC) is presented featuring a high signal to noise ratio (SNR), spurious free dynamic range (SFDR) and signal to noise plus distortion (SNDR) with efficient background correction logic. The proposed ADC architecture shows high accuracy with a high speed circuit and efficient utilization of the hardware. This paper demonstrates the functionality of the digital correction logic of a 14-bit pipelined ADC with 1.5 bits per stage. This prototype of the ADC architecture accounts for capacitor mismatch, comparator offset and finite Op-Amp gain error in the MDAC (residue amplification circuit) stages. With the proposed architecture of the ADC, the SNDR obtained is 85.89 dB, the SNR is 85.9 dB and the SFDR obtained is 102.8 dB at a sample rate of 100 MHz. This novel architecture of digital correction logic is transparent to the overall system, which is demonstrated using the 14-bit pipelined ADC. After a latency of 14 clocks, the digital output is available at every clock pulse. To describe the circuit behavior of the ADC, VHDL and MATLAB programs are used. The proposed architecture is also capable of reducing the digital hardware, the silicon area and the complexity of the design. (paper)

  9. 32-Bit FASTBUS computer

    International Nuclear Information System (INIS)

    Blossom, J.M.; Hong, J.P.; Kellner, R.G.

    1985-01-01

    Los Alamos National Laboratory is building a 32-bit FASTBUS computer using the NATIONAL SEMICONDUCTOR 32032 central processing unit (CPU) and containing 16 million bytes of memory. The board can act both as a FASTBUS master and as a FASTBUS slave. It contains a custom direct memory access (DMA) channel which can perform 80 million bytes per second block transfers across the FASTBUS

  10. Error rates in bite mark analysis in an in vivo animal model.

    Science.gov (United States)

    Avon, S L; Victor, C; Mayhall, J T; Wood, R E

    2010-09-10

    Recent judicial decisions have specified that one foundation of reliability of comparative forensic disciplines is description of both scientific approach used and calculation of error rates in determining the reliability of an expert opinion. Thirty volunteers were recruited for the analysis of dermal bite marks made using a previously established in vivo porcine-skin model. Ten participants were recruited from three separate groups: dentists with no experience in forensics, dentists with an interest in forensic odontology, and board-certified diplomates of the American Board of Forensic Odontology (ABFO). Examiner demographics and measures of experience in bite mark analysis were collected for each volunteer. Each participant received 18 completely documented, simulated in vivo porcine bite mark cases and three paired sets of human dental models. The paired maxillary and mandibular models were identified as suspect A, suspect B, and suspect C. Examiners were tasked to determine, using an analytic method of their own choosing, whether each bite mark of the 18 bite mark cases provided was attributable to any of the suspect dentitions provided. Their findings were recorded on a standardized recording form. The results of the study demonstrated that the group of inexperienced examiners often performed as well as the board-certified group, and both inexperienced and board-certified groups performed better than those with an interest in forensic odontology that had not yet received board certification. Incorrect suspect attributions (possible false inculpation) were most common among this intermediate group. Error rates were calculated for each of the three observer groups for each of the three suspect dentitions. This study demonstrates that error rates can be calculated using an animal model for human dermal bite marks, and although clinical experience is useful, other factors may be responsible for accuracy in bite mark analysis. Further, this study demonstrates

  11. Managing the Number of Tag Bits Transmitted in a Bit-Tracking RFID Collision Resolution Protocol

    Directory of Open Access Journals (Sweden)

    Hugo Landaluce

    2014-01-01

    Full Text Available Radio Frequency Identification (RFID) technology faces the problem of message collisions. The coexistence of tags sharing the communication channel degrades bandwidth, and increases the number of bits transmitted. The window methodology, which controls the number of bits transmitted by the tags, is applied to the collision tree (CT) protocol to solve the tag collision problem. The combination of this methodology with the bit-tracking technology, used in CT, improves the performance of the window and produces a new protocol which decreases the number of bits transmitted. The aim of this paper is to show how the CT bit-tracking protocol is influenced by the proposed window, and how the performance of the novel protocol improves under different conditions of the scenario. Therefore, we have performed a fair comparison of the CT protocol, which uses bit-tracking to identify the first collided bit, and the new proposed protocol with the window methodology. Simulation results show that the proposed window positively decreases the total number of bits that are transmitted by the tags, and outperforms the CT protocol latency in slow tag data rate scenarios.
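
    The bit-tracking splitting logic underlying CT can be sketched compactly; the window methodology then simply caps how many of the remaining ID bits each reply may carry. The toy version below (equal-length distinct binary IDs, ideal channel) is an illustration of the protocol idea, not the authors' implementation:

```python
# Toy collision-tree (CT) identification with bit tracking: tags whose IDs
# match the queried prefix reply with their remaining bits; Manchester-style
# bit tracking tells the reader the first bit position where replies
# disagree, and the query splits there into '0' and '1' branches. The window
# methodology would additionally cap how many tail bits each reply may carry.
# Toy assumptions: equal-length, distinct binary IDs and an ideal channel.
def collision_tree(ids, prefix=""):
    replying = [t for t in ids if t.startswith(prefix)]
    if not replying:
        return []
    if len(replying) == 1:
        return replying                            # tag identified
    tails = [t[len(prefix):] for t in replying]
    k = next(i for i in range(len(tails[0])) if len({t[i] for t in tails}) > 1)
    stem = prefix + tails[0][:k]                   # bits shared up to the collision
    return collision_tree(ids, stem + "0") + collision_tree(ids, stem + "1")

print(collision_tree(["0110", "0111", "1010", "1100"]))  # all four identified
```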

  12. Managing the number of tag bits transmitted in a bit-tracking RFID collision resolution protocol.

    Science.gov (United States)

    Landaluce, Hugo; Perallos, Asier; Angulo, Ignacio

    2014-01-08

    Radio Frequency Identification (RFID) technology faces the problem of message collisions. The coexistence of tags sharing the communication channel degrades bandwidth, and increases the number of bits transmitted. The window methodology, which controls the number of bits transmitted by the tags, is applied to the collision tree (CT) protocol to solve the tag collision problem. The combination of this methodology with the bit-tracking technology, used in CT, improves the performance of the window and produces a new protocol which decreases the number of bits transmitted. The aim of this paper is to show how the CT bit-tracking protocol is influenced by the proposed window, and how the performance of the novel protocol improves under different conditions of the scenario. Therefore, we have performed a fair comparison of the CT protocol, which uses bit-tracking to identify the first collided bit, and the new proposed protocol with the window methodology. Simulation results show that the proposed window positively decreases the total number of bits that are transmitted by the tags, and outperforms the CT protocol latency in slow tag data rate scenarios.

  13. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods. PMID:28182717

  14. Influenza infection rates, measurement errors and the interpretation of paired serology.

    Science.gov (United States)

    Cauchemez, Simon; Horby, Peter; Fox, Annette; Mai, Le Quynh; Thanh, Le Thi; Thai, Pham Quang; Hoa, Le Nguyen Minh; Hien, Nguyen Tran; Ferguson, Neil M

    2012-01-01

    Serological studies are the gold standard method to estimate influenza infection attack rates (ARs) in human populations. In a common protocol, blood samples are collected before and after the epidemic in a cohort of individuals; and a rise in haemagglutination-inhibition (HI) antibody titers during the epidemic is considered as a marker of infection. Because of inherent measurement errors, a 2-fold rise is usually considered as insufficient evidence for infection and seroconversion is therefore typically defined as a 4-fold rise or more. Here, we revisit this widely accepted 70-year old criterion. We develop a Markov chain Monte Carlo data augmentation model to quantify measurement errors and reconstruct the distribution of latent true serological status in a Vietnamese 3-year serological cohort, in which replicate measurements were available. We estimate that the 1-sided probability of a 2-fold error is 9.3% (95% Credible Interval, CI: 3.3%, 17.6%) when antibody titer is below 10 but is 20.2% (95% CI: 15.9%, 24.0%) otherwise. After correction for measurement errors, we find that the proportion of individuals with 2-fold rises in antibody titers was too large to be explained by measurement errors alone. Estimates of ARs vary greatly depending on whether those individuals are included in the definition of the infected population. A simulation study shows that our method is unbiased. The 4-fold rise case definition is relevant when aiming at a specific diagnostic for individual cases, but the justification is less obvious when the objective is to estimate ARs. In particular, it may lead to large underestimates of ARs. Determining which biological phenomenon contributes most to 2-fold rises in antibody titers is essential to assess bias with the traditional case definition and offer improved estimates of influenza ARs.
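
    The measurement-error argument is easy to reproduce numerically: if each titre reading can independently be off by one two-fold dilution with some probability, paired readings from an uninfected individual will show a spurious 2-fold "rise" fairly often, while a spurious 4-fold rise is an order of magnitude rarer. A minimal Monte Carlo sketch with an assumed error probability (not the paper's fitted values):

```python
import numpy as np

# Monte Carlo sketch of the measurement-error argument: paired HI titres of
# uninfected individuals on a log2 dilution scale, each reading independently
# off by one dilution with probability p_err (an assumed value, not the
# paper's estimate). Spurious 2-fold "rises" are common; 4-fold rises rare.
rng = np.random.default_rng(0)
p_err, n = 0.2, 100_000
true_titre = np.full(n, 5)          # log2 units; constant because no infection

def measure(t):
    shift = rng.choice([-1, 0, 1], size=t.size, p=[p_err / 2, 1 - p_err, p_err / 2])
    return t + shift

rise = measure(true_titre) - measure(true_titre)       # log2 fold-rise
print("spurious >=2-fold rises:", np.mean(rise >= 1))  # ~0.17
print("spurious >=4-fold rises:", np.mean(rise >= 2))  # ~0.01
```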

  15. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

    The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single input single output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for generalized fading distributions, such as the extended generalized-k distribution. Later, simpler expressions of these error rates are deduced for some selected special cases and compact approximations are derived using asymptotic expansions.
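
    For reference, the starting point of such a derivation, the symmetric α-stable characteristic function and the Fourier-inversion integral giving the PDF, can be written as follows (σ is the scale parameter):

```latex
% Symmetric alpha-stable law: characteristic function and PDF by Fourier inversion
\varphi(t) = \mathbb{E}\!\left[ e^{\,\mathrm{i} t X} \right]
           = e^{-\sigma^{\alpha} \lvert t \rvert^{\alpha}}, \qquad 0 < \alpha \le 2,
\qquad
f_X(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty}
           e^{-\sigma^{\alpha} \lvert t \rvert^{\alpha}} e^{-\mathrm{i} t x} \,\mathrm{d}t
       = \frac{1}{\pi} \int_{0}^{\infty}
           e^{-\sigma^{\alpha} t^{\alpha}} \cos(t x) \,\mathrm{d}t .
```

    For α = 2 this reduces to the Gaussian case and for α = 1 to the Cauchy case; for other values of α the integral has no elementary closed form, which is why a Fox H representation is useful.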

  16. Numerical optimization of writer geometries for bit patterned magnetic recording

    Science.gov (United States)

    Kovacs, A.; Oezelt, H.; Bance, S.; Fischbacher, J.; Gusenbauer, M.; Reichel, F.; Exl, L.; Schrefl, T.; Schabes, M. E.

    2014-05-01

    A fully-automated pole-tip shape optimization tool, involving write head geometry construction, meshing, micromagnetic simulation, and evaluation, is presented. Optimizations have been performed for three different writing schemes (centered, staggered, and shingled) for an underlying bit patterned medium with an areal density of 2.12 Tdots/in^2. Optimizations were performed for a single-phase medium with 10 nm thickness and a magnetic spacing of 8 nm. From the computed write field and its gradient and the minimum energy barrier during writing for islands on the adjacent track, the overall write error rate is computed. The overall write errors are 0.7, 0.08, and 2.8×10^-5 for centered writing, staggered writing, and shingled writing.

  17. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially

  18. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSN) explores energy efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732

  19. Corrected multiple upsets and bit reversals for improved 1-s resolution measurements

    International Nuclear Information System (INIS)

    Brucker, G.J.; Stassinopoulos, E.G.; Stauffer, C.A.

    1994-01-01

    Previous work has studied the generation of single and multiple errors in control and irradiated static RAM samples (Harris 6504RH) which were exposed to heavy ions for relatively long intervals of time (minutes), and read out only after the beam was shut off. The present investigation involved storing 4k x 1 bit maps every second during 1 min ion exposures at low flux rates of 10^3 ions/cm^2-s in order to reduce the chance of two sequential ions upsetting adjacent bits. The data were analyzed for the presence of adjacent upset bit locations in the physical memory plane, which were previously defined to constitute multiple upsets. Improvement in the time resolution of these measurements has provided more accurate estimates of multiple upsets. The results indicate that the percentage of multiples decreased from a high of 17% in the previous experiment to less than 1% for this new experimental technique. Consecutive double and triple upsets (reversals of bits) were detected. These were caused by sequential ions hitting the same bit, with one or two reversals of state occurring in a 1-min run. In addition to these results, a status review for these same parts covering 3.5 years of imprint damage recovery is also presented

  20. Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza

    2015-06-01

    The Laplacian noise has received much attention in recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed form expressions of the conditional and the average probability of error are obtained in terms of the Fox H function. Simplifications for some special cases of fading are presented and the resulting formulas often end up being expressed in terms of well known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.
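
    The Laplacian assumption changes the detector geometry: for i.i.d. zero-mean Laplacian noise components, maximizing the likelihood is equivalent to minimizing the L1 (absolute) distance to the candidate symbol rather than the Euclidean distance, since

```latex
% Zero-mean Laplacian noise PDF (scale b) and the resulting ML decision rule
f_N(n) = \frac{1}{2b} \, e^{-\lvert n \rvert / b}, \qquad
\hat{s} = \arg\max_{s_m} \prod_i f_N(r_i - s_{m,i})
        = \arg\min_{s_m} \sum_i \lvert r_i - s_{m,i} \rvert
        = \arg\min_{s_m} \lVert r - s_m \rVert_1 .
```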

  1. Creation and implementation of department-wide structured reports: an analysis of the impact on error rate in radiology reports.

    Science.gov (United States)

    Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J

    2014-10-01

    The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed before the implementation of structured reports. Calculated error rates included the average number of errors per report, the average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33 to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5 to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9 to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.

  2. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate.

    Science.gov (United States)

    Mogull, Scott A

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).

  3. Investigation of PDC bit failure based on stick-slip vibration analysis of drilling string system plus drill bit

    Science.gov (United States)

    Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei

    2018-03-01

    The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. The study of PDC bit failure based on stick-slip vibration analysis is therefore crucial to prolonging the service life of the PDC bit and improving the ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drilling string system plus the PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and cutting behaviors of the PDC bit are innovatively introduced. The results reveal that the PDC bit is more prone to failure than other drilling tools due to the more severe stick-slip vibration. Moreover, reducing the WOB (weight on bit) and increasing the driving torque can effectively mitigate the stick-slip vibration of the PDC bit. Therefore, PDC bit failure can be alleviated by optimizing the drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed using an engineering example. It can be concluded that torsional impact can mitigate stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.
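
    A much-reduced caricature of such a torsional model already reproduces the stick-slip phenomenon: a single rotational degree of freedom (the bit) driven through a torsional spring (the drill string) against a velocity-weakening friction/cutting torque. The sketch below is an illustration with assumed parameters, not the paper's 4-DOF piecewise-smooth model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-DOF caricature of drill-string/bit stick-slip: bit inertia J is driven
# through a torsional spring k by a top drive at constant speed omega_d, and
# resisted by a velocity-weakening (Stribeck-like) friction/cutting torque,
# smoothed with tanh to keep the ODE well-posed. Parameters are assumed.
J, k, omega_d = 50.0, 500.0, 5.0        # inertia, spring stiffness, drive speed
T_s, T_c, v_c = 4000.0, 2000.0, 1.0     # static/Coulomb torque levels, decay speed

def friction(omega):
    mag = T_c + (T_s - T_c) * np.exp(-abs(omega) / v_c)
    return mag * np.tanh(omega / 1e-3)  # smooth approximation of sign(omega)

def rhs(t, y):
    theta, omega = y                    # bit angle and bit speed
    windup = omega_d * t - theta        # drill-string twist
    return [omega, (k * windup - friction(omega)) / J]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], max_step=1e-3)
late = sol.t > 15.0
print("bit speed min/max:", sol.y[1][late].min(), sol.y[1][late].max())
# Velocity weakening destabilizes steady rotation: the bit alternates between
# near-standstill (stick) and overshooting the drive speed (slip).
```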

  4. Demonstration of burst mode bit discrimination circuit for 1.25 Gb/s and 10.3 Gb/s dual-rate reach extender of WDM-TDM-hybrid-PON systems based on 10G-EPON.

    Science.gov (United States)

    Cho, Seung-Hyun; Lee, Han Hyub; Kim, Kwang Ok; Lee, Jie Hyun; Myong, Seung Il; Lee, Jong Hyun; Lee, Sang Soo

    2011-12-12

    We proposed a simple and cost-effective burst mode bit discrimination circuit for a dual-rate reach extender based on 10 gigabit Ethernet passive optical networks. To distinguish the dual-rate burst mode packets, periodic idle patterns, which have specific frequency components in the frequency domain, and a radio frequency power detection technique were used. Burst mode dual-rate upstream transmission was demonstrated to confirm the feasibility of our suggested method in a coexisting gigabit Ethernet passive optical network and 10 gigabit Ethernet passive optical network. We achieved dual-rate burst mode receiver sensitivities of -32 dBm for the 1.25 Gbit/s signal and -27 dBm for the 10.3 Gbit/s signal, respectively. © 2011 Optical Society of America
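
    The underlying detection idea, classifying an incoming burst by the spectral line of its known periodic idle pattern, can be sketched with a simple FFT band-power test. The sample rate, pattern frequencies and bandwidth below are illustrative assumptions, not the implemented analog circuit:

```python
import numpy as np

# Sketch of the discrimination idea: each line rate opens its burst with a
# periodic idle pattern, so the received spectrum carries a line at a known
# frequency; comparing band power at the two candidate frequencies labels
# the burst. Sample rate, frequencies and bandwidth are assumptions.
def classify_burst(samples, fs, f_low, f_high):
    spec = np.abs(np.fft.rfft(samples * np.hanning(samples.size))) ** 2
    freqs = np.fft.rfftfreq(samples.size, d=1 / fs)
    def band_power(f0, bw=fs / 200):
        return spec[(freqs > f0 - bw) & (freqs < f0 + bw)].sum()
    return "1.25G" if band_power(f_low) > band_power(f_high) else "10.3G"

fs = 40e9
t = np.arange(4096) / fs
idle = np.sign(np.sin(2 * np.pi * 625e6 * t))  # "1010..." idle of a 1.25 Gb/s burst
print(classify_burst(idle, fs, f_low=625e6, f_high=5.15e9))   # -> 1.25G
```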

  5. The prevalence rates of refractive errors among children, adolescents, and adults in Germany

    Directory of Open Access Journals (Sweden)

    Sandra Jobke

    2008-10-01

    Full Text Available Sandra Jobke,1 Erich Kasten,2 Christian Vorwerk3 (1Institute of Medical Psychology and 3Department of Ophthalmology, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany; 2Institute of Medical Psychology, University Hospital Schleswig-Holstein, Luebeck, Germany). Purpose: The prevalence rates of myopia vary from 5% in Australian Aborigines to 84% in Hong Kong and Taiwan, 30% in Norwegian adults, and 49.5% in Swedish schoolchildren. The aim of this study was to determine the prevalence of refractive errors in German children, adolescents, and adults. Methods: The parents (aged 24–65 years) and their children (516 subjects aged 2–35 years) were asked to fill out a questionnaire about their refractive error and spectacle use. Emmetropia was defined as a refractive status between +0.25D and –0.25D. Myopia was characterized as ≤−0.5D and hyperopia as ≥+0.5D. All information concerning refractive error was verified by asking their opticians. Results: The prevalence rates of myopia differed significantly between all investigated age groups: it was 0% in children aged 2–6 years, 5.5% in children aged 7–11 years, 21.0% in adolescents (aged 12–17 years) and 41.3% in adults aged 18–35 years (Pearson’s chi-square, p = 0.000). Furthermore, 9.8% of children aged 2–6 years were hyperopic, as were 6.4% of children aged 7–11 years, 3.7% of adolescents, and 2.9% of adults (p = 0.380). The prevalence of myopia in females (23.6%) was significantly higher than in males (14.6%; p = 0.018). The difference between the self-reported refractive error and the refractive error reported by the opticians was very small and not significant (p = 0.850). Conclusion: In Germany, the prevalence of myopia seems to be somewhat lower than in Asia and Europe. There are few comparable studies concerning the prevalence rates of hyperopia. Keywords: Germany, hyperopia, incidence, myopia, prevalence

  6. Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits

    Science.gov (United States)

    Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu

    2017-03-01

    Calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution (QKD). An effective polarization-basis tracking scheme will decrease the quantum bit error rate (QBER) and improve the efficiency of a polarization-encoding QKD system. In this paper, we proposed a polarization-basis tracking scheme using only the sifted key bits revealed while legitimate users perform error correction, rather than introducing additional reference light or interrupting the transmission of quantum signals. A polarization-encoding fiber BB84 QKD prototype was developed to examine the validity of this scheme. An average QBER of 2.32% and a standard deviation of 0.87% were obtained during 24 hours of continuous operation.
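
    The feedback loop this scheme implies can be sketched abstractly: estimate the QBER from the bits that error correction reveals anyway, then nudge the receiver's basis rotation to keep that estimate at its minimum. The sketch below is an illustration of the idea only; `misalignment` is a hypothetical stand-in for the physical drift, and no real controller interface is implied:

```python
import math, random

# Abstract sketch of QBER-driven basis tracking: the QBER estimated from the
# sifted-key bits revealed during error correction is the feedback signal,
# and a perturb-and-observe step keeps the receiver's basis rotation at the
# QBER minimum. `misalignment` is a hypothetical stand-in for the physical
# polarization drift; no real controller API is implied.
def observed_qber(theta, misalignment, n=2000, floor=0.01):
    p = floor + math.sin(theta - misalignment) ** 2   # misalignment raises QBER
    return sum(random.random() < p for _ in range(n)) / n

def track(theta=0.0, misalignment=0.3, step=0.02, rounds=200):
    for _ in range(rounds):
        if observed_qber(theta + step, misalignment) < observed_qber(theta - step, misalignment):
            theta += step
        else:
            theta -= step
    return theta

random.seed(7)
print(track())   # settles within a step or two of the 0.3 rad misalignment
```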

  7. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    Science.gov (United States)

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation.

  8. Soft error rate simulation and initial design considerations of neutron intercepting silicon chip (NISC)

    Science.gov (United States)

    Celik, Cihangir

    Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space will double every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or soft error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano

  9. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    the medication administration error rate in comparison with current practice. Material and methods This was a controlled before and after study with follow-up after 7 and 14 months. The study was conducted in two acute medical hospital wards. Two automated medication systems were tested: (1) automated dispensing...... cabinet, automated dispensing and barcode medication administration; (2) non-patient specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3 week periods. The error rates were calculated by dividing the number of doses with one....... The complex automated medication system effectively reduced the overall risk of administration errors in the intervention ward (OR 0.53, 95% CI 0.27–0.90), and the procedural error rate was also significantly reduced (OR 0.44, 95% CI 0.126–0.94). The non-patient specific automated medication system...

  10. Triple-Error-Correcting Codec ASIC

    Science.gov (United States)

    Jones, Robert E.; Segallis, Greg P.; Boyd, Robert

    1994-01-01

    Coder/decoder constructed on single integrated-circuit chip. Handles data in variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.

  11. Minimizing the symbol-error-rate for amplify-and-forward relaying systems using evolutionary algorithms

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-02-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum, so evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with that of conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is less than 2 dB away from that of the ML detector. Significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much less than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with respect to the number of relays.
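
    To illustrate why evolutionary search suits a multimodal SER surface, here is a minimal particle swarm optimization loop minimizing a toy non-convex surface (a Rastrigin-style stand-in; the objective, dimensions and PSO constants are invented for illustration and are not the detector design from the paper):

```python
import numpy as np

def toy_ser_surface(w):
    # Stand-in for a non-convex SER surface with many local minima.
    return np.sum(w**2 - 10 * np.cos(2 * np.pi * w) + 10, axis=-1)

rng = np.random.default_rng(0)
n_particles, dim, iters = 30, 2, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), toy_ser_surface(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos += vel
    val = toy_ser_surface(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("minimum found near:", gbest)  # global minimum of the toy surface is at the origin
```

    The same loop structure applies when the objective is a simulated or analytic SER evaluated at candidate detector weights; only the fitness function changes.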

  12. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping-using random field theory-reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions-and random field theory-in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  13. Calculation of the soft error rate of submicron CMOS logic circuits

    International Nuclear Information System (INIS)

    Juhnke, T.; Klar, H.

    1995-01-01

    A method to calculate the soft error rate (SER) of CMOS logic circuits with dynamic pipeline registers is described. This method takes into account charge collection by drift and diffusion. The method is verified by comparing calculated SERs to measurement results. Using this method, the SER of a highly pipelined multiplier is calculated as a function of supply voltage for a 0.6 μm, 0.3 μm, and 0.12 μm technology, respectively. It has been found that the SER of such highly pipelined submicron CMOS circuits may become too high, so that countermeasures have to be taken. Since the SER greatly increases with decreasing supply voltage, low-power/low-voltage circuits may show more than eight times the SER at half the normal supply voltage as compared to conventional designs

  14. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-10-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.
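
    For orientation, the per-hop building block behind such BPSK analyses over AWGN is the textbook relation P_b = Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR)); a quick numeric check (this is the standard formula, not the paper's end-to-end closed form):

```python
from math import erfc, sqrt

def bpsk_ber(snr_db):
    """BPSK bit error probability over AWGN: Q(sqrt(2*snr)) = 0.5*erfc(sqrt(snr))."""
    snr_linear = 10 ** (snr_db / 10)
    return 0.5 * erfc(sqrt(snr_linear))

for snr_db in (0, 5, 10):
    print(f"{snr_db:2d} dB -> BER = {bpsk_ber(snr_db):.3e}")
```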

  15. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-06-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.
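
    A small helper makes the diversity-order expression concrete; the channel-memory values below are invented for illustration:

```python
def diversity_order(L_sd, L_sr, L_rd):
    """(L_sd + 1) + sum over relays m of min(L_sr[m] + 1, L_rd[m] + 1)."""
    return (L_sd + 1) + sum(min(sr + 1, rd + 1) for sr, rd in zip(L_sr, L_rd))

# Two relays: source-relay memory lengths [1, 3], relay-destination [2, 2].
print(diversity_order(L_sd=2, L_sr=[1, 3], L_rd=[2, 2]))  # (2+1) + min(2,3) + min(4,3) = 8
```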

  16. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.

  18. Period prevalence and reporting rate of medication errors among nurses in Iran: A systematic review and meta-analysis.

    Science.gov (United States)

    Matin, Behzad Karami; Hajizadeh, Mohammad; Nouri, Bijan; Rezaeian, Shahab; Mohammadi, Masoud; Rezaei, Satar

    2018-01-22

    To estimate the 1-year period prevalence of medication errors and the rate of reporting to nurse managers among nurses working in hospitals in Iran. Medication errors are one of the main factors affecting the quality of hospital services and reducing patient safety in health care systems. A literature search of Iranian and international scientific databases was developed to find relevant studies. Meta-regression was used to identify which characteristics may have a confounding effect on the pooled prevalence estimates. Based on the final 22 studies with 3556 samples, the overall estimated 1-year period prevalence of medication errors and the rate of reporting to nurse managers among nurses were 53% (95% confidence interval, 41%-60%) and 36% (95% confidence interval, 23%-50%), respectively. The meta-regression analyses indicated that the sex (female/male) ratio was a statistically significant predictor of the prevalence of medication errors (p < 0.05) and of the rate of reporting medication errors to nurse managers. The period prevalence of medication errors among nurses working in hospitals was high in Iran, whereas the rate of reporting to nurse managers was low. Continuous training programmes are required to reduce and prevent medication errors among nursing staff and to improve the reporting rate to nurse managers in Iran. © 2018 John Wiley & Sons Ltd.

  19. Quantum dynamics of quantum bits

    International Nuclear Information System (INIS)

    Nguyen, Bich Ha

    2011-01-01

    The theory of coherent oscillations of the matrix elements of the density matrix of the two-state system as a quantum bit is presented. Different calculation methods are elaborated in the case of a free quantum bit. Then the most appropriate methods are applied to the study of the density matrices of the quantum bits interacting with a classical pumping radiation field as well as with the quantum electromagnetic field in a single-mode microcavity. The theory of decoherence of a quantum bit in Markovian approximation is presented. The decoherence of a quantum bit interacting with monoenergetic photons in a microcavity is also discussed. The content of the present work can be considered as an introduction to the study of the quantum dynamics of quantum bits. (review)

  20. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ

  1. Bit-Wise Arithmetic Coding For Compression Of Data

    Science.gov (United States)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.

  2. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to help remove the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically, based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All the simulation results coincide with the theoretical analysis.

  3. Measuring error rates in genomic perturbation screens: gold standards for human functional genomics.

    Science.gov (United States)

    Hart, Traver; Brown, Kevin R; Sircoulomb, Fabrice; Rottapel, Robert; Moffat, Jason

    2014-07-01

    Technological advancement has opened the door to systematic genetics in mammalian cells. Genome-scale loss-of-function screens can assay fitness defects induced by partial gene knockdown, using RNA interference, or complete gene knockout, using new CRISPR techniques. These screens can reveal the basic blueprint required for cellular proliferation. Moreover, comparing healthy to cancerous tissue can uncover genes that are essential only in the tumor; these genes are targets for the development of specific anticancer therapies. Unfortunately, progress in this field has been hampered by off-target effects of perturbation reagents and poorly quantified error rates in large-scale screens. To improve the quality of information derived from these screens, and to provide a framework for understanding the capabilities and limitations of CRISPR technology, we derive gold-standard reference sets of essential and nonessential genes, and provide a Bayesian classifier of gene essentiality that outperforms current methods on both RNAi and CRISPR screens. Our results indicate that CRISPR technology is more sensitive than RNAi and that both techniques have nontrivial false discovery rates that can be mitigated by rigorous analytical methods. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.

  4. Assessment of the rate and etiology of pharmacological errors by nurses of two major teaching hospitals in Shiraz

    Directory of Open Access Journals (Sweden)

    Fatemeh Vizeshfar

    2015-06-01

    Full Text Available Medication errors have serious consequences for patients, their families and caregivers. Reduction of these faults by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and medical wards. This cross-sectional analytic study was done on 101 registered nurses who had the duty of drug administration in medical pediatric and adult wards. Data were collected by a questionnaire including demographic information, self-reported faults, etiology of medication errors, and researcher observations. The results showed that nurses' fault rates in pediatric wards were 51.6% and in adult wards were 47.4%. The most common fault in adult wards was late or early drug administration (48.6%), while administration of drugs without prescription and administration of wrong drugs were the most common medication errors in pediatric wards (each one 49.2%). According to the researchers' observations, the medication error rate of 57.9% was rated low in adult wards and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses didn't explain the reason for and type of drug they were going to administer to patients. The independent t-test showed a significant change in fault observations in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have shown medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.

  5. Giga-bit optical data transmission module for Beam Instrumentation

    CERN Document Server

    Roedne, L T; Cenkeramaddi, L R; Jiao, L

    Particle accelerators require electronic instrumentation for diagnostics, assessment and monitoring during operation of the transferring and circulating beams. A sensor located near the beam provides an electrical signal related to the observable quantity of interest. The front-end electronics provides analog-to-digital conversion of the quantity being observed, and the generated data are to be transferred to the external digital back-end for data processing, display to the operators, and logging. This research project investigates the feasibility of radiation-tolerant giga-bit data transmission over optic fibre for beam instrumentation applications, starting from an assessment of the state-of-the-art technology, identification of challenges and proposal of a system-level solution, which should be validated with a PCB design in an experimental setup. The targets are a radiation tolerance of 10 kGy (Si) Total Ionizing Dose (TID) over 10 years of operation and a Bit Error Rate (BER) of 10^-6 or better. The findings and results of th...

  6. Impact of catheter reconstruction error on dose distribution in high dose rate intracavitary brachytherapy and evaluation of OAR doses

    International Nuclear Information System (INIS)

    Thaper, Deepak; Shukla, Arvind; Rathore, Narendra; Oinam, Arun S.

    2016-01-01

    In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error-prone. The purpose of this study is to evaluate the impact of catheter reconstruction error on dose distribution in CT-based intracavitary brachytherapy planning, and to evaluate its effect on organs at risk (OARs) such as the bladder, rectum and sigmoid, and on the target volume, the high-risk clinical target volume (HR-CTV)

  8. Analysis of bit-rock interaction during stick-slip vibrations using PDC cutting force model

    Energy Technology Data Exchange (ETDEWEB)

    Patil, P.A.; Teodoriu, C. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE

    2013-08-01

    Drillstring vibration is one of the factors limiting drilling performance and also causes premature failure of drillstring components. The polycrystalline diamond compact (PDC) bit enhances overall drilling performance, giving the best rates of penetration with less cost per foot, but PDC bits are more susceptible to the stick-slip phenomenon, which results in large fluctuations of bit rotational speed. Based on the torsional drillstring model developed using Matlab/Simulink for analyzing the parametric influence on stick-slip vibrations due to drilling parameters and drillstring properties, the relations between weight on bit, torque on bit, bit speed, rate of penetration and friction coefficient have been analyzed. While drilling with PDC bits, the bit-rock interaction has been characterized by cutting forces and frictional forces. The torque on bit and the weight on bit both have a cutting component and a frictional component when resolved in the horizontal and vertical directions. The paper considers a bit undergoing stick-slip vibrations while analyzing the bit-rock interaction of the PDC bit. A Matlab/Simulink bit-rock interaction model has been developed which gives the average cutting torque, T_c, and friction torque, T_f, on the cutters, as well as the corresponding average weight transferred by the cutting face, W_c, and the wear-flat face, W_f, of the cutters due to friction.

  9. Test results judgment method based on BIT faults

    Directory of Open Access Journals (Sweden)

    Wang Gang

    2015-12-01

    Full Text Available Built-in test (BIT) is responsible for equipment fault detection, so the correctness of test data directly influences diagnosis results. Equipment suffers from all kinds of environmental stresses, such as temperature, vibration, and electromagnetic stress. As an embedded testing facility, BIT also suffers from these stresses, and interferences/faults are caused, so that the test process is disturbed and the results become unreliable. It is therefore necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would improve BIT reliability, but existing anti-jamming research focuses mainly on safeguard design and signal processing. This paper focuses on monitoring test results and judging BIT equipment (BITE) failures, and a series of improved approaches is proposed. Firstly, the stress influences on components are illustrated and their effects on diagnosis results are summarized. Secondly, a composite BIT program with information integration is proposed, and a stress-monitoring program is given. Thirdly, based on a detailed analysis of system faults and the forms of BIT results, a test-sequence control method is proposed. It assists BITE failure judgment and reduces error probability. Finally, validation cases prove that these approaches enhance credibility.

  10. Do remote community telepharmacies have higher medication error rates than traditional community pharmacies? Evidence from the North Dakota Telepharmacy Project.

    Science.gov (United States)

    Friesner, Daniel L; Scott, David M; Rathke, Ann M; Peterson, Charles D; Anderson, Howard C

    2011-01-01

    To evaluate the differences in medication dispensing errors between remote telepharmacy sites (pharmacist not physically present) and standard community pharmacy sites (pharmacist physically present and no telepharmacy technology; comparison group). Pilot, cross-sectional, comparison study. North Dakota from January 2005 to September 2008. Pharmacy staff at 14 remote telepharmacy sites and 8 comparison community pharmacies. The Pharmacy Quality Commitment (PQC) reporting system was incorporated into the North Dakota Telepharmacy Project. A session was conducted to train pharmacists and technicians on use of the PQC system. A quality-related event (QRE) was defined as either a near miss (i.e., a mistake caught before reaching the patient; pharmacy discovery) or an error (i.e., a mistake discovered after the patient received the medication; patient discovery). QREs for prescriptions. During a 45-month period, the remote telepharmacy group reported 47,078 prescriptions and 631 QREs, compared with 123,346 prescriptions and 1,002 QREs in the standard pharmacy group. The counts of near misses (pharmacy discovery) and errors (patient discovery) were 553 and 78, respectively, at the remote sites and 887 and 125 at the comparison sites. The percentage of mistakes caught at the pharmacist check was 58% for the remote sites and 69% for the comparison sites. This study found a low overall error rate (1.0%) and a slight difference in medication dispensing error rates between remote telepharmacy sites (1.3%) and comparison sites (0.8%). Both rates are comparable with nationally reported levels (a 1.7% error rate for 50 pharmacies).

  11. Impact of Model Error on the Measurement of Flow Properties Needed to Describe Flow Through Porous Media

    Directory of Open Access Journals (Sweden)

    Bentsen R. G.

    2006-12-01

    Full Text Available Indirect methods are commonly employed to determine the fundamental flow properties needed to describe flow through porous media. Consequently, if one or more of the postulates underlying the mathematical description of such indirect methods is invalid, significant model error can be introduced into the measured value of the flow property. In particular, this study shows that effective mobility curves that include the effect of viscous coupling between fluid phases differ significantly from those that exclude such coupling. Moreover, it is shown that the conventional effective mobilities that pertain to steady-state, cocurrent flow, steady-state, countercurrent flow and pure countercurrent imbibition differ significantly. Thus, it appears that traditional effective mobilities are not true parameters; rather, they are infinitely nonunique. In addition, it is shown that, while neglect of hydrodynamic forces introduces a small amount of model error into the pressure difference curve for cocurrent flow in unconsolidated porous media, such neglect introduces a large amount of model error into the pressure difference curve for countercurrent flow in such porous media. Moreover, such neglect makes it difficult to explain why the pressure gradients that pertain to steady-state, countercurrent flow are opposite in sign. It is shown also that improper handling of the inlet boundary condition can introduce significant model error into the analysis. This is because, if a short core is used with one of the unsteady-state methods for determining effective mobility, it may take many pore volumes of injection before the inlet saturation rises to its maximal value, which is in contradiction with the usual assumption that the inlet saturation rises immediately to its maximal value. Finally, it is pointed out that, because of differences in flow regime and scale, the effective mobilities measured in the laboratory may not be appropriate for inclusion in the data

  12. Fast physical random bit generation with chaotic semiconductor lasers

    Science.gov (United States)

    Uchida, Atsushi; Amano, Kazuya; Inoue, Masaki; Hirano, Kunihito; Naito, Sunao; Someya, Hiroyuki; Oowada, Isao; Kurashige, Takayuki; Shiki, Masaru; Yoshimori, Shigeru; Yoshimura, Kazuyuki; Davis, Peter

    2008-12-01

    Random number generators in digital information systems make use of physical entropy sources such as electronic and photonic noise to add unpredictability to deterministically generated pseudo-random sequences. However, there is a large gap between the generation rates achieved with existing physical sources and the high data rates of many computation and communication systems; this is a fundamental weakness of these systems. Here we show that good quality random bit sequences can be generated at very fast bit rates using physical chaos in semiconductor lasers. Streams of bits that pass standard statistical tests for randomness have been generated at rates of up to 1.7 Gbps by sampling the fluctuating optical output of two chaotic lasers. This rate is an order of magnitude faster than that of previously reported devices for physical random bit generators with verified randomness. This means that the performance of random number generators can be greatly improved by using chaotic laser devices as physical entropy sources.
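
    The sampling-and-thresholding idea can be illustrated with ordinary correlated noise standing in for a chaotic laser intensity; XOR-ing two independent streams then suppresses residual bias, much as the paper combines the outputs of two lasers. Everything below is a toy stand-in, not a model of the laser dynamics:

```python
import numpy as np

rng = np.random.default_rng(42)

def bits_from_waveform(n):
    # Stand-in for a sampled chaotic intensity: correlated noise via a leaky integrator.
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = 0.7 * x[i - 1] + rng.normal()
    return (x > np.median(x)).astype(np.uint8)

a, b = bits_from_waveform(100_000), bits_from_waveform(100_000)
bits = a ^ b  # XOR of two independent streams reduces residual bias/correlation
print("bias:", bits.mean())                              # should be close to 0.5
print("lag-1 corr:", np.corrcoef(bits[:-1], bits[1:])[0, 1])
```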

  13. Effects of error feedback on a nonlinear bistable system with stochastic resonance

    International Nuclear Information System (INIS)

    Li Jian-Long; Zhou Hui

    2012-01-01

    In this paper, we discuss the effects of error feedback on the output of a nonlinear bistable system with stochastic resonance. The bit error rate is employed to quantify the performance of the system. The theoretical analysis and the numerical simulation are presented. By investigating the performances of the nonlinear systems with different strengths of error feedback, we argue that the presented system may provide guidance for practical nonlinear signal processing
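
    The flavor of such BER-versus-noise studies can be reproduced with a toy overdamped bistable system driven by subthreshold binary pulses: too little noise and the state cannot hop to follow the bits, moderate noise helps, and too much randomizes the output. This Euler-Maruyama sketch uses invented parameters and no error feedback, so it only illustrates the underlying stochastic-resonance effect, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(11)

def bistable_ber(D, n_bits=100, steps_per_bit=2000, dt=0.01, A=0.3):
    """BER of subthreshold antipodal bits b = +/-1 sent through
    dx = (x - x^3 + A*b) dt + sqrt(2*D) dW, decoding by sign(x) at bit end."""
    bits = rng.integers(0, 2, n_bits) * 2 - 1
    x, errors = 0.0, 0
    for b in bits:
        for _ in range(steps_per_bit):
            x += (x - x**3 + A * b) * dt + np.sqrt(2 * D * dt) * rng.normal()
        errors += int(np.sign(x) != b)
    return errors / n_bits

for D in (0.005, 0.15, 1.0):   # weak, moderate, strong noise
    print(f"noise intensity D={D}: BER ~ {bistable_ber(D):.2f}")
```

    With A = 0.3 the forcing is below the deterministic switching threshold (about 0.385 for this potential), so the non-monotonic BER across the three noise levels is the stochastic-resonance signature.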

  14. Ultrasound and Dual-Energy X-Ray Absorptiometry Report Transcription Error Rates and Strategies for Reduction.

    Science.gov (United States)

    Bauer, Arielle; Lind, Kimberly; Van Noort, Hilary; Myers, Mallory; Borgstede, James

    2018-03-20

    Radiologists play an essential role in patient care by providing accurate and timely results. An error-free radiology report is an expectation of both patients and referring physicians. Software is currently available that can eliminate measurement errors and side (laterality) errors while saving radiologists and sonographers time. The objectives of this study were to evaluate the potential reduction in report errors, estimate the potential time savings associated with implementation, and conduct a cost-benefit analysis of implementing two software programs. Data on the number of measurement errors and side errors in ultrasound and dual-energy x-ray absorptiometry reports were collected, and the time required for data entry that the software would reduce was measured by report type. Generalized estimating equations regression was used to estimate error rates and data entry times and corresponding 95% confidence intervals by report type for radiologists and sonographers. Current wages and report volumes were then applied to the time savings to estimate the annual wage savings. Projected volume increases were applied to the annual estimates to generate a 5-year savings estimate. Overall, measurement errors occurred in 6% to 28% of ultrasound reports, depending on the report type. Side errors were rare. It was estimated that over 5 years, the software could save $693,777 in radiologist wages and $130,771 in sonographer wages, a total of $824,548 (range, $621,866-$1,039,714). The use of data integration software would both significantly reduce errors in ultrasound and dual-energy x-ray absorptiometry reports and save a considerable amount of time and money. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  15. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions, which occur at high error rates for short durations and can impact services. To completely describe a burst error one has to know the bit pattern. This is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumption, the design of performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic is an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using cyclic redundancy check (CRC) codes. The definition of a burst error is introduced using three different models. Among the three burst error models, the mathematical model is used in this study. The probability density function f(b) of a burst error of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model of burst errors within CRC blocks. The simulation result shows that the mean block burst error tends to approach the pattern of burst errors generated by random bit errors.
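
    The detection mechanism under study is easy to demonstrate: a degree-16 CRC detects every burst of length 16 or less. A bitwise CRC-16 sketch (CCITT polynomial; the frame contents are invented, and the paper's specific CRC-n variants may differ):

```python
def crc16(data: bytes, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16 (CCITT polynomial x^16 + x^12 + x^5 + 1)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = bytearray(b"example DS1 payload")
checksum = crc16(bytes(frame))

# Inject a burst error of length b = 6 bits inside one byte.
frame[3] ^= 0b00111111
print("burst detected:", crc16(bytes(frame)) != checksum)  # True: a CRC-16 catches all bursts <= 16 bits
```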

  16. Adaptive Error Resilience for Video Streaming

    Directory of Open Access Journals (Sweden)

    Lakshmi R. Siruvuri

    2009-01-01

    Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires that extra parity bits be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprised of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
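
    The feedback rule itself can be very simple. A hedged sketch of one possible policy (the block size, margin and loss rates are invented; an RS(n, k) erasure decoder recovers the k data symbols as long as no more than n - k symbols are lost):

```python
def parity_symbols_for(loss_rate, k=223, margin=1.5, n_max=255):
    """Pick a Reed-Solomon parity count so expected per-block losses (with a
    safety margin) stay below the n - k symbols an erasure decoder tolerates."""
    n = k
    while n < n_max:
        expected_losses = loss_rate * n
        if n - k >= margin * expected_losses + 1:
            return n - k
        n += 1
    return n_max - k

# Client-reported loss drives the parity budget up and down with channel conditions.
for reported_loss in (0.001, 0.01, 0.05):
    print(f"loss {reported_loss:.1%} -> {parity_symbols_for(reported_loss)} parity symbols")
```

    In a live system the client's reported loss rate replaces `reported_loss`, so parity overhead tracks channel conditions instead of being provisioned for the worst case.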

  17. Quantum bit commitment protocol without quantum memory

    OpenAIRE

    Ramos, Rubens Viana; Mendonca, Fabio Alencar

    2008-01-01

    Quantum protocols for bit commitment have been proposed, and it is largely accepted that unconditionally secure quantum bit commitment is not possible; however, it can be more secure than classical bit commitment. Despite its usefulness, quantum bit commitment protocols have not been experimentally implemented. The main reason is the fact that all proposed quantum bit commitment protocols require quantum memory. In this work, we show a quantum bit commitment protocol that does not requir...

  18. A 0.33 nJ/bit IEEE802.15.6/Proprietary MICS/ISM Wireless Transceiver With Scalable Data Rate for Medical Implantable Applications.

    Science.gov (United States)

    Ba, Ao; Vidojkovic, Maja; Kanda, Kouichi; Kiyani, Nauman F; Lont, Maarten; Huang, Xiongchuan; Wang, Xiaoyan; Zhou, Cui; Liu, Yao-Hong; Ding, Ming; Busze, Benjamin; Masui, Shoichi; Hamaminato, Makoto; Sato, Hiroyuki; Philips, Kathleen; de Groot, Harmke

    2015-05-01

    This paper presents an ultra-low power wireless transceiver specialized for, but not limited to, medical implantable applications. It operates in the 402-405-MHz medical implant communication service band, and also supports the 420-450-MHz industrial, scientific, and medical band. Being IEEE 802.15.6 standard compliant with additional proprietary modes, this highly configurable transceiver achieves data rates from 11 kb/s to 4.5 Mb/s, which covers the requirements of conventional implantable applications. The phase-locked loop-based transmitter architecture is adopted to support various modulation schemes within a limited power budget. The zero-IF receiver has programmable gain and bandwidth to accommodate different operation modes. Fabricated in 40-nm CMOS technology with a 1-V supply, this transceiver consumes only 1.78 mW for transmission and 1.49 mW for reception. The ultra-low power consumption, together with the 802.15.6-compliant performance in terms of modulation accuracy, sensitivity, and interference robustness, makes this transceiver competent for various implantable applications.

  19. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
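
    The Shannon bound invoked here is, for a binary symmetric channel with flip probability p, the capacity 1 - H2(p); code rates K/C below this value are achievable in principle. A quick numeric check of the standard formula (not specific to the codes in the paper):

```python
from math import log2

def bsc_capacity(p):
    """Capacity of the binary symmetric channel: 1 - H2(p) bits per channel use."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)

for p in (0.01, 0.05, 0.11):
    print(f"flip prob {p:.2f} -> capacity {bsc_capacity(p):.3f}")
```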

  20. Finding the right coverage : The impact of coverage and sequence quality on single nucleotide polymorphism genotyping error rates

    NARCIS (Netherlands)

    Fountain, Emily D.; Pauli, Jonathan N.; Reid, Brendan N.; Palsboll, Per J.; Peery, M. Zachariah

    Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.

  1. Error-rate estimation in discriminant analysis of non-linear longitudinal data: A comparison of resampling methods.

    Science.gov (United States)

    de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente

    2018-04-01

    Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and .632+), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of study it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we pay special attention to situations where only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
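
    The flavor of these estimators can be shown with the classic .632 combination of the optimistic resubstitution error and the pessimistic out-of-bag bootstrap error. The sketch below uses a nearest-centroid classifier on invented two-group data, not the paper's non-linear mixed-effects setting:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40                                            # deliberately small sample
X = np.vstack([rng.normal(0, 1, (n // 2, 3)), rng.normal(1, 1, (n // 2, 3))])
y = np.repeat([0, 1], n // 2)

def nearest_centroid_error(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return np.mean(pred != yte)

apparent = nearest_centroid_error(X, y, X, y)     # optimistic resubstitution error
oob_errors = []
for _ in range(200):                              # bootstrap out-of-bag estimate
    idx = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), idx)
    if oob.size and len(np.unique(y[idx])) == 2:
        oob_errors.append(nearest_centroid_error(X[idx], y[idx], X[oob], y[oob]))
err_632 = 0.368 * apparent + 0.632 * np.mean(oob_errors)
print(f"apparent {apparent:.3f}, .632 estimate {err_632:.3f}")
```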

  2. Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models

    DEFF Research Database (Denmark)

    Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl

    We propose a new estimator, the thresholded scaled Lasso, in high-dimensional threshold regressions. First, we establish an upper bound on the sup-norm estimation error of the scaled Lasso estimator of Lee et al. (2012). This is a non-trivial task as the literature on high-dimensional models has...... focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...

  3. A Holistic Approach to Bit Preservation

    DEFF Research Database (Denmark)

    Zierau, Eld Maj-Britt Olmütz

    2011-01-01

    This thesis presents three main results for a holistic approach to bit preservation, where the ultimate goal is to find the optimal bit preservation strategy for specific digital material that must be digitally preserved. Digital material consists of sequences of bits, where a bit is a binary digit...... which can have the value 0 or 1. Bit preservation must ensure that the bits remain intact and readable in the future, but bit preservation is not concerned with how bits can be interpreted as e.g. an image. A holistic approach to bit preservation includes aspects that influence the final choice of a bit...... preservation strategy. This can be aspects of how the permanent access to the digital material must be ensured. It can also be aspects of how the material must be treated as part of using it. This includes aspects related to how the digital material to be bit preserved is represented, as well as requirements...

  4. Bits and q-bits as versatility measures

    Directory of Open Access Journals (Sweden)

    José R.C. Piqueira

    2004-06-01

    Full Text Available Using Shannon information theory is a common strategy to measure any kind of variability in a signal or phenomenon. Some methods were developed to adapt information entropy measures to bird song data trying to emphasize its versatility aspect. This classical approach, using the concept of bit, produces interesting results. Now, the original idea developed in this paper is to use the quantum information theory and the quantum bit (q-bit) concept in order to provide a more complete vision of the experimental results.

  5. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
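
    The "code only the small encrypted portion" idea can be sketched with a Hamming(7,4) code protecting 4-bit chunks of the encrypted part while the rest of the frame is left uncoded. The matrices below are the standard systematic Hamming(7,4) pair; the nibble and the error position are invented:

```python
import numpy as np

G = np.array([[1,0,0,0,1,1,0],   # generator of systematic Hamming(7,4)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],   # matching parity-check matrix
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(nibble):
    return (nibble @ G) % 2

def decode(word):
    syndrome = (H @ word) % 2
    if syndrome.any():                      # locate and flip the single-bit error
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        word = word.copy()
        word[col] ^= 1
    return word[:4]                         # systematic code: data bits come first

nibble = np.array([1, 0, 1, 1])             # 4 bits of the encrypted portion
tx = encode(nibble)
tx[5] ^= 1                                  # channel flips one bit
print("recovered:", decode(tx), "matches:", (decode(tx) == nibble).all())
```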

  6. On the symbol error rate of M-ary MPSK over generalized fading channels with additive Laplacian noise

    KAUST Repository

    Soury, Hamza

    2014-06-01

    This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.
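
    A Monte Carlo cross-check of the kind of quantity such closed forms capture: 8-PSK with per-component Laplacian noise and minimum-distance detection. For simplicity the channel gain is fixed at 1 (no Generalized-K fading), so this only illustrates the noise-and-detector part of the setup, and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
M, n, snr_db = 8, 200_000, 15

symbols = np.exp(2j * np.pi * np.arange(M) / M)      # unit-energy MPSK constellation
tx_idx = rng.integers(0, M, n)
tx = symbols[tx_idx]

# Complex Laplacian noise: independent Laplace components per dimension.
snr = 10 ** (snr_db / 10)
scale = np.sqrt(1 / (4 * snr))                       # per-component variance 1/(2*snr)
noise = rng.laplace(0, scale, n) + 1j * rng.laplace(0, scale, n)
rx = tx + noise

# Minimum-distance detection against the constellation.
det_idx = np.argmin(np.abs(rx[:, None] - symbols[None, :]), axis=1)
print("simulated SER:", np.mean(det_idx != tx_idx))
```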

  7. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise

    KAUST Repository

    Soury, Hamza

    2015-01-07

    This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].

  8. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms during SNP discovery. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms was filtered to minimize 'bycatch' (polymorphisms caused by sequencing or assembly error). An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand-bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.

  9. Rater Stringency Error in Performance Rating: A Contrast of Three Models.

    Science.gov (United States)

    Cason, Gerald J.; Cason, Carolyn L.

    The use of three remedies for errors in the measurement of ability that arise from differences in rater stringency is discussed. Models contrasted are: (1) Conventional; (2) Handicap; and (3) deterministic Rater Response Theory (RRT). General model requirements, power, bias of measures, computing cost, and complexity are contrasted. Contrasts are…

  10. Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl

    2007-01-01

    The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance obtained by using feedback, via a cross-layer approach, over the satellite link is also simulated. The ne...

  11. Forward error correction and its impact on high-data-rate, free-space laser communication system design

    Science.gov (United States)

    Hemmati, F.; Paul, D. K.; Marshalek, R. G.

    1990-07-01

    This paper discusses the use of forward error correction (FEC) in a 300 to 1000 Mbit/s free-space optical communications link. It also considers the tradeoffs involved in applying block codes or convolutional codes, emphasizing the peak and average power limitations of GaAlAs diode laser sources. Direct-detection optical receivers are assumed throughout. The application of FEC technology to a high-data-rate optical communications system is discussed, including available coding gain, correction for both random errors and mispointing-induced burst errors, and electronic implementation difficulties. This is followed by a discussion of the major system benefits derivable from FEC. Consideration is given to using the available coding gain for reducing diode laser source power, aperture size, or fine tracking accuracy. Regarding optical system design, it is most favorable to apply the coding gain toward reducing diode laser power requirements.
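
    The budgeting logic behind these tradeoffs is plain dB arithmetic: a net coding gain translates one-for-one into reduced transmit power (or relaxed aperture and pointing requirements) at the same BER. For instance, with an assumed 5 dB gain:

```python
# Power scales as 10^(-gain_dB/10): a 5 dB coding gain permits ~3.2x less
# laser power (or an equivalent relaxation elsewhere in the link budget).
coding_gain_db = 5.0
power_ratio = 10 ** (-coding_gain_db / 10)
print(f"required power factor: {power_ratio:.3f} ({1 / power_ratio:.1f}x reduction)")
```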

  12. Joint adaptive modulation and diversity combining with feedback error compensation

    KAUST Repository

    Choi, Seyeong

    2009-11-01

    This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.
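
    A toy model can make the feedback-error mechanism concrete: the receiver picks a constellation from SNR thresholds and returns a 2-bit mode index over a feedback channel whose bits flip with some probability, so a corrupted index makes the transmitter use the wrong modulation. All modes, thresholds and error rates below are invented for the sketch and do not reproduce the paper's AMDC scheme or its compensation strategy.

    ```python
    import random

    MODES = [0, 1, 2, 4]                  # bits/symbol: outage, BPSK, QPSK, 16QAM
    THRESHOLDS = [-1e9, 5.0, 10.0, 15.0]  # assumed min SNR (dB) for each mode

    def chosen_mode(snr_db):
        # highest-rate mode whose SNR threshold is met
        return max(i for i, t in enumerate(THRESHOLDS) if snr_db >= t)

    def through_feedback(idx, fb_err):
        # each of the two feedback bits flips independently with prob. fb_err
        flips = (random.random() < fb_err) | ((random.random() < fb_err) << 1)
        return idx ^ flips

    random.seed(3)
    for fb_err in (0.0, 0.01, 0.05):
        used = [MODES[through_feedback(chosen_mode(random.gauss(10, 5)), fb_err)]
                for _ in range(20000)]
        print(f"feedback bit error {fb_err}: avg {sum(used)/len(used):.3f} bits/symbol")
    ```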

  13. Flexible Bit Preservation on a National Basis

    DEFF Research Database (Denmark)

    Jurik, Bolette; Nielsen, Anders Bo; Zierau, Eld

    2012-01-01

    In this paper we present the results from The Danish National Bit Repository project. The project aim was establishment of a system that can offer flexible and sustainable bit preservation solutions to Danish cultural heritage institutions. Here the bit preservation solutions must include support...... of bit safety as well as other requirements like e.g. confidentiality and availability. The Danish National Bit Repository is motivated by the need to investigate and handle bit preservation for digital cultural heritage. Digital preservation relies on the integrity of the bits which digital material...

  14. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong

    2011-11-25

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.

  15. A Framework for Interpreting Type I Error Rates from a Product-Term Model of Interaction Applied to Quantitative Traits.

    Science.gov (United States)

    Rao, Tara J; Province, Michael A

    2016-02-01

    Adequate control of type I error rates will be necessary in the increasing genome-wide search for interactive effects on complex traits. After observing unexpected variability in type I error rates from SNP-by-genome interaction scans, we sought to characterize this variability and test the ability of heteroskedasticity-consistent standard errors to correct it. We performed 81 SNP-by-genome interaction scans using a product-term model on quantitative traits in a sample of 1,053 unrelated European Americans from the NHLBI Family Heart Study, and additional scans on five simulated datasets. We found that the interaction-term genomic inflation factor (lambda) showed inflation and deflation that varied with sample size and allele frequency; that similar lambda variation occurred in the absence of population substructure; and that lambda was strongly related to heteroskedasticity but not to minor non-normality of phenotypes. Heteroskedasticity-consistent standard errors narrowed the range of lambda, with HC3 outperforming HC0, but in individual scans tended to create new P-value outliers related to sparse two-locus genotype classes. We explain the lambda variation as a result of non-independence of test statistics coupled with stochastic biases in test statistics due to a failure of the test to reach asymptotic properties. We propose that one way to interpret lambda is by comparison to an empirical distribution generated from data simulated under the null hypothesis and without population substructure. We further conclude that the interaction-term lambda should not be used to adjust test statistics and that heteroskedasticity-consistent standard errors come with limitations that may outweigh their benefits in this setting. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
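
    For reference, the interaction-term genomic inflation factor is simply the median observed 1-df chi-square statistic divided by its null median (about 0.4549). A minimal sketch follows, with a deliberately inflated simulated scan standing in for real interaction statistics; comparing the resulting lambda against values from null simulations is the interpretation route the authors propose.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    chi2_obs = rng.chisquare(df=1, size=50_000) * 1.08      # stand-in, inflated scan
    lam = np.median(chi2_obs) / stats.chi2.ppf(0.5, df=1)   # null median ~0.4549
    print(f"lambda = {lam:.3f}")                            # ~1.08
    ```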

  16. Cheat Sensitive Quantum Bit Commitment

    OpenAIRE

    Hardy, Lucien; Kent, Adrian

    1999-01-01

    We define cheat sensitive cryptographic protocols between mistrustful parties as protocols which guarantee that, if either cheats, the other has some nonzero probability of detecting the cheating. We give an example of an unconditionally secure cheat sensitive non-relativistic bit commitment protocol which uses quantum information to implement a task which is classically impossible; we also describe a simple relativistic protocol.

  17. Cheat sensitive quantum bit commitment.

    Science.gov (United States)

    Hardy, Lucien; Kent, Adrian

    2004-04-16

    We define cheat sensitive cryptographic protocols between mistrustful parties as protocols which guarantee that, if either cheats, the other has some nonzero probability of detecting the cheating. We describe an unconditionally secure cheat sensitive nonrelativistic bit commitment protocol which uses quantum information to implement a task which is classically impossible; we also describe a simple relativistic protocol.

  18. Hey! A Louse Bit Me!

    Science.gov (United States)

    ... of a sesame seed, and are tan to gray in color. Lice need to suck a tiny bit of blood to survive, and they sometimes live on people's heads and lay eggs in the hair, on the back of the neck, or behind ...

  19. Error rates of Belavkin weighted quantum measurements and a converse to Holevo's asymptotic optimality theorem

    Science.gov (United States)

    Tyson, Jon

    2009-03-01

    We compare several instances of pure-state Belavkin weighted square-root measurements from the standpoint of minimum-error discrimination of quantum states. The quadratically weighted measurement is proven superior to the so-called “pretty good measurement” (PGM) in a number of respects: (1) Holevo’s quadratic weighting unconditionally outperforms the PGM in the case of two-state ensembles, with equality only in trivial cases. (2) A converse of a theorem of Holevo is proven, showing that a weighted measurement is asymptotically optimal only if it is quadratically weighted. Counterexamples for three states are constructed. The cube-weighted measurement of Ballester, Wehner, and Winter is also considered. Sufficient optimality conditions for various weights are compared.

  20. Resident physicians' clinical training and error rate: the roles of autonomy, consultation, and familiarity with the literature.

    Science.gov (United States)

    Naveh, Eitan; Katz-Navon, Tal; Stern, Zvi

    2015-03-01

    Resident physicians' clinical training poses unique challenges for the delivery of safe patient care. Residents face special risks of involvement in medical errors since they have tremendous responsibility for patient care, yet they are novice practitioners in the process of learning and mastering their profession. The present study explores the relationships between residents' error rates and three clinical training methods: (1) progressive independence or level of autonomy, (2) consulting the physician on call, and (3) familiarity with up-to-date medical literature, and whether these relationships vary among the specialties of surgery and internal medicine and between novice and experienced residents. A total of 142 residents in 22 medical departments from two hospitals participated in the study. Results of hierarchical linear model analysis indicated that lower levels of autonomy, higher levels of consultation with the physician on call, and higher levels of familiarity with up-to-date medical literature were associated with lower resident error rates. The associations varied between internal and surgery specializations and novice and experienced residents. In conclusion, the study results suggested that the implicit curriculum that residents should be afforded autonomy and progressive independence with nominal supervision in accordance with their relevant skills and experience must be applied cautiously depending on specialization and experience. In addition, it is necessary to create a supportive and judgment-free climate within the department that may reduce a resident's hesitation to consult the attending physician.

  1. Efficient error estimation in quantum key distribution

    Science.gov (United States)

    Li, Mo; Treeviriyanupab, Patcharapong; Zhang, Chun-Mei; Yin, Zhen-Qiang; Chen, Wei; Han, Zheng-Fu

    2015-01-01

    In a quantum key distribution (QKD) system, the error rate needs to be estimated for determining the joint probability distribution between legitimate parties, and for improving the performance of key reconciliation. We propose an efficient error estimation scheme for QKD, called the parity comparison method (PCM). In the proposed method, the parities of groups of sifted key bits are analysed to estimate the quantum bit error rate, instead of using the traditional key sampling. The simulation results show that the proposed method clearly improves the accuracy and decreases the amount of information revealed in most realistic application situations. Project supported by the National Basic Research Program of China (Grant Nos. 2011CBA00200 and 2011CB921200) and the National Natural Science Foundation of China (Grant Nos. 61101137, 61201239, and 61205118).
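
    The parity-comparison idea can be sketched in a few lines: blocks whose parities disagree contain an odd number of errors, and under an i.i.d. error model (with mismatch fraction below 1/2) the mismatch rate inverts to a bit error rate. This is an illustration of the principle only, not the authors' exact PCM protocol.

    ```python
    import random

    def estimate_qber(alice, bob, k):
        """Estimate the QBER from block-parity mismatches (i.i.d. error model)."""
        n_blocks = len(alice) // k
        mismatch = sum(
            sum(alice[i*k:(i+1)*k]) % 2 != sum(bob[i*k:(i+1)*k]) % 2
            for i in range(n_blocks)
        )
        p = mismatch / n_blocks
        # P(parity mismatch) = (1 - (1 - 2e)^k) / 2 for bit error rate e; invert:
        return (1 - (1 - 2 * p) ** (1 / k)) / 2

    random.seed(1)
    alice = [random.randint(0, 1) for _ in range(100_000)]
    bob = [b ^ (random.random() < 0.03) for b in alice]             # 3% channel errors
    print(f"estimated QBER: {estimate_qber(alice, bob, k=8):.4f}")  # ~0.03
    ```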

  2. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons: dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
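
    A small simulation makes the attenuation problem and one correction visible. The sketch below observes a true proportion through binomial sampling at varying depth, shows the attenuated naive slope, and applies a crude method-of-moments beta-binomial shrinkage as a stand-in for the regression calibration variant discussed above; all parameters are invented for the illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m = 5000
    p = rng.beta(2, 5, size=m)                      # true methylation rates
    n = rng.integers(10, 40, size=m)                # sequencing depth per subject
    x = rng.binomial(n, p)
    w = x / n                                       # error-prone observed rate
    y = 1.0 + 2.0 * p + rng.normal(0, 0.5, size=m)  # outcome depends on true rate

    # method-of-moments Beta(a, b) for p: subtract the binomial noise component
    mu, v = w.mean(), w.var()
    v_p = v - np.mean(w * (1 - w) / n)              # approximate Var(p)
    c = mu * (1 - mu) / v_p - 1
    a, b = mu * c, (1 - mu) * c
    w_cal = (x + a) / (n + a + b)                   # posterior mean E[p | x]

    print("naive slope:     ", np.polyfit(w, y, 1)[0])      # < 2, attenuated
    print("calibrated slope:", np.polyfit(w_cal, y, 1)[0])  # close to 2
    print("oracle slope:    ", np.polyfit(p, y, 1)[0])      # ~2
    ```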

  3. Systematic Errors of the Efficiency Tracer Technique for Measuring the Absolute Disintegration Rates of Pure Beta Emitters

    International Nuclear Information System (INIS)

    Williams, A.; Goodier, I.W.

    1967-01-01

    A basic requirement of the theory of the efficiency tracer technique is the generally accepted assumption that there is a linear relationship between the efficiencies of the pure β-emitter and the tracer. However, an estimate of the inherent accuracy of the efficiency tracer technique has shown that, on theoretical grounds, this linear relationship would only be expected if the end-point energies and the shapes of the β-spectra of the tracer and pure β-emitter were identical, the departure from linearity depending upon the ratio of the respective end-point energies. An experimentally determined value of the absolute disintegration rate of the pure emitter, obtained using a linear relationship, would have a significant systematic error if this relationship were in fact non-linear, for the usual straight-line extrapolation to 100% efficiency for the tracer would have to be replaced by an extrapolation with a significant curvature. To look for any non-linearity in the relationship it is first necessary to reduce the random measurement errors to a minimum. The first part of the paper contains a derivation of an expression for the expected value of these random errors in terms of the known statistical errors in the measurement. This expression shows that the ratio of the pure β-emitter and tracer activities can be chosen to make the random errors a minimum. The second part of the paper shows that it is possible to obtain an experimental error, which is comparable to that predicted in the expression derived above, for a pure β-emitter and tracer, combined in the same chemical form, whose end-point energies are similar (e.g. 32P and 24Na). To look for any non-linearity in the relationship between pure β-emitter and tracer efficiencies, 35S (end-point energy E0 = 168 keV) was measured with 60Co (E0 = 310 keV) and 134Cs (effective E0 = 110 keV) as tracers. The results of these measurements showed that there was a significant curvature, of opposite sign, for the

  4. Parity Bit Replenishment for JPEG 2000-Based Video Streaming

    Directory of Open Access Journals (Sweden)

    François-Olivier Devaux

    2009-01-01

    Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful for adapting pre-encoded content on the fly to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance with the JPEG 2000 wavelet representation, particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also the spatial correlation among wavelet subband coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that compared to pure INTRA-based conditional replenishment solutions, the addition of the parity bits option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.

  5. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    Science.gov (United States)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    The iris recognition system is among the most secure and fastest means of identification and authentication. However, iris recognition suffers a setback from blurring, low contrast and illumination due to low-quality images, which compromises the accuracy of the system. The acceptance or rejection rate of a verified user depends solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore this paper adopts the Histogram Equalization Technique to address the problems of False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. A histogram equalization technique enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the Histogram Equalization Technique reduces FRR and FAR compared to the existing techniques.

  6. Error associated with model predictions of wildland fire rate of spread

    Science.gov (United States)

    Miguel G. Cruz; Martin E. Alexander

    2015-01-01

    How well can we expect to predict the spread rate of wildfires and prescribed fires? The degree of accuracy in model predictions of wildland fire behaviour characteristics is dependent on the model's applicability to a given situation, the validity of the model's relationships, and the reliability of the model input data (Alexander and Cruz 2013b). We...

  7. Error performance of digital subscriber lines in the presence of impulse noise

    Science.gov (United States)

    Kerpez, Kenneth J.; Gottlieb, Albert M.

    1995-05-01

    This paper describes the error performance of the ISDN basic access digital subscriber line (DSL), the high bit rate digital subscriber line (HDSL), and the asymmetric digital subscriber line (ADSL) in the presence of impulse noise. Results are found by using data from the 1986 NYNEX impulse noise survey in simulations. It is shown that a simple uncoded ADSL would have an order of magnitude more errored seconds than DSL and HDSL.

  8. A holistic approach to bit preservation

    DEFF Research Database (Denmark)

    Zierau, Eld

    2012-01-01

    Purpose: The purpose of this paper is to point out the importance of taking a holistic approach to bit preservation when setting out to find an optimal bit preservation solution for specific digital materials. In the last decade there has been an increasing awareness that bit preservation, which...... is to keep bits intact and readable, is far more complex than first anticipated, even in this narrow definition. This paper takes a more holistic approach to bit preservation, and looks at how an optimal bit preservation strategy can be found, when requirements like confidentiality, availability and costs...... are taken into account. Design/methodology/approach: The paper describes the various findings from previous research which have led to the holistic approach to bit preservation. This paper also includes an introduction to digital preservation with a focus on the role of bit preservation, which sets...

  9. Flexible Bit Preservation on a National Basis

    DEFF Research Database (Denmark)

    Jurik, Bolette; Nielsen, Anders Bo; Zierau, Eld

    2012-01-01

    In this paper we present the results from The Danish National Bit Repository project. The project aim was establishment of a system that can offer flexible and sustainable bit preservation solutions to Danish cultural heritage institutions. Here the bit preservation solutions must include support...... of bit safety as well as other requirements like e.g. confidentiality and availability. The Danish National Bit Repository is motivated by the need to investigate and handle bit preservation for digital cultural heritage. Digital preservation relies on the integrity of the bits which digital material...... consists of, and it is with this focus that the project was initiated. This paper summarizes the requirements for a general system to offer bit preservation to cultural heritage institutions. On this basis the paper describes the resulting flexible system which can support such requirements. The paper...

  10. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  11. Accuracy of cited “facts” in medical research articles: A review of study methodology and recalculation of quotation error rate

    Science.gov (United States)

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404

  12. The prevalence rates of refractive errors among children, adolescents, and adults in Germany

    OpenAIRE

    Sandra Jobke; Erich Kasten; Christian Vorwerk

    2008-01-01

    Sandra Jobke (1), Erich Kasten (2), Christian Vorwerk (3). (1) Institute of Medical Psychology and (3) Department of Ophthalmology, Otto-von-Guericke-University of Magdeburg, Magdeburg, Germany; (2) Institute of Medical Psychology, University Hospital Schleswig-Holstein, Luebeck, Germany. Purpose: The prevalence rates of myopia vary from 5% in Australian Aborigines to 84% in Hong Kong and Taiwan, 30% in Norwegian adults, and 49.5% in Swedish schoolchildren. The aim of this study was to determine the prevalence of ...

  13. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.

  14. Head and bit patterned media optimization at areal densities of 2.5 Tbit/in2 and beyond

    International Nuclear Information System (INIS)

    Bashir, M.A.; Schrefl, T.; Dean, J.; Goncharov, A.; Hrkac, G.; Allwood, D.A.; Suess, D.

    2012-01-01

    Global optimization of the writing head is performed using micromagnetics and surrogate optimization. The shape of the pole tip is optimized for bit patterned, exchange spring recording media. The media characteristics define the effective write field and the threshold values for the head field that acts at islands in the adjacent track. Once the required head field characteristics are defined, the pole tip geometry is optimized in order to achieve a high gradient of the effective write field while keeping the write field at the adjacent track below a given value. We computed the write error rate and the adjacent track erasure for different maximum anisotropy in the multilayer, graded media. The results show a linear trade-off between the error rate and the number of passes before erasure. For optimal head-media combinations we found a bit error rate of 10^-6 with 10^8 pass lines before erasure at 2.5 Tbit/in^2. - Research Highlights: → Global optimization of the writing head is performed using micromagnetics and surrogate optimization. → A method is provided to optimize the pole tip shape while maintaining the head field that acts in the adjacent tracks. → Patterned media structures providing an areal density of 2.5 Tbit/in^2 are discussed as a case study. → Media reliability is studied while taking into account the magnetostatic field interactions from neighbouring islands and adjacent track erasure under the influence of the head field.

  15. Capped bit patterned media for high density magnetic recording

    Science.gov (United States)

    Li, Shaojing; Livshitz, Boris; Bertram, H. Neal; Inomata, Akihiro; Fullerton, Eric E.; Lomakin, Vitaliy

    2009-04-01

    A capped composite patterned medium design is described which comprises an array of hard elements exchange coupled to a continuous cap layer. The role of the cap layer is to lower the write field of the individual hard element and introduce ferromagnetic exchange interactions between hard elements to compensate the magnetostatic interactions. Modeling results show significant reduction in the reversal field distributions caused by the magnetization states in the array which is important to prevent bit errors and increase achievable recording densities.

  16. Zooplankton filtering rates: error due to loss of radioisotopic label in chemically preserved samples

    International Nuclear Information System (INIS)

    Holtby, L.B.; Knoechel, R.

    1981-01-01

    Zooplankton fed 32P-labeled yeast or 14C-labeled algae were preserved with Formalin, ethanol, or Lugol's iodine and the subsequent loss of labeled materials was followed by analysis of sample filtrates. The commonly used combination of 32P-labeled yeast and Formalin preservation produced maximal loss in both magnitude and duration, reaching a value of 73% loss after 3 days; ethanol preservation resulted in only 5% loss for the same food. Lugol's iodine yielded the best results for animals fed 14C-labeled algae, resulting in a 40% loss that stabilized within 3 h. Nonchemical preservation (heat-killing and drying) produced filtering rates comparable with those of the best chemical preservative

  17. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    Science.gov (United States)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

    A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues in the State Budget of Indonesia comes from the tax sector. Meanwhile, the rate of inflation occurring in a country can be used as an indicator to measure the economic problems faced by the country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is the VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is the VECM with optimal lag 3. However, the VECM model with alpha 0.01 yielded four significant models: the income tax model, the inflation rate of Banda Aceh, the inflation rate of health, and the inflation rate of education in Banda Aceh. The VECM models with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on the VECM models, two structural IRF analyses are then formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
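
    For readers who want to reproduce this kind of analysis, the sketch below runs the same pipeline (lag selection, VECM fit, impulse responses) in statsmodels on synthetic stand-in series; the generated data, lag bounds, deterministic terms and cointegration rank are assumptions for illustration, not the paper's values.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, select_order

    # Synthetic stand-in for the monthly tax-revenue and inflation series: two
    # series sharing one stochastic trend, so cointegration rank 1 is plausible.
    rng = np.random.default_rng(42)
    trend = np.cumsum(rng.normal(size=120))
    data = pd.DataFrame({
        "tax_revenue": trend + rng.normal(scale=0.5, size=120),
        "inflation": 0.8 * trend + rng.normal(scale=0.5, size=120),
    })

    # lag selection by information criteria (the paper settles on lag 2 or 3,
    # depending on the alpha used in testing)
    print(select_order(data, maxlags=6, deterministic="ci").selected_orders)

    # fit the VECM and trace dynamic responses via the impulse response function
    res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
    print(res.irf(periods=12).irfs[:3])   # responses over the first periods
    ```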

  18. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
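
    The volatility proxy itself is straightforward to construct. The sketch below fits a GARCH(1,1) to synthetic return data with the third-party arch package (assumed installed) and takes the conditional variance as the stability regressor; the data are a stand-in, not the Turkish series used in the paper.

    ```python
    import numpy as np
    from arch import arch_model   # third-party `arch` package, assumed installed

    rng = np.random.default_rng(0)
    returns = 100 * rng.standard_t(df=6, size=108)      # stand-in monthly returns
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    volatility_proxy = res.conditional_volatility ** 2  # conditional variance
    print(volatility_proxy[:5])
    ```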

  19. Development of a jet-assisted polycrystalline diamond drill bit

    Energy Technology Data Exchange (ETDEWEB)

    Pixton, D.S.; Hall, D.R.; Summers, D.A.; Gertsch, R.E.

    1997-12-31

    A preliminary investigation has been conducted to evaluate the technical feasibility and potential economic benefits of a new type of drill bit. This bit transmits both rotary and percussive drilling forces to the rock face, and augments this cutting action with high-pressure mud jets. Both the percussive drilling forces and the mud jets are generated down-hole by a mud-actuated hammer. Initial laboratory studies show that increases in rate of penetration on the order of a factor of two over unaugmented rotary and/or percussive drilling rates are possible with jet assistance.

  20. Error rate on the director's task is influenced by the need to take another's perspective but not the type of perspective.

    Science.gov (United States)

    Legg, Edward W; Olivier, Laure; Samuel, Steven; Lurz, Robert; Clayton, Nicola S

    2017-08-01

    Adults are prone to responding erroneously to another's instructions based on what they themselves see and not what the other person sees. Previous studies have indicated that in instruction-following tasks participants make more errors when required to infer another's perspective than when following a rule. These inference-induced errors may occur because the inference process itself is error-prone or because they are a side effect of the inference process. Crucially, if the inference process is error-prone, then higher error rates should be found when the perspective to be inferred is more complex. Here, we found that participants were no more error-prone when they had to judge how an item appeared (Level 2 perspective-taking) than when they had to judge whether an item could or could not be seen (Level 1 perspective-taking). However, participants were more error-prone in the perspective-taking variants of the task than in a version that only required them to follow a rule. These results suggest that having to represent another's perspective induces errors when following their instructions but that error rates are not directly linked to errors in inferring another's perspective.

  1. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    Science.gov (United States)

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. This was a pre- and post-intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparation events were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% reduction in medication selection and preparation errors (1.96% versus 0.69%, respectively, P = 0.017). All medication error types were reduced in the post-intervention study period. There was no significant impact on medication error severity, as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  2. Dispersion Tolerance of 40 Gbaud Multilevel Modulation Formats with up to 3 bits per Symbol

    DEFF Research Database (Denmark)

    Jensen, Jesper Bevensee; Tokle, Torger; Geng, Yan

    2006-01-01

    We present numerical and experimental investigations of dispersion tolerance for multilevel phase- and amplitude modulation with up to 3 bits per symbol at a symbol rate of 40 Gbaud.

  3. Modeling for write synchronization in bit patterned media recording

    Science.gov (United States)

    Lin, Maria Yu; Chan, Kheong Sann; Chua, Melissa; Zhang, Songhua; Kui, Cai; Elidrissi, Moulay Rachid

    2012-04-01

    Bit patterned media recording (BPMR) is a contender for next-generation technology after conventional granular magnetic recording (CGMR) can no longer sustain the continued areal density growth. BPMR has several technological hurdles that need to be overcome, among them the problem of write synchronization. With CGMR, grains are randomly distributed and occur almost all over the media. In contrast, BPMR has grains patterned into a regular lattice on the media with an approximately 50% duty cycle. Hence only about a quarter of the area is filled with magnetic material. During writing, the clock must be synchronized to the islands or the written-in error rate becomes unacceptably large and the system fails. Maintaining synchronization during writing is a challenge as the system is not able to read and write simultaneously. Hence reading must occur periodically between the writes, frequently enough to re-synchronize the writing clock to the islands. In this work, we study the requirements on the lengths of the synchronization and data sectors in a BPMR system using an advanced model for BPMR, taking into consideration different spindle motor speed variations, which are the main cause of mis-synchronization.
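
    A back-of-envelope version of the synchronization constraint is useful for intuition (every number below is an illustrative assumption, not a value from the paper's model):

    ```python
    # With a fractional write-clock error df (from spindle speed variation), the
    # timing drift after N island periods is about N*df island pitches, so the
    # writer must re-synchronize before the drift consumes its timing margin.
    df = 1e-4        # 0.01% spindle-speed error (assumption)
    margin = 0.25    # tolerable drift, in island pitches (assumption)
    print(f"re-synchronize at least every {margin / df:.0f} islands")  # 2500
    ```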

  4. Measurement properties of visual rating of postural orientation errors of the lower extremity - A systematic review and meta-analysis.

    Science.gov (United States)

    Nae, Jenny; Creaby, Mark W; Cronström, Anna; Ageberg, Eva

    2017-09-01

    To systematically review measurement properties of visual assessment and rating of Postural Orientation Errors (POEs) in participants with or without lower extremity musculoskeletal disorders. A systematic review according to the PRISMA guidelines was conducted. The search was performed in Medline (Pubmed), CINAHL and EMBASE (OVID) databases until August 2016. Studies reporting measurement properties for visual rating of postural orientation during the performance of weight-bearing functional tasks were included. No limits were placed on participant age, sex or whether they had a musculoskeletal disorder affecting the lower extremity. Twenty-eight articles were included, 5 of which included populations with a musculoskeletal disorder. Visual rating of the knee-medial-to-foot position (KMFP) was reliable within and between raters, and meta-analyses showed that this POE was valid against 2D and 3D kinematics in asymptomatic populations. Other segment-specific POEs showed either poor to moderate reliability or there were too few studies to permit synthesis. Intra-rater reliability was at least moderate for POEs within a task whereas inter-rater reliability was at most moderate. Visual rating of KMFP appears to be valid and reliable in asymptomatic adult populations. Measurement properties remain to be determined for POEs other than KMFP. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Comparison of the effect of paper and computerized procedures on operator error rate and speed of performance

    International Nuclear Information System (INIS)

    Converse, S.A.; Perez, P.B.; Meyer, S.; Crabtree, W.

    1994-01-01

    The Computerized Procedures Manual (COPMA-II) is an advanced procedure manual that can be used to select and execute procedures, to monitor the state of plant parameters, and to help operators track their progress through plant procedures. COPMA-II was evaluated in a study that compared the speed and accuracy of operators' performance when they performed with COPMA-II and traditional paper procedures. Sixteen licensed reactor operators worked in teams of two to operate the Scales Pressurized Water Reactor Facility at North Carolina State University. Each team performed one change of power with each type of procedure to simulate performance under normal operating conditions. Teams then performed one accident scenario with COPMA-II and one with paper procedures. Error rates, performance times, and subjective estimates of workload were collected, and were evaluated for each combination of procedure type and scenario type. For the change of power task, accuracy and response time were not different for COPMA-II and paper procedures. Operators did initiate responses to both accident scenarios fastest with paper procedures. However, procedure type did not moderate response completion time for either accident scenario. For accuracy, performance with paper procedures resulted in twice as many errors as did performance with COPMA-II. Subjective measures of mental workload for the accident scenarios were not affected by procedure type

  6. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
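
    The inflation is easy to reproduce in a small Monte Carlo. In the sketch below both groups share the same mean, so for a test of location every rejection is a Type I error; the skewed distributions, variance ratio and sample size are chosen for illustration and are not the article's exact settings.

    ```python
    import numpy as np
    from scipy import stats

    # Both groups have mean 1.0 but variances 1 and 16, with equal sample sizes.
    rng = np.random.default_rng(0)
    n, reps, alpha = 20, 20_000, 0.05
    rej_t = rej_w = 0
    for _ in range(reps):
        a = rng.exponential(1.0, n)            # skewed, mean 1, variance 1
        b = rng.exponential(4.0, n) - 3.0      # skewed, mean 1, variance 16
        rej_t += stats.ttest_ind(a, b).pvalue < alpha   # pooled-variance t test
        rej_w += stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
    print(f"t test: {rej_t/reps:.3f}   Wilcoxon-Mann-Whitney: {rej_w/reps:.3f}")
    ```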

  7. A Novel Digital Background Calibration Technique for 16 bit SHA-less Multibit Pipelined ADC

    Directory of Open Access Journals (Sweden)

    Swina Narula

    2016-01-01

    Full Text Available In this paper, a high-resolution (16-bit), high-speed (125 MS/s) multibit pipelined ADC with digital background calibration is presented. In order to achieve low power, a SHA-less front end is used with multibit stages. The first and second stages are 3.5-bit stages, the third to seventh stages are 2.5-bit stages, and the last stage is a 3-bit flash ADC. After bit alignment and truncation of the total 19 bits, 16 bits are used as the final digital output. To remove the linear gain error of the residue amplifier and the capacitor mismatch error precisely, a digital background calibration technique is used, which is a combination of signal-dependent dithering (SDD) and a butterfly shuffler. To improve the settling time of the residue amplifier, a special voltage-separation circuit is used. With the proposed digital background calibration technique, the spurious-free dynamic range (SFDR) has been improved to 97.74 dB @ 30 MHz and 88.9 dB @ 150 MHz, and the signal-to-noise and distortion ratio (SNDR) has been improved to 79.77 dB @ 30 MHz and 73.5 dB @ 150 MHz. The pipelined ADC has been implemented in a 0.18 μm CMOS process with a 1.8 V supply. The total power consumption of the proposed ADC is 300 mW.
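
    The flavor of dither-based background gain calibration can be shown in a few lines. The code below correlates a known pseudo-random dither against a grossly simplified stage output to estimate an erroneous residue-amplifier gain; it is a generic toy model, not the paper's SDD-plus-butterfly-shuffler pipeline.

    ```python
    import numpy as np

    # A known +/-Delta pseudo-random dither rides through the erroneous gain
    # stage; because signal and dither are uncorrelated, correlating the output
    # against the dither isolates the actual gain, which can then be corrected
    # digitally while normal conversion continues.
    rng = np.random.default_rng(0)
    n = 200_000
    signal = rng.uniform(-0.4, 0.4, n)
    dither = rng.choice([-0.05, 0.05], n)       # known pseudo-random sequence
    true_gain, ideal_gain = 3.96, 4.0           # residue-amplifier gain error
    output = true_gain * (signal + dither)      # grossly simplified stage model

    gain_est = np.mean(output * dither) / np.mean(dither ** 2)
    corrected = output * (ideal_gain / gain_est)
    print(f"estimated gain: {gain_est:.4f}")    # ~3.96
    ```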

  8. Some observations on the Bit-Search Generator

    OpenAIRE

    Mitchell, Chris J.

    2005-01-01

    In this short note an alternative definition of the Bit-Search Generator (BSG) is provided. This leads to a discussion of both the security of the BSG and ways in which it might be modified to either improve its rate or increase its security.

  9. Tb/s physical random bit generation with bandwidth-enhanced chaos in three-cascaded semiconductor lasers.

    Science.gov (United States)

    Sakuraba, Ryohsuke; Iwakawa, Kento; Kanno, Kazutaka; Uchida, Atsushi

    2015-01-26

    We experimentally demonstrate fast physical random bit generation from bandwidth-enhanced chaos by using three-cascaded semiconductor lasers. The bandwidth-enhanced chaos is obtained with the standard bandwidth of 35.2 GHz, the effective bandwidth of 26.0 GHz and the flatness of 5.6 dB, whose waveform is used for random bit generation. Two schemes of single-bit and multi-bit extraction methods for random bit generation are carried out to evaluate the entropy rate and the maximum random bit generation rate. For single-bit generation, the generation rate at 20 Gb/s is obtained for physical random bit sequences. For multi-bit generation, the maximum generation rate at 1.2 Tb/s ( = 100 GS/s × 6 bits × 2 data) is equivalently achieved for physical random bit sequences whose randomness is verified by using both NIST Special Publication 800-22 and TestU01.
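
    The multi-bit extraction step amounts to keeping several least significant bits of each digitized sample, so one sample yields several output bits. In the sketch below the samples are placeholder software-generated integers; in the experiment the entropy comes from the digitized chaotic laser intensity, and 6 retained bits per 100 GS/s sample across two data streams give the quoted 1.2 Tb/s.

    ```python
    import numpy as np

    def extract_bits(samples, n_bits=6):
        """Keep the n_bits least significant bits of each 8-bit sample and
        unpack them, so every sample contributes n_bits output bits."""
        codes = samples & ((1 << n_bits) - 1)
        return [(int(c) >> i) & 1 for c in codes for i in range(n_bits)]

    rng = np.random.default_rng(7)
    samples = rng.integers(0, 256, size=1000, dtype=np.uint8)  # placeholder only
    bits = extract_bits(samples, n_bits=6)
    print(len(bits), sum(bits) / len(bits))    # 6000 bits, ones ratio near 0.5
    ```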

  10. In bits, bytes and stone

    DEFF Research Database (Denmark)

    Sabra, Jakob Borrits; Andersen, Hans Jørgen

    designs'. Urns, coffins, graves, cemeteries, memorials, monuments, websites, applications and software services, whether cut in stone or made of bits, are all influenced by discourses of publics, economics, power, technology and culture. Designers, programmers, stakeholders and potential end-users often....... The findings in this paper are contextualized through a qualitative ethnographic research design based on Danish cemetery users and mourners and their different experiences with and attitudes towards new online grief, mourning and remembrance designs, platforms, services and initiatives. Additionally...... constitute parts of an intricately weaved and interrelated network of practices dealing with death, mourning, memorialization and remembrance. Design pioneering company IDEO'S recent failed attempt to 'redesign death' is an example of how delicate and difficult it is to work with digital and symbolic 'death...

  11. FastBit Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng

    2007-08-02

    An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
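
    The core mechanism is compact enough to sketch: one bitmask per distinct column value, with multi-dimensional queries answered by bitwise logic over the masks. The toy below builds uncompressed bitmaps; FastBit's WAH compression of these bitmaps (and its fast logical operations on the compressed form) is omitted.

    ```python
    # Minimal sketch of an (uncompressed) bitmap index over toy columns.
    def build_bitmap_index(column):
        index = {}
        for row, value in enumerate(column):
            index.setdefault(value, 0)
            index[value] |= 1 << row          # set bit `row` in value's bitmap
        return index

    colors = ["red", "blue", "red", "green", "blue", "red"]
    sizes = ["S", "S", "L", "L", "S", "S"]
    by_color, by_size = build_bitmap_index(colors), build_bitmap_index(sizes)

    # query: color == "red" AND size == "S"  ->  a single bitwise AND
    hits = by_color["red"] & by_size["S"]
    print([row for row in range(len(colors)) if hits >> row & 1])  # [0, 5]
    ```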

  12. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  13. Bit and Power Loading Approach for Broadband Multi-Antenna OFDM System

    DEFF Research Database (Denmark)

    Rahman, Muhammad Imadur; Das, Suvra S.; Wang, Yuanye

    2007-01-01

    In this work, we have studied bit and power allocation strategies for multi-antenna assisted Orthogonal Frequency Division Multiplexing (OFDM) systems and investigated the impact of different rates of bit and power allocations on various multi-antenna diversity schemes. It is observed that, if we...... allocations across OFDM sub-channels are required together for efficient exploitation of wireless channel....

  14. CAMAC based 4-channel 12-bit digitizer

    International Nuclear Information System (INIS)

    Srivastava, Amit K; Sharma, Atish; Raval, Tushar; Reddy, D Chenna

    2010-01-01

    With the development of fusion research, a large number of diagnostics are being used to understand the complex behaviour of plasma. During a discharge, several diagnostics demand a high sampling rate and high bit resolution to acquire data for rapid changes in plasma parameters. For the requirements of such fast diagnostics, a 4-channel simultaneous sampling, high-speed, 12-bit CAMAC digitizer has been designed and developed which has several important features for application in CAMAC-based nuclear instrumentation. The module has an independent ADC per channel for simultaneous sampling and digitization, and 512 Ksamples of RAM per channel for on-board storage. The digitizer has been designed for event-based acquisition, and the acquisition window gives post-trigger as well as pre-trigger (software selectable) data that is useful for analysis. It is a transient digitizer and can be operated either in pre/post trigger mode or in burst mode. The record mode and the active memory size are selected through software commands to satisfy the current application. The module can be used to acquire data at a high sampling rate for a short discharge, e.g. 512 ms at 1 MSPS. The module can also be used for a long discharge at a low sampling rate, e.g. 512 seconds at 1 kSPS. This paper describes the design of the digitizer module, the development of VHDL code for the hardware logic, the Graphical User Interface (GUI), and important features of the module from an application point of view. The digitizer has CPLD-based hardware logic, which provides flexibility in configuring the module for different sampling rates and different pre/post trigger samples through the GUI. The digitizer can be operated with either internal (for testing/acquisition) or external (synchronized acquisition) clock and trigger. The digitizer has differential inputs with a bipolar input range of ±5 V and is being used at a sampling rate of 1 MSamples Per Second (MSPS) per channel, but it also supports sampling rates up to 3 MSPS per channel. A

  15. Beamforming under Quantization Errors in Wireless Binaural Hearing Aids

    Directory of Open Access Journals (Sweden)

    Srinivasan Sriram

    2008-01-01

    Full Text Available Improving the intelligibility of speech in different environments is one of the main objectives of hearing aid signal processing algorithms. Hearing aids typically employ beamforming techniques using multiple microphones for this task. In this paper, we discuss a binaural beamforming scheme that uses signals from the hearing aids worn on both the left and right ears. Specifically, we analyze the effect of a low bit rate wireless communication link between the left and right hearing aids on the performance of the beamformer. The scheme is comprised of a generalized sidelobe canceller (GSC) that has two inputs: observations from one ear, and quantized observations from the other ear, and whose output is an estimate of the desired signal. We analyze the performance of this scheme in the presence of a localized interferer as a function of the communication bit rate using the resultant mean-squared error as the signal distortion measure.
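
    The bit-rate dependence of the quantization penalty can be illustrated with plain uniform quantization, where the added noise variance is about Δ²/12 per sample and shrinks roughly fourfold per extra bit; the sketch below is a generic illustration of that trade, not the paper's binaural GSC model.

    ```python
    import numpy as np

    # Uniform quantization of a unit-variance signal at a few bit depths: the
    # measured MSE tracks the Delta^2/12 approximation and falls ~4x per bit.
    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)
    for bits in (2, 4, 6, 8):
        delta = (x.max() - x.min()) / 2**bits
        xq = np.round(x / delta) * delta
        mse = np.mean((x - xq) ** 2)
        print(f"{bits} bits/sample: MSE {mse:.2e} (Delta^2/12 = {delta**2/12:.2e})")
    ```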

  16. Beamforming under Quantization Errors in Wireless Binaural Hearing Aids

    Directory of Open Access Journals (Sweden)

    Kees Janse

    2008-09-01

    Full Text Available Improving the intelligibility of speech in different environments is one of the main objectives of hearing aid signal processing algorithms. Hearing aids typically employ beamforming techniques using multiple microphones for this task. In this paper, we discuss a binaural beamforming scheme that uses signals from the hearing aids worn on both the left and right ears. Specifically, we analyze the effect of a low bit rate wireless communication link between the left and right hearing aids on the performance of the beamformer. The scheme is comprised of a generalized sidelobe canceller (GSC) that has two inputs: observations from one ear, and quantized observations from the other ear, and whose output is an estimate of the desired signal. We analyze the performance of this scheme in the presence of a localized interferer as a function of the communication bit rate using the resultant mean-squared error as the signal distortion measure.

  17. Multi-bit wavelength coding phase-shift-keying optical steganography based on amplified spontaneous emission noise

    Science.gov (United States)

    Wang, Cheng; Wang, Hongxiang; Ji, Yuefeng

    2018-01-01

    In this paper, a multi-bit wavelength coding phase-shift-keying (PSK) optical steganography method is proposed based on amplified spontaneous emission noise and a wavelength selection switch. In this scheme, the assignment codes and the delay length differences provide a large two-dimensional key space. A 2-bit wavelength coding PSK system is simulated to show the efficiency of our proposed method. The simulation results demonstrate that the stealth signal, after being encoded and modulated, is well hidden in both the time and spectral domains beneath the public channel and the noise present in the system. Moreover, even if the principle of this scheme and the existence of the stealth channel are known to an eavesdropper, the probability of recovering the stealth data is less than 0.02 if the key is unknown. The scheme can therefore protect the security of the stealth channel more effectively. Furthermore, the stealth channel results in a 0.48 dB power penalty to the public channel at a 1 × 10⁻⁹ bit error rate, and the public channel has no influence on the reception of the stealth channel.

  18. Improved Energy Efficiency for Optical Transport Networks by Elastic Forward Error Correction

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Yankov, Metodi Plamenov; Berger, Michael Stübert

    2014-01-01

    In this paper we propose a scheme for reducing the energy consumption of optical links by means of adaptive forward error correction (FEC). The scheme works by performing on-the-fly adjustments to the code rate of the FEC, adding extra parity bits to the data stream whenever extra capacity … The scheme is designed to work as a transparent add-on to transceivers running the optical transport network (OTN) protocol, adding an extra layer of elastic soft-decision FEC to the built-in hard-decision FEC implemented in OTN, while retaining interoperability with existing OTN equipment. In order to facilitate … the balance between effective data rate and FEC coding gain without any disruption to the live traffic. As a consequence, these automatic adjustments can be performed very often, based on the current traffic demand and the bit error rate performance of the links through the network. The FEC scheme itself …
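
    The abstract describes on-the-fly code-rate adjustments driven by spare capacity and measured pre-FEC bit error rate. The control logic below is a hypothetical sketch of that idea only; the overhead values and BER thresholds are illustrative and are not taken from the paper.

    ```python
    def choose_overhead(spare_capacity, pre_fec_ber,
                        overheads=(0.0, 0.07, 0.15, 0.25)):
        """Pick an extra-parity overhead that (a) fits in the spare capacity
        left by current traffic demand and (b) matches the measured pre-FEC
        bit error rate. All thresholds are illustrative, not from the paper."""
        # Hypothetical mapping: worse channels justify more overhead.
        if pre_fec_ber < 1e-6:
            needed = 0.0
        elif pre_fec_ber < 1e-4:
            needed = 0.07
        elif pre_fec_ber < 1e-3:
            needed = 0.15
        else:
            needed = 0.25
        feasible = [oh for oh in overheads if oh <= spare_capacity]
        return min(max(feasible), needed) if feasible else 0.0

    print(choose_overhead(spare_capacity=0.20, pre_fec_ber=5e-4))  # -> 0.15
    ```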

  19. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome-Wide Association Studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. The type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.

  20. Hey! A Mosquito Bit Me! (For Kids)

    Science.gov (United States)


  1. Pyrosequencing as a tool for the detection of Phytophthora species: error rate and risk of false Molecular Operational Taxonomic Units.

    Science.gov (United States)

    Vettraino, A M; Bonants, P; Tomassini, A; Bruni, N; Vannini, A

    2012-11-01

    To evaluate the accuracy of pyrosequencing for the description of Phytophthora communities in terms of taxa identification and the risk of assigning false Molecular Operational Taxonomic Units (MOTUs). Pyrosequencing of Internal Transcribed Spacer 1 (ITS1) amplicons was used to describe the structure of a DNA mixture comprising eight Phytophthora spp. and Pythium vexans. Pyrosequencing resulted in 16 965 reads, detecting all species in the template DNA mixture. Reducing the ITS1 sequence identity threshold resulted in a decrease in the number of unmatched reads but a concomitant increase in the number of false MOTUs. The total error rate was 0·63% and comprised mainly mismatches (0·25%). Pyrosequencing of the ITS1 region is an efficient and accurate technique for the detection and identification of Phytophthora spp. in environmental samples. However, the risk of allocating false MOTUs, even when demonstrated to be low, may require additional validation with alternative detection methods. Phytophthora spp. are considered among the most destructive groups of invasive plant pathogens, affecting thousands of cultivated and wild plants worldwide. Simultaneous early detection of Phytophthora complexes in environmental samples offers a unique opportunity for the interception of known and unknown species along pathways of introduction, along with the identification of these organisms in invaded environments. © 2012 The Authors Letters in Applied Microbiology © 2012 The Society for Applied Microbiology.

  2. Pulse Sign Separation Technique for the Received Bits in Wireless Ultra-Wideband Combination Approach

    Directory of Open Access Journals (Sweden)

    Rashid A. Fayadh

    2014-01-01

    Full Text Available When receiving at high data rates in ultra-wideband (UWB) technology, many users experience multiple-user interference and intersymbol interference in the multipath reception technique. Structures have been proposed for implementing rake receivers to enhance their capabilities by reducing the bit error probability (Pe), thereby providing better performance for indoor and outdoor multipath receivers. As a result, several rake structures have been proposed in the past to reduce the number of resolvable paths that must be estimated and combined. To achieve this aim, we suggest two maximal ratio combiners based on the pulse sign separation technique, the pulse sign separation selective combiner (PSS-SC) and the pulse sign separation partial combiner (PSS-PC), to reduce complexity with fewer fingers and to improve the system performance. In the combiners, a comparator was added to compare the positive quantity of positive pulses and the negative quantity of negative pulses to decide whether the transmitted bit was 1 or 0. The Pe was obtained by simulation for multipath environments with impulse radio time-hopping binary phase shift keying (TH-BPSK) modulation, and the results were compared with those of conventional selective combiners (C-SCs) and conventional partial combiners (C-PCs).
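
    The comparator decision rule quoted above (compare the total magnitude of the positive finger outputs against that of the negative ones) can be written directly. This sketch assumes the rake finger outputs are already available as a numpy array; it illustrates the decision step only, not the full PSS-SC/PSS-PC receivers.

    ```python
    import numpy as np

    def pss_decision(fingers):
        """Pulse sign separation decision: bit 1 if the positive finger
        outputs outweigh the negative ones, else bit 0."""
        pos = fingers[fingers > 0].sum()
        neg = -fingers[fingers < 0].sum()
        return 1 if pos >= neg else 0

    print(pss_decision(np.array([0.8, -0.1, 0.3, -0.4])))  # -> 1
    ```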

  3. Error rate of multi-level rapid prototyping trajectories for pedicle screw placement in lumbar and sacral spine

    Directory of Open Access Journals (Sweden)

    Merc Matjaz

    2014-10-01

    Full Text Available Objective: Free-hand pedicle screw placement has a high incidence of pedicle perforation, which can be reduced with fluoroscopy, navigation, or an alternative rapid prototyping drill guide template. In our study the error rate of multi-level templates for pedicle screw placement in the lumbar and sacral regions was evaluated. Methods: A case series study was performed on 11 patients. Seventy-two screws were implanted using multi-level drill guide templates manufactured with selective laser sintering. According to the optimal screw direction defined preoperatively, an analysis of screw misplacement was performed. Displacement, deviation and screw length difference were measured. The learning curve was also estimated. Results: Twelve screws (17%) were placed more than 3.125 mm out of their optimal position in the centre of the pedicle. The tips of 16 screws (22%) were misplaced more than 6.25 mm out of the predicted optimal position. According to our predefined goal, 19 screws (26%) were implanted inaccurately. In 10 cases the screw length was selected incorrectly: 1 screw (1%) was too long and 9 (13%) were too short. No clinical signs of neurovascular lesion were observed. The learning curve was insignificantly noticeable (P=0.129). Conclusion: In our study, the procedure of manufacturing and applying multi-level drill guide templates has a 26% chance of screw misplacement. However, that rate does not coincide with pedicle perforation incidence and neurovascular injury. These facts, along with a comparison to compatible studies, make it possible to conclude that multi-level templates are satisfactorily accurate and allow precise screw placement with a clinically irrelevant error factor. Therefore templates could potentially represent a useful tool for routine pedicle screw placement. Key words: Drill guide; Template; Inaccuracy; Perforation; Radiation exposure

  4. Development of a Tool Condition Monitoring System for Impregnated Diamond Bits in Rock Drilling Applications

    Science.gov (United States)

    Perez, Santiago; Karakus, Murat; Pellet, Frederic

    2017-05-01

    The great success and widespread use of impregnated diamond (ID) bits are due to their self-sharpening mechanism, which consists of a constant renewal of diamonds acting at the cutting face as the bit wears out. It is therefore important to keep this mechanism acting throughout the lifespan of the bit. Nonetheless, such a mechanism can be altered by the blunting of the bit, which ultimately leads to less than optimal drilling performance. For this reason, this paper investigates the applicability of artificial intelligence-based techniques for monitoring the tool condition of ID bits, i.e. sharp or blunt, under laboratory conditions. Accordingly, topologically invariant tests are carried out with sharp and blunt bit conditions while recording acoustic emissions (AE) and measuring-while-drilling variables. The combined output of the acoustic emission root-mean-square value (AErms), depth of cut (d), torque (tob) and weight-on-bit (wob) is then utilized to create two approaches for predicting the wear state of the bits. One approach is based on the combination of the aforementioned variables and another on the specific energy of drilling. The two approaches are assessed for classification performance with various pattern recognition algorithms, such as simple trees, support vector machines, k-nearest neighbour, boosted trees and artificial neural networks. In general, acceptable pattern recognition rates were obtained, although the subset composed of AErms and tob excels due to its high classification performance and fewer input variables.
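
    As a flavor of the pattern-recognition step, the scikit-learn sketch below trains a support vector machine on the four features named in the abstract (AErms, depth of cut, torque and weight-on-bit). The synthetic data is a stand-in for the laboratory measurements, so all numbers are illustrative only.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # Synthetic stand-in: columns = [AErms, depth of cut, torque, weight-on-bit]
    sharp = rng.normal([1.0, 2.0, 5.0, 3.0], 0.3, size=(200, 4))
    blunt = rng.normal([1.6, 1.2, 7.0, 3.5], 0.3, size=(200, 4))
    X = np.vstack([sharp, blunt])
    y = np.array([0] * 200 + [1] * 200)          # 0 = sharp, 1 = blunt

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```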

  5. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed …
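
    The MSB/LSB mapping in the proposed approach is plain byte surgery, sketched below with numpy. This is a minimal illustration; the subsequent 8-bit encoding and the handling of compression artifacts in the LSB image are the hard part and are not shown.

    ```python
    import numpy as np

    def split_16bit(img16):
        """Map a 16-bit image to two 8-bit images: most significant and
        least significant bytes, as described in the abstract."""
        img16 = img16.astype(np.uint16)
        msb = (img16 >> 8).astype(np.uint8)
        lsb = (img16 & 0xFF).astype(np.uint8)
        return msb, lsb

    def merge_16bit(msb, lsb):
        """Lossless reconstruction of the 16-bit image from its byte planes."""
        return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

    img = np.array([[0, 4660, 65535]], dtype=np.uint16)   # 4660 = 0x1234
    msb, lsb = split_16bit(img)
    assert np.array_equal(merge_16bit(msb, lsb), img)
    ```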

  6. Comparison of Applications Using the Camellia Method with 128-Bit and 256-Bit Keys

    Directory of Open Access Journals (Sweden)

    Lanny Sutanto

    2014-01-01

    Full Text Available The rapid development of the Internet today makes it easy to exchange data. This leads to a high risk of data piracy. One of the ways to secure data is to use Camellia cryptography. Camellia is known as a method with fast encryption and decryption times. The Camellia method supports three key sizes: 128 bits, 192 bits, and 256 bits. This application was created using the C++ programming language with a Visual Studio 2010 GUI. This research compares the smallest and largest key sizes on files with the extensions .txt, .doc, .docx, .jpg, .mp4, .mkv and .flv. The application was made to compare the time and the level of security when using a 128-bit key and a 256-bit key. The comparison is done by comparing the avalanche-effect security values of the 128-bit key and the 256-bit key.

  7. Semifragile Speech Watermarking Based on Least Significant Bit Replacement of Line Spectral Frequencies

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Nematollahi

    2017-01-01

    Full Text Available There are various techniques for speech watermarking based on modifying the linear prediction coefficients (LPCs); however, the estimated and modified LPCs vary from each other even without attacks. Because the line spectral frequency (LSF) has less sensitivity to watermarking than the LPC, watermark bits are embedded into the maximum number of LSFs by applying the least significant bit replacement (LSBR) method. To reduce the differences between estimated and modified LPCs, a checking loop is added to minimize the watermark extraction error. Experimental results show that the proposed semifragile speech watermarking method can provide high imperceptibility and that any manipulation of the watermark signal destroys the watermark bits, since manipulation changes it to a random stream of bits.
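
    The LSBR step itself is simple bit manipulation. The sketch below embeds and extracts watermark bits in the least significant bit of integer-quantized coefficients; the quantized LSF indices here are hypothetical, and the LSF quantization and the paper's error-checking loop are omitted.

    ```python
    def embed_lsb(coeffs, bits):
        """Replace the least significant bit of each quantized coefficient
        with one watermark bit."""
        return [(c & ~1) | b for c, b in zip(coeffs, bits)]

    def extract_lsb(coeffs):
        """Read the watermark back out of the least significant bits."""
        return [c & 1 for c in coeffs]

    quantized_lsfs = [410, 987, 1523, 2048]   # hypothetical quantizer indices
    watermark = [1, 0, 1, 1]
    marked = embed_lsb(quantized_lsfs, watermark)
    assert extract_lsb(marked) == watermark
    ```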

  8. Steganography forensics method for detecting least significant bit replacement attack

    Science.gov (United States)

    Wang, Xiaofeng; Wei, Chengcheng; Han, Xiao

    2015-01-01

    We present an image forensics method to detect least significant bit replacement steganography attacks. The proposed method provides fine-grained forensic features by using a hierarchical structure that combines pixel correlation and bit-plane correlation. This is achieved via bit-plane decomposition and difference matrices between the least significant bit plane and each of the others. The generated forensic features capture the susceptibility (changeability) that is drastically altered when a cover image is embedded with data to form a stego image. We developed a statistical model based on the forensic features and used a least squares support vector machine as a classifier to distinguish stego images from cover images. Experimental results show that the proposed method provides the following advantages. (1) The detection rate is noticeably higher than that of some existing methods. (2) It has the expected stability. (3) It is robust to content-preserving manipulations, such as JPEG compression, adding noise, filtering, etc. (4) The proposed method provides satisfactory generalization capability.
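
    The first stage of such features, bit-plane decomposition and difference matrices against the least significant bit plane, is easy to reproduce; the numpy sketch below shows that stage only (the statistical model and the least squares support vector machine classifier are omitted).

    ```python
    import numpy as np

    def bit_planes(img):
        """Decompose an 8-bit grayscale image into its 8 bit planes."""
        img = img.astype(np.uint8)
        return [(img >> k) & 1 for k in range(8)]   # plane 0 = LSB

    def lsb_difference_matrices(img):
        """Difference matrices between the LSB plane and each higher plane."""
        planes = bit_planes(img)
        lsb = planes[0].astype(np.int16)
        return [lsb - p.astype(np.int16) for p in planes[1:]]

    img = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
    diffs = lsb_difference_matrices(img)
    print(len(diffs), diffs[0].shape)   # 7 difference matrices, each 64x64
    ```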

  9. Bit-string scattering theory

    Energy Technology Data Exchange (ETDEWEB)

    Noyes, H.P.

    1990-01-29

    We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc² in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free particle wave functions, taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are "born collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc² our wave functions can be approximated by solutions of the free particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound and resonant state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G_{πN}²)² = (2m_N/m_π)² − 1. 21 refs., 1 fig.

  10. Bit-string scattering theory

    International Nuclear Information System (INIS)

    Noyes, H.P.

    1990-01-01

    We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc² in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free particle wave functions, taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are "born collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc² our wave functions can be approximated by solutions of the free particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound and resonant state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G_{πN}²)² = (2m_N/m_π)² − 1. 21 refs., 1 fig.

  11. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    Science.gov (United States)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  12. PRESAGE: Protecting Structured Address Generation against Soft Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    2016-12-28

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  13. BitPAl: a bit-parallel, general integer-scoring sequence alignment algorithm.

    Science.gov (United States)

    Loving, Joshua; Hernandez, Yozen; Benson, Gary

    2014-11-15

    Mapping of high-throughput sequencing data and other bulk sequence comparison applications have motivated a search for high-efficiency sequence alignment algorithms. The bit-parallel approach represents individual cells in an alignment scoring matrix as bits in computer words and emulates the calculation of scores by a series of logic operations composed of AND, OR, XOR, complement, shift and addition. Bit-parallelism has been successfully applied to the longest common subsequence (LCS) and edit-distance problems, producing fast algorithms in practice. We have developed BitPAl, a bit-parallel algorithm for general, integer-scoring global alignment. Integer-scoring schemes assign integer weights for match, mismatch and insertion/deletion. The BitPAl method uses structural properties in the relationship between adjacent scores in the scoring matrix to construct classes of efficient algorithms, each designed for a particular set of weights. In timed tests, we show that BitPAl runs 7-25 times faster than a standard iterative algorithm. Source code is freely available for download at http://lobstah.bu.edu/BitPAl/BitPAl.html. BitPAl is implemented in C and runs on all major operating systems. jloving@bu.edu or yhernand@bu.edu or gbenson@bu.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
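
    BitPAl generalizes bit-parallelism to arbitrary integer scoring, which is beyond a short sketch. As a compact illustration of the underlying style (scores emulated by AND, OR, XOR, shift and addition over machine words), here is Myers' classic bit-vector algorithm for unit-cost edit distance, a well-known precursor rather than BitPAl itself.

    ```python
    def myers_distance(pattern, text):
        """Myers' bit-vector edit distance (unit costs). Python ints are
        arbitrary precision, so the pattern may exceed one machine word."""
        m = len(pattern)
        if m == 0:
            return len(text)
        peq = {}
        for i, ch in enumerate(pattern):        # per-character match masks
            peq[ch] = peq.get(ch, 0) | (1 << i)
        mask = (1 << m) - 1
        high = 1 << (m - 1)
        pv, mv, score = mask, 0, m
        for ch in text:
            eq = peq.get(ch, 0)
            d0 = ((((eq & pv) + pv) & mask) ^ pv) | eq | mv
            hp = mv | (~(d0 | pv) & mask)       # horizontal positive deltas
            hn = pv & d0                        # horizontal negative deltas
            if hp & high:
                score += 1
            if hn & high:
                score -= 1
            x = ((hp << 1) | 1) & mask
            pv = ((hn << 1) & mask) | (~(d0 | x) & mask)
            mv = d0 & x
        return score

    assert myers_distance("kitten", "sitting") == 3
    assert myers_distance("GATTACA", "GATTACA") == 0
    ```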

  14. ERROR-CONTROL CODING OF ADS-B MESSAGES FOR IRIDIUM SATELLITES

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2013-12-01

    Full Text Available For modelling the transmission of ADS-B messages over the low-orbit satellite constellation Iridium, a model of the communication channel "Aircraft - Satellite - Ground Station" was built using MATLAB Simulink. This model allowed us to investigate the dependence of the Bit Error Rate on the type of signal coding/decoding, the Eb/N0 ratio, and the satellite repeater gain.
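
    The BER-versus-Eb/N0 dependence that such a Simulink channel model produces can be reproduced for the simplest uncoded case (BPSK over AWGN) in a few lines of Python; the Monte Carlo estimate is checked against the textbook curve Pb = ½·erfc(√(Eb/N0)).

    ```python
    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(3)
    n_bits = 1_000_000

    for ebn0_db in (0, 2, 4, 6, 8):
        ebn0 = 10 ** (ebn0_db / 10)
        bits = rng.integers(0, 2, n_bits)
        symbols = 2 * bits - 1                        # BPSK: 0 -> -1, 1 -> +1
        noise = rng.standard_normal(n_bits) / sqrt(2 * ebn0)  # sigma^2 = N0/2
        decisions = (symbols + noise) > 0
        ber = np.mean(decisions != bits)
        theory = 0.5 * erfc(sqrt(ebn0))
        print(f"Eb/N0 = {ebn0_db} dB: simulated {ber:.2e}, theory {theory:.2e}")
    ```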

  15. FastBit: Interactively Searching Massive Data

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat,; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming

    2009-06-23

    As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
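
    FastBit's central idea, answering structured queries with bitwise operations over per-value bitmaps, can be shown in miniature (the compression, encoding and binning techniques the article summarizes are omitted; the column values are made up).

    ```python
    def build_bitmaps(column):
        """One bitmap (stored as a Python int) per distinct value:
        bit i is set when row i holds that value."""
        bitmaps = {}
        for i, v in enumerate(column):
            bitmaps[v] = bitmaps.get(v, 0) | (1 << i)
        return bitmaps

    energy = ["low", "high", "low", "mid", "high", "high"]
    region = ["a", "a", "b", "b", "a", "b"]
    e_maps, r_maps = build_bitmaps(energy), build_bitmaps(region)

    # SQL-ish: SELECT rows WHERE energy = 'high' AND region = 'b'
    hits = e_maps["high"] & r_maps["b"]
    rows = [i for i in range(len(energy)) if hits >> i & 1]
    print(rows)   # -> [5]
    ```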

  16. Criteria for core sampling bit temperature monitor

    International Nuclear Information System (INIS)

    Francis, P.M.

    1994-08-01

    A temperature monitoring device needs to be developed for the tank core sampling trucks. It will provide an additional indication of safe drill bit temperatures and give the operator a better feel for the effects of changing drill settings. This document defines the criteria for the bit monitoring system, including performance requirements, information on the core sampling system, and other conditions that may be encountered

  17. Estimation of Errors: Mathematical Expressions of Temperature, Substrate Concentration and Enzyme Concentration based Formulas for obtaining intermediate values of the Rate of Enzymatic Reaction

    OpenAIRE

    Nizam Uddin

    2013-01-01

    This research paper is based on the estimation of errors in the formulas used to obtain intermediate values of the rate of an enzymatic reaction. The rate of an enzymatic reaction is affected by the concentration of substrate, the temperature, the concentration of enzyme, and other factors. A rise in temperature accelerates an enzyme reaction. At a certain temperature, known as the optimum temperature, the activity is maximal. The concentration of substrate is the limiting factor, as the substrate co…

  18. An investigation in to the impact of acquisition location on error type and rate when undertaking panoramic radiography.

    Science.gov (United States)

    Loughlin, A; Drage, N; Greenall, C; Farnell, D J J

    2017-11-01

    Panoramic radiography is a common radiographic examination carried out in the UK. This study was carried out to determine whether the acquisition site has an impact on image quality. An image quality audit was carried out in South Wales across a number of dental and general radiology settings. The image quality was assessed retrospectively against national standards. A total of 174 radiographs were assessed from general radiology departments and 141 from dental radiology units. Chi-squared analysis was used to investigate whether there were differences in the grading between dental radiology units and general radiology departments. Differences between the two settings in terms of the number of errors in the radiographs were analysed using the Mann-Whitney test. Chi-squared analysis was used to see if there were differences between the types of errors in the two clinical settings. There was a significant association (p = 0.021) between the quality of the radiograph grading and the type of radiology department. However, when excellent and diagnostically acceptable radiographs were grouped together there was no significant difference between the two clinical settings. Although the vast majority of radiographs were diagnostic (89% for general radiology and 92% for dental radiology units), neither reached the required standards. The most common errors were patient positioning errors (54.6% of radiographs affected) and preparation/instructional errors (47.9% of radiographs affected). Errors in panoramic radiography are relatively high and further instruction to staff undertaking these procedures is required to ensure the targets are reached. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

  19. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the idea of using multiple informants (e.g. teacher and parent reports), not just the student, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  20. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the idea of using multiple informants (e.g. teacher and parent reports), not just the student, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  1. IEEE 754: 64 Bit Double Precision Floats

    Indian Academy of Sciences (India)


  2. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    Science.gov (United States)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision: those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to the competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
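
    The core of the idea can be sketched in a few lines of numpy: alternately shaving (zeroing) and setting (oneing) the trailing mantissa bits beyond the requested precision, which is what makes the quantization statistically unbiased. This is a minimal sketch of the concept, not the production implementation in the NCO tools.

    ```python
    import numpy as np

    def bit_groom(values, keep_bits):
        """Alternately shave (zero) and set (one) the mantissa bits beyond
        `keep_bits`, so the rounding bias of shaving alone cancels out."""
        bits = np.ascontiguousarray(values, dtype=np.float64).view(np.uint64)
        drop = 52 - keep_bits                  # float64 has 52 mantissa bits
        tail = np.uint64((1 << drop) - 1)
        out = bits.copy()
        out[0::2] &= ~tail                     # shave even-indexed values
        out[1::2] |= tail                      # set odd-indexed values
        return out.view(np.float64)

    x = np.array([0.1, 0.2, 0.3, 0.4])
    print(bit_groom(x, keep_bits=12))          # ~3-4 significant digits kept
    ```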

  3. Soft Error Vulnerability of Iterative Linear Algebra Methods

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; de Supinski, B

    2007-12-15

    Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
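
    The experiments described amount to flipping a single bit mid-solve and observing the outcome. The sketch below injects one mantissa bit-flip into a Jacobi iteration; in this benign case the method re-converges (wasting iterations), but flips elsewhere or later in the solve can leave silent corruption, which is the paper's point. The matrix and flip position are arbitrary illustrative choices.

    ```python
    import numpy as np

    def jacobi(A, b, n_iter=200, flip_at=None):
        """Jacobi solve of Ax=b; optionally flip one bit of x[0] at
        iteration `flip_at` to emulate a soft error."""
        x = np.zeros_like(b)
        D = np.diag(A)
        R = A - np.diag(D)
        for k in range(n_iter):
            x = (b - R @ x) / D
            if k == flip_at:
                raw = x.view(np.uint64)                   # reinterpret bits
                raw[0] ^= np.uint64(1) << np.uint64(40)   # flip a mantissa bit
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    clean = jacobi(A, b)
    hit = jacobi(A, b, flip_at=100)
    print(np.abs(clean - hit).max())   # recovers here, at the cost of cycles
    ```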

  4. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    Energy Technology Data Exchange (ETDEWEB)

    Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine both the IQM's sensitivity to varying MLC errors and its false positive rate. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An unmodified SBRT liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1 mm to ±5 mm). These unmodified and modified plans were each measured multiple times by the IQM (a large-area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field's delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed a significant (p < 0.005) ability to predict MLC errors. Using the area under the curve, we show that the IQM's ability to detect errors increases with increasing MLC error (Spearman's Rho = 0.8056, p < 0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven able to detect not only MLC errors but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  5. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    Science.gov (United States)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase the BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reads and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for hybrid storage with large SCM capacity because the SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to the NAND flash in hybrid storage with large SCM capacity because the large-capacity SCM improves the storage performance.

  6. Studying and comparing spectrum efficiency and error probability in GMSK and DBPSK modulation schemes

    Directory of Open Access Journals (Sweden)

    Juan Mario Torres Nova

    2008-09-01

    Full Text Available Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes which are frequently used in radio communication systems; however, there is interdependence in the use of their benefits (spectral efficiency, low bit error rate, low intersymbol interference, etc.). Optimising one parameter creates problems for another; for example, the GMSK scheme succeeds in reducing bandwidth by introducing a Gaussian filter into an MSK (minimum shift keying) modulator in exchange for increasing intersymbol interference in the system. The DBPSK scheme leads to lower error probability while occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver's bit delay when recovering a signal.

  7. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames sequentially to a destination; IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput becomes under error-free conditions. However, large data frames can reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and the error rate, the appropriate ranges of the burst transmission parameters can be narrowed down, and the buffer size needed for temporarily storing transmitted or received data can be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and the error rate. The calculated throughput values agree well with the measured ones for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters outside the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
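
    The abstract does not reproduce the estimation algorithm itself, so the function below is a generic illustrative model of the trade-off it describes: a burst of frames followed by one block ACK, with the frame error rate derived from the bit error rate. All overhead constants are hypothetical.

    ```python
    def burst_throughput(ber, payload_bytes, burst_frames,
                         header_bytes=34, ack_overhead_bits=512, rate_bps=54e6):
        """Illustrative effective-throughput model (not the paper's algorithm):
        expected delivered payload bits per second for one burst + ACK cycle."""
        frame_bits = 8 * (payload_bytes + header_bytes)
        fer = 1 - (1 - ber) ** frame_bits            # frame error rate from BER
        delivered = burst_frames * 8 * payload_bytes * (1 - fer)
        airtime = (burst_frames * frame_bits + ack_overhead_bits) / rate_bps
        return delivered / airtime

    # Larger frames amortize overhead but fail more often on a noisy channel.
    for L in (256, 1024, 4096):
        print(L, f"{burst_throughput(1e-5, L, burst_frames=8) / 1e6:.1f} Mbit/s")
    ```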

  8. Stochastic p-Bits for Invertible Logic

    Directory of Open Access Journals (Sweden)

    Kerem Yunus Camsari

    2017-07-01

    Full Text Available Conventional semiconductor-based logic and nanomagnet-based memory devices are built out of stable, deterministic units such as standard metal-oxide semiconductor transistors, or nanomagnets with energy barriers in excess of ≈40–60 kT. In this paper, we show that unstable, stochastic units, which we call “p-bits,” can be interconnected to create robust correlations that implement precise Boolean functions with impressive accuracy, comparable to standard digital circuits. At the same time, they are invertible, a unique property that is absent in standard digital circuits. When operated in the direct mode, the input is clamped, and the network provides the correct output. In the inverted mode, the output is clamped, and the network fluctuates among all possible inputs that are consistent with that output. First, we present a detailed implementation of an invertible gate to bring out the key role of a single three-terminal transistorlike building block to enable the construction of correlated p-bit networks. The results for this specific, CMOS-assisted nanomagnet-based hardware implementation agree well with those from a universal model for p-bits, showing that p-bits need not be magnet based: any three-terminal tunable random bit generator should be suitable. We present a general algorithm for designing a Boltzmann machine (BM) with a symmetric connection matrix [J] (J_{ij}=J_{ji}) that implements a given truth table with p-bits. The [J] matrices are relatively sparse with a few unique weights for convenient hardware implementation. We then show how BM full adders can be interconnected in a partially directed manner (J_{ij}≠J_{ji}) to implement large logic operations such as 32-bit binary addition. Hundreds of stochastic p-bits get precisely correlated such that the correct answer out of 2^{33} (≈8×10^{9}) possibilities can be extracted by looking at the statistical mode or majority vote of a number of time samples. With perfect …

  9. Stochastic p -Bits for Invertible Logic

    Science.gov (United States)

    Camsari, Kerem Yunus; Faria, Rafatul; Sutton, Brian M.; Datta, Supriyo

    2017-07-01

    Conventional semiconductor-based logic and nanomagnet-based memory devices are built out of stable, deterministic units such as standard metal-oxide semiconductor transistors, or nanomagnets with energy barriers in excess of ≈40–60 kT. In this paper, we show that unstable, stochastic units, which we call "p-bits," can be interconnected to create robust correlations that implement precise Boolean functions with impressive accuracy, comparable to standard digital circuits. At the same time, they are invertible, a unique property that is absent in standard digital circuits. When operated in the direct mode, the input is clamped, and the network provides the correct output. In the inverted mode, the output is clamped, and the network fluctuates among all possible inputs that are consistent with that output. First, we present a detailed implementation of an invertible gate to bring out the key role of a single three-terminal transistorlike building block to enable the construction of correlated p-bit networks. The results for this specific, CMOS-assisted nanomagnet-based hardware implementation agree well with those from a universal model for p-bits, showing that p-bits need not be magnet based: any three-terminal tunable random bit generator should be suitable. We present a general algorithm for designing a Boltzmann machine (BM) with a symmetric connection matrix [J] (J_{ij}=J_{ji}) that implements a given truth table with p-bits. The [J] matrices are relatively sparse with a few unique weights for convenient hardware implementation. We then show how BM full adders can be interconnected in a partially directed manner (J_{ij}≠J_{ji}) to implement large logic operations such as 32-bit binary addition. Hundreds of stochastic p-bits get precisely correlated such that the correct answer out of 2^{33} (≈8×10^{9}) possibilities can be extracted by looking at the statistical mode or majority vote of a number of time samples. With perfect directivity (J_{ji}=0) a small …
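
    The universal behavioral model mentioned above reduces to a one-line update rule per p-bit: m_i = sgn(tanh(I_i) − r) with r drawn uniformly from (−1, 1) and input I_i = Σ_j J_ij m_j + h_i. The sketch below simulates that rule for a toy two-p-bit ferromagnetic coupling (the J and h values are arbitrary choices, not a gate design from the paper) and shows the resulting correlation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def pbit_sample(J, h, steps=20000, beta=1.0):
        """Sequentially update p-bits: m_i = sgn(tanh(beta * I_i) - r),
        with I_i = sum_j J_ij m_j + h_i and r uniform on (-1, 1)."""
        n = len(h)
        m = rng.choice([-1, 1], size=n)
        samples = np.empty((steps, n), dtype=int)
        for t in range(steps):
            for i in range(n):                 # Gibbs-like sequential sweep
                I = J[i] @ m + h[i]
                m[i] = 1 if np.tanh(beta * I) > rng.uniform(-1, 1) else -1
            samples[t] = m
        return samples

    # Toy symmetric network: two ferromagnetically coupled p-bits.
    J = np.array([[0.0, 1.0], [1.0, 0.0]])
    h = np.zeros(2)
    s = pbit_sample(J, h)
    print("correlation <m0*m1>:", np.mean(s[:, 0] * s[:, 1]))  # positive
    ```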

  10. Using magnetic permeability bits to store information

    Science.gov (United States)

    Timmerwilke, John; Petrie, J. R.; Wieland, K. A.; Mencia, Raymond; Liou, Sy-Hwang; Cress, C. D.; Newburgh, G. A.; Edelstein, A. S.

    2015-10-01

    Steps are described in the development of a new magnetic memory technology, based on states with different magnetic permeability, with the capability to reliably store large amounts of information in a high-density form for decades. The advantages of using the permeability to store information include an insensitivity to accidental exposure to magnetic fields or temperature changes, both of which are known to corrupt memory approaches that rely on remanent magnetization. The high permeability media investigated consists of either films of Metglas 2826 MB (Fe40Ni38Mo4B18) or bilayers of permalloy (Ni78Fe22)/Cu. Regions of films of the high permeability media were converted thermally to low permeability regions by laser or ohmic heating. The permeability of the bits was read by detecting changes of an external 32 Oe probe field using a magnetic tunnel junction 10 μm away from the media. Metglas bits were written with 100 μs laser pulses and arrays of 300 nm diameter bits were read. The high and low permeability bits written using bilayers of permalloy/Cu are not affected by 10 Mrad(Si) of gamma radiation from a 60Co source. An economical route for writing and reading bits as small at 20 nm using a variation of heat assisted magnetic recording is discussed.

  11. The best bits in an iris code.

    Science.gov (United States)

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2009-06-01

    Iris biometric systems apply filters to iris images to extract information about iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all the bits in an iris code are equally useful. Our research is the first to present experiments documenting that some bits are more consistent than others. Different regions of the iris are compared to evaluate their relative consistency, and contrary to some previous research, we find that the middle bands of the iris are more consistent than the inner bands. The inconsistent-bit phenomenon is evident across genders and different filter types. Possible causes of inconsistencies, such as segmentation, alignment issues, and different filters are investigated. The inconsistencies are largely due to the coarse quantization of the phase response. Masking iris code bits corresponding to complex filter responses near the axes of the complex plane improves the separation between the match and nonmatch Hamming distance distributions.
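
    The matching step this work builds on, the fractional Hamming distance with masks for unusable bits, is a few lines of bit arithmetic; masking the inconsistent bits the authors identify simply clears additional mask bits. The 8-bit codes below are toy values.

    ```python
    def fractional_hamming(code_a, code_b, mask_a, mask_b):
        """Fractional Hamming distance between two iris codes (ints),
        counting only bits both masks mark as valid."""
        valid = mask_a & mask_b
        if valid == 0:
            raise ValueError("no commonly valid bits")
        disagree = (code_a ^ code_b) & valid
        return bin(disagree).count("1") / bin(valid).count("1")

    # Mask bit 0 means the bit is unusable (e.g. eyelid occlusion).
    print(fractional_hamming(0b10110010, 0b10011010,
                             0b11111111, 0b11101110))   # -> 0.333...
    ```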

  12. Bits extraction for palmprint template protection with Gabor magnitude and multi-bit quantization

    NARCIS (Netherlands)

    Mu, Meiru; Shao, X.; Ruan, Qiuqi; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2013-01-01

    In this paper, we propose a method of fixed-length binary string extraction (denoted by LogGM_DROBA) from low-resolution palmprint image for developing palmprint template protection technology. In order to extract reliable (stable and discriminative) bits, multi-bit equal-probability-interval

  13. Performance enhancement of MC-CDMA system through novel sensitive bit algorithm aided turbo multi user detection.

    Science.gov (United States)

    Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan

    2015-01-01

    Multi carrier code division multiple access (MC-CDMA) system is a promising multi carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. The MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and intersymbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness in the case of multipath propagation and improved security together with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of the MC-CDMA system. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the performance of MC-CDMA systems in terms of BER, as solutions to overcome MAI effects. In this paper a low complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic-Maximum a-Posteriori algorithm (Log MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low complexity decoding, by mitigating the detrimental effects of MAI.

  14. Performance enhancement of MC-CDMA system through novel sensitive bit algorithm aided turbo multi user detection.

    Directory of Open Access Journals (Sweden)

    Rasadurai Kumaravel

    Full Text Available Multi carrier code division multiple access (MC-CDMA) system is a promising multi carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. The MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and intersymbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness in the case of multipath propagation and improved security together with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of the MC-CDMA system. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the performance of MC-CDMA systems in terms of BER, as solutions to overcome MAI effects. In this paper a low complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic-Maximum a-Posteriori algorithm (Log MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low complexity decoding, by mitigating the detrimental effects of MAI.

  15. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  16. Color characters for white hot string bits

    Science.gov (United States)

    Curtright, Thomas L.; Raha, Sourav; Thorn, Charles B.

    2017-10-01

    The state space of a generic string bit model is spanned by N×N matrix creation operators acting on a vacuum state. Such creation operators transform in the adjoint representation of the color group U(N) [or SU(N) if the matrices are traceless]. We consider a system of b species of bosonic bits and f species of fermionic bits. The string, emerging in the N → ∞ limit, identifies P⁺ = mM√2, where M is the bit number operator, and P⁻ = H√2, where H is the system Hamiltonian. We study the thermal properties of this string bit system in the case H = 0, which can be considered the tensionless string limit: the only dynamics is restricting physical states to color singlets. Then the thermal partition function Tr e^{-βmM} can be identified, putting x = e^{-βm}, with a generating function χ₀^{bf}(x), for which the coefficient of xⁿ in its expansion about x = 0 is the number of color singlets with bit number M = n. This function is a purely group theoretic object, which is well studied in the literature. We show that at N = ∞ this system displays a Hagedorn divergence at x = 1/(b+f) with ultimate temperature T_H = m/ln(b+f). The corresponding function for finite N is perfectly finite for 0 …
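
    The quoted ultimate temperature follows in one step from the stated radius of convergence of the singlet-counting function: with x = e^{−βm}, the partition function diverges where χ₀^{bf}(x) does, at x = 1/(b+f).

    ```latex
    Z(\beta)=\operatorname{Tr} e^{-\beta m M}=\chi_0^{bf}\!\left(e^{-\beta m}\right)
    \quad\Longrightarrow\quad
    e^{-\beta_H m}=\frac{1}{b+f}
    \quad\Longrightarrow\quad
    T_H=\frac{1}{\beta_H}=\frac{m}{\ln(b+f)} .
    ```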

  17. Performance of an Error Control System with Turbo Codes in Powerline Communications

    Directory of Open Access Journals (Sweden)

    Balbuena-Campuzano Carlos Alberto

    2014-07-01

    Full Text Available This paper reports the performance of turbo codes as an error control technique in PLC (Powerline Communications) data transmissions. For this system, computer simulations are used for modeling data networks based on the model classified in the technical literature as indoor, using OFDM (Orthogonal Frequency Division Multiplexing) as the modulation technique. Taking into account the channel, the modulation and the turbo codes, we propose a methodology to minimize the bit error rate (BER) as a function of the average received signal-to-noise ratio (SNR).

  18. Introduction to bit slices and microprogramming

    International Nuclear Information System (INIS)

    Van Dam, A.

    1981-01-01

    Bit-slice logic blocks are fourth-generation LSI components which are natural extensions of traditional multiplexers, registers, decoders, counters, ALUs, etc. Their functionality is controlled by microprogramming, typically to implement CPUs and peripheral controllers where both speed and easy programmability are required for flexibility, ease of implementation and debugging, etc. Processors built from bit-slice logic give the designer an alternative for approaching the programmability of traditional fixed-instruction-set microprocessors with a speed closer to that of hardwired random logic. (orig.)

  19. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    Science.gov (United States)

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by the density of connections, the proportion of reciprocal relationships (reciprocity), the number of colleagues to whom each person provided advice (in-degree), and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher measures for density and reciprocation, and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81/admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks. Strategies to improve the advice-giving networks between senior
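
    The three structural measures reported above are one-liners in standard network software; a minimal Python sketch on a toy advice graph (node names and edges invented for illustration, with edges pointing from advice seeker to advice giver) is:

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("nurse_a", "pharmacist"), ("nurse_b", "pharmacist"),
    ("junior_md", "pharmacist"), ("nurse_a", "junior_md"),
    ("junior_md", "nurse_a"), ("senior_md", "junior_md"),
])

print("density    :", round(nx.density(G), 3))      # fraction of possible ties
print("reciprocity:", round(nx.reciprocity(G), 3))  # share of mutual ties
in_degree = dict(G.in_degree())                     # advice provided to others
print("main hub   :", max(in_degree, key=in_degree.get))
```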

  20. Effects of body mass index and step rate on pedometer error in a free-living environment.

    Science.gov (United States)

    Tyo, Brian M; Fitzhugh, Eugene C; Bassett, David R; John, Dinesh; Feito, Yuri; Thompson, Dixie L

    2011-02-01

    Pedometers could provide great insights into walking habits if they are found to be accurate for people of all weight categories. The purposes of this study were to determine whether the New Lifestyles NL-2000 (NL) and the Digi-Walker SW-200 (DW) yield similar daily step counts as compared with the StepWatch 3 (SW) in a free-living environment and to determine whether pedometer error is influenced by body mass index (BMI) and speed of walking. The SW served as the criterion because of its accuracy across a range of speeds and BMI categories. Slow walking was defined as ≤80 steps per minute. Fifty-six adults (mean ± SD: age = 32.7 ± 14.5 yr) wore the devices for 7 d. There were 20 normal weight, 18 overweight, and 18 obese participants. A two-way repeated-measures ANOVA was performed to determine whether BMI and device were related to the number of steps counted per day. Stepwise linear regressions were performed to determine what variables contributed to NL and DW error. Both the NL and the DW recorded fewer steps than the SW (P < 0.05). Users should consider the strengths and limitations of step counters before making an informed decision about which device to use.

  1. Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments

    KAUST Repository

    Soury, Hamza

    2013-07-01

    This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.

  2. Robust Face Recognition using Voting by Bit-plane Images based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Dongmei Wei

    2015-08-01

    Plurality voting is widely employed as a combination strategy in pattern recognition. Sparse representation based classification, a recently proposed technique, codes the query image as a sparse linear combination of the entire set of training images and classifies the query sample by exploiting the class representation error. In this paper, an improved face recognition approach using sparse representation and plurality voting based on binary bit-plane images is proposed. After being equalized, gray images are decomposed into eight bit-plane images, and sparse representation based classification is applied separately to the five bit-plane images that carry the most discriminative information. Finally, the identity of the query image is decided by voting over the five identities obtained. Experimental results show that the proposed approach is preferable both in recognition accuracy and in recognition speed.
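
    The decomposition step itself is simple bit masking; below is a minimal numpy sketch (the sparse-representation classifier and the voting stage are omitted) that splits an 8-bit image into planes and shows that the five most significant planes retain the image up to a small residual:

```python
import numpy as np

def bit_planes(img: np.ndarray) -> list[np.ndarray]:
    """Return binary planes [bit 0 (LSB), ..., bit 7 (MSB)] of an 8-bit image."""
    return [(img >> k) & 1 for k in range(8)]

img = np.random.default_rng(0).integers(0, 256, (4, 4), dtype=np.uint8)
planes = bit_planes(img)

# Rebuild from the five most significant planes (bits 3..7):
partial = sum(planes[k].astype(np.uint8) << k for k in range(3, 8))
print(np.abs(img.astype(int) - partial.astype(int)).max())  # at most 7
```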

  3. Linear, Constant-rounds Bit-decomposition

    DEFF Research Database (Denmark)

    Reistad, Tord; Toft, Tomas

    2010-01-01

    When performing secure multiparty computation, tasks may often be simple or difficult depending on the representation chosen. Hence, being able to switch representation efficiently may allow more efficient protocols. We present a new protocol for bit-decomposition: converting a ring element x ∈ ℤ M...

  4. Entropy of a bit-shift channel

    NARCIS (Netherlands)

    Baggen, Stan; Balakirsky, Vladimir; Denteneer, Dee; Egner, Sebastian; Hollmann, Henk; Tolhuizen, Ludo; Verbitskiy, Evgeny

    2006-01-01

    We consider a simple transformation (coding) of an iid source called a bit-shift channel. This simple transformation occurs naturally in magnetic or optical data storage. The resulting process is not Markov of any order. We discuss methods of computing the entropy of the transformed process, and

  5. Hey! A Black Widow Spider Bit Me!

    Science.gov (United States)

    ... as soon as you can because they can make you very sick. With an adult's help, wash the bite well with soap and water. Then apply an ice pack to the bite, and try to elevate the area and keep it still to help prevent the ... black widows, you'll want to make sure that's the kind of spider that bit ...

  6. Effect of video decoder errors on video interpretability

    Science.gov (United States)

    Young, Darrell L.

    2014-06-01

    The advancement in video compression technology can result in more sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability. In the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detection of artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.

  7. Pulse shaping for high data rate ultra-wideband wireless transmission under the Russian spectral emission mask

    DEFF Research Database (Denmark)

    Rommel, Simon; Grakhova, Elizaveta P.; Jurado-Navas, Antonio

    2017-01-01

    This paper addresses impulse-radio ultra-wideband (IR-UWB) transmission under the Russian spectral emission mask for unlicensed UWB radio communications. Four pulse shapes are proposed and their bit error rate (BER) performance is both estimated analytically and evaluated experimentally. Well...

  8. Prediction of error rates in dose-imprinted memories on board CRRES by two different methods. [Combined Release and Radiation Effects Satellite

    Science.gov (United States)

    Brucker, G. J.; Stassinopoulos, E. G.

    1991-01-01

    An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.
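
    A minimal numerical version of the recommended integration is sketched below in Python; the power-law flux and the Weibull-shaped cross-section parameters are illustrative placeholders, not CRRES or device data:

```python
import numpy as np

let = np.logspace(0, 2, 200)        # LET grid, MeV*cm^2/mg (assumed range)
flux = 1e-2 * let ** -3             # assumed differential LET flux

# Weibull cross-section curve: onset at let0, saturating at sigma_sat
let0, sigma_sat, width, shape = 3.0, 1e-8, 20.0, 1.5
sigma = np.where(
    let > let0,
    sigma_sat * (1.0 - np.exp(-(((let - let0) / width) ** shape))),
    0.0,
)

# Upset rate = integral of flux(LET) * sigma(LET) dLET,
# instead of a single threshold-LET / asymptotic-sigma pair.
rate = np.trapz(flux * sigma, let)
print(f"integrated upset rate: {rate:.3e} (per bit, arbitrary time unit)")
```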

  9. Two research contributions in 64-bit computing: Testing and Applications

    OpenAIRE

    Chang, Victor

    2005-01-01

    Following the release of Windows 64-bit and Redhat Linux 64-bit operating systems (OS) in late April 2005, this is one of the first 64-bit OS research projects completed in a British university. The objective is to investigate (1) the increase/decrease in performance compared to 32-bit computing; (2) the techniques used to develop 64-bit applications; and (3) how 64-bit computing should be used in IT and research organizations to improve their work. This paper summarizes research discoveri...

  10. An evaluation of a Low-Dose-Rate (LDR) brachytherapy procedure using a systems engineering & error analysis methodology for health care (SEABH) - (SAVE)

    LENUS (Irish Health Repository)

    Chadwick, Liam

    2012-03-12

    Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care. A number of deficiencies have been identified in the method. A new method called Systems and Error Analysis Bundle for Health Care (SEABH) was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.

  11. A multiple-substream unequal error-protection and error-concealment algorithm for SPIHT-coded video bitstreams.

    Science.gov (United States)

    Kim, Joohee; Mersereau, Russell M; Altunbasak, Yucel

    2004-12-01

    This paper presents a coordinated multiple-substream unequal error-protection and error-concealment algorithm for SPIHT-coded bitstreams transmitted over lossy channels. In the proposed scheme, we divide the video sequence corresponding to a group of pictures into two subsequences and independently encode each subsequence using a three-dimensional SPIHT algorithm. We use two different partitioning schemes to generate the substreams, each of which offers some advantages under the appropriate channel condition. Each substream is protected by an FEC-based unequal error-protection algorithm, which assigns unequal forward error correction codes to each bit plane. Any information that is lost during the transmission of any substream is estimated at the receiver by using the correlation between the substreams and the smoothness of the video signal. Simulation results show that the proposed multiple-substream UEP algorithm is simple, fast, and robust in hostile network conditions, and that the proposed error-concealment algorithm can achieve a 2-3 dB PSNR gain over the case when error concealment is not used at high packet-loss rates.

  12. Choice of reference sequence and assembler for alignment of Listeria monocytogenes short-read sequence data greatly influences rates of error in SNP analyses.

    Directory of Open Access Journals (Sweden)

    Arthur W Pightling

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers

  13. Choice of reference sequence and assembler for alignment of Listeria monocytogenes short-read sequence data greatly influences rates of error in SNP analyses.

    Science.gov (United States)

    Pightling, Arthur W; Petronella, Nicholas; Pagotto, Franco

    2014-01-01

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should

  14. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes.

    Directory of Open Access Journals (Sweden)

    Amr M Elhelw

    High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, the side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate.

  15. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes

    Science.gov (United States)

    Elhelw, Amr M.; Badran, Ehab F.

    2015-01-01

    High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, the side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate. PMID:26018504
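
    The quantity SLM manipulates is easy to reproduce; this minimal Python sketch computes the PAPR of an OFDM symbol and applies the basic SLM step of keeping the best of several random phase candidates (QPSK, 64 subcarriers, and 8 candidates are arbitrary choices; the SI embedding described above is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 64
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
data = rng.choice(qpsk, n_sub)

def papr_db(freq_symbols: np.ndarray) -> float:
    x = np.fft.ifft(freq_symbols)          # time-domain OFDM symbol
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# SLM: rotate the data by several random phase sequences, keep the best.
candidates = [data * np.exp(2j * np.pi * rng.random(n_sub)) for _ in range(8)]
best = min(candidates, key=papr_db)
print(f"original PAPR: {papr_db(data):.2f} dB, best of 8: {papr_db(best):.2f} dB")
```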

  16. HIGH-POWER TURBODRILL AND DRILL BIT FOR DRILLING WITH COILED TUBING

    Energy Technology Data Exchange (ETDEWEB)

    Robert Radtke; David Glowka; Man Mohan Rai; David Conroy; Tim Beaton; Rocky Seale; Joseph Hanna; Smith Neyrfor; Homer Robertson

    2008-03-31

    Commercial introduction of Microhole Technology to the gas and oil drilling industry requires an effective downhole drive mechanism which operates efficiently at relatively high RPM and low bit weight for delivering efficient power to the special high-RPM drill bit, ensuring both high penetration rate and long bit life. This project entails developing and testing a more efficient 2-7/8 in. diameter Turbodrill and a novel 4-1/8 in. diameter drill bit for drilling with coiled tubing. The high-power Turbodrill was developed to deliver efficient power, and the more durable drill bit employed high-temperature cutters that can more effectively drill hard and abrasive rock. This project teams Schlumberger Smith Neyrfor and Smith Bits, and NASA Ames Research Center, with Technology International, Inc. (TII), to deliver a downhole, hydraulically-driven power unit, matched with a custom drill bit designed to drill 4-1/8 in. boreholes with a purpose-built coiled tubing rig. The U.S. Department of Energy National Energy Technology Laboratory has funded Technology International, Inc., Houston, Texas, to develop a higher power Turbodrill and drill bit for use in drilling with a coiled tubing unit. This project entails developing and testing an effective downhole drive mechanism and a novel drill bit for drilling 'microholes' with coiled tubing. The new higher power Turbodrill is shorter, delivers power more efficiently, operates at relatively high revolutions per minute, and requires low weight on bit. The more durable, thermally stable diamond drill bit employs high-temperature TSP (thermally stable polycrystalline) diamond cutters that can more effectively drill hard and abrasive rock. Expectations are that widespread adoption of microhole technology could spawn a wave of 'infill development' drilling of wells spaced between existing wells, which could tap potentially billions of barrels of bypassed oil at shallow depths in mature producing areas. At the same time, microhole

  17. Single Bit Radar Systems for Digital Integration

    OpenAIRE

    Bjørndal, Øystein

    2017-01-01

    Small, low cost radar systems have exciting applications in monitoring and imaging for the industrial, healthcare and Internet of Things (IoT) sectors. We here explore, and show the feasibility of, several single bit square wave radar architectures that benefit from the continuous improvement in digital technologies for system-on-chip digital integration. By analysis, simulation and measurements we explore novel and harmonic-rich continuous wave (CW), stepped-frequency CW (SFCW) and freque...

  18. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.
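
    To make the role of the prior concrete, here is a minimal Python sketch under an assumed conjugate Gamma-Poisson model (a common choice for failure rates, though not necessarily the paper's exact setup): the prior mean and variance fix the Gamma parameters, and the posterior mean is compared with the maximum likelihood estimate.

```python
# Gamma(alpha, beta) prior on a failure rate with a Poisson likelihood:
# the posterior is Gamma(alpha + k, beta + T). All numbers are illustrative.

def gamma_params(prior_mean: float, prior_var: float) -> tuple[float, float]:
    beta = prior_mean / prior_var        # rate parameter
    alpha = prior_mean * beta            # shape parameter
    return alpha, beta

k, T = 3, 2000.0                         # failures observed in T device-hours
for mean, var in [(1e-3, 1e-6), (1e-3, 1e-4)]:   # tight vs. diffuse prior
    alpha, beta = gamma_params(mean, var)
    post_mean = (alpha + k) / (beta + T)
    print(f"prior var {var:.0e}: posterior mean {post_mean:.2e}, MLE {k/T:.2e}")
```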

  19. A Novel Least Significant Bit First Processing Parallel CRC Circuit

    OpenAIRE

    Xiujie Qu; Zhongkai Cao; Zhanjie Yang

    2013-01-01

    In the HDLC serial communication protocol, the CRC calculation can process either the most or the least significant bit of the data first. Nowadays most CRC calculations are based on most significant bit (MSB) first processing. An algorithm for least significant bit (LSB) first processing parallel CRC is proposed in this paper. Based on the general expression of the least significant bit first processing serial CRC, using the state equation method of linear systems, we derive a recursive formula by the mathematical...
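
    For reference, the serial LSB-first computation that such designs parallelize can be written in a few lines; the sketch below uses the HDLC frame check sequence parameters (CRC-16/X-25: reflected polynomial 0x8408, init and final XOR 0xFFFF):

```python
def crc16_x25(data: bytes) -> int:
    """Bit-serial, least-significant-bit-first CRC-16/X-25 (HDLC FCS)."""
    crc = 0xFFFF
    for byte in data:
        for k in range(8):               # LSB of each byte enters first
            if (crc ^ (byte >> k)) & 1:
                crc = (crc >> 1) ^ 0x8408
            else:
                crc >>= 1
    return crc ^ 0xFFFF

print(hex(crc16_x25(b"123456789")))      # standard check value: 0x906e
```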

  20. Method to manufacture bit patterned magnetic recording media

    Science.gov (United States)

    Raeymaekers, Bart; Sinha, Dipen N

    2014-05-13

    A method to increase the storage density on magnetic recording media by physically separating the individual bits from each other with a non-magnetic medium (so-called bit patterned media). This allows the bits to be closely packed together without creating magnetic "cross-talk" between adjacent bits. In one embodiment, ferromagnetic particles are submerged in a resin solution, contained in a reservoir. The bottom of the reservoir is made of piezoelectric material.

  1. Development of an RSFQ 4-bit ALU

    International Nuclear Information System (INIS)

    Kim, J. Y.; Baek, S. H.; Kim, S. H.; Kang, K. R.; Jung, K. R.; Lim, H. Y.; Park, J. H.; Han, T. S.

    2005-01-01

    We have developed and tested an RSFQ 4-bit Arithmetic Logic Unit (ALU) based on half adder cells and dc switches. The ALU is a core element of a computer processor that performs arithmetic and logic operations on the operands in computer instruction words. The designed ALU had a limited set of operation functions: OR, AND, XOR, and ADD. It had a pipeline structure. We simulated the circuit using Josephson circuit simulation tools in order to reduce the timing problem, and confirmed the correct operation of the designed ALU. We used the simulation tools XIC™, WRspice™, and Julia. The fabricated 4-bit ALU circuit had a size of 3000 μm × 1500 μm, and the chip size was 5 mm × 5 mm. The test speeds were 1000 kHz and 5 GHz. For the high-speed test, we used an eye-diagram technique. Our 4-bit ALU operated correctly up to a 5 GHz clock frequency. The chip was tested at liquid-helium temperature.

  2. Efficient bit sifting scheme of post-processing in quantum key distribution

    Science.gov (United States)

    Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong

    2015-10-01

    Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, of which the core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme is approaching the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means the proposed scheme can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement on the net secure key rate. Meanwhile, some recommendations on the application of the proposed scheme to some representative practical QKD systems are also provided.
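
    The Shannon limit referred to above comes from viewing the detection pattern as a Bernoulli sequence: an ideal lossless code needs only H(eta) bits per pulse to describe which pulses were detected. A minimal Python sketch (eta and the pulse count are assumed values, not from the paper) compares that bound with naive index coding:

```python
import math

def binary_entropy(p: float) -> float:
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n_pulses = 1_000_000
eta = 0.01                                # assumed detection probability

# Naive scheme: send the index of every detected pulse.
index_bits = int(n_pulses * eta) * math.ceil(math.log2(n_pulses))
# Shannon limit for describing the Bernoulli(eta) detection pattern.
shannon_bits = n_pulses * binary_entropy(eta)

print(f"index coding : {index_bits} bits")
print(f"Shannon limit: {shannon_bits:.0f} bits")
```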

  3. Alpha-particle-induced soft errors in high speed bipolar RAM

    International Nuclear Information System (INIS)

    Mitsusada, Kazumichi; Kato, Yukio; Yamaguchi, Kunihiko; Inadachi, Masaaki

    1980-01-01

    As bipolar RAM (Random Access Memory) has been improved to a fast-acting and highly integrated device, problems negligible in the past have become ones that cannot be ignored. The problem of α-particles emitted from the radioactive substances in semiconductor package materials, which cause soft errors, should be specifically noticed. The authors have produced experimentally a special 1 kbit bipolar RAM to investigate its soft errors. The package used was the standard 16-pin dual in-line type, with which a practical system mounting test and an α-particle irradiation test have been performed. The results showed the occurrence of soft errors at the average rate of about 1 bit/700 device hours. It is concluded that the cause was the α-particles emitted from the package materials, and at the same time, it was found that the rate of soft error occurrence could be greatly reduced by shielding against α-particles. The error rate significantly increased with the decrease of the stand-by current of memory cells and with the accumulated charge determined by the time constant. The mechanism of soft error was also investigated, for which an approximate model to estimate the error rate by means of the effective noise charge due to α-particles and of the amount of reversible charges of memory cells is shown and compared with the experimental results. (Wakatsuki, Y.)

  4. Increased error rates in preliminary reports issued by radiology residents working more than 10 consecutive hours overnight.

    Science.gov (United States)

    Ruutiainen, Alexander T; Durand, Daniel J; Scanlon, Mary H; Itri, Jason N

    2013-03-01

    To determine if the rate of major discrepancies between resident preliminary reports and faculty final reports increases during the final hours of consecutive 12-hour overnight call shifts. Institutional review board exemption status was obtained for this study. All overnight radiology reports interpreted by residents on-call between January 2010 and June 2010 were reviewed by board-certified faculty and categorized as major discrepancies if they contained a change in interpretation with the potential to impact patient management or outcome. Initial determination of a major discrepancy was at the discretion of individual faculty radiologists based on this general definition. Studies categorized as major discrepancies were secondarily reviewed by the residency program director (M.H.S.) to ensure consistent application of the major discrepancy designation. Multiple variables associated with each report were collected and analyzed, including the time of preliminary interpretation, time into shift study was interpreted, volume of studies interpreted during each shift, day of the week, patient location (inpatient or emergency department), block of shift (2-hour blocks for 12-hour shifts), imaging modality, patient age and gender, resident identification, and faculty identification. Univariate risk factor analysis was performed to determine the optimal data format of each variable (ie, continuous versus categorical). A multivariate logistic regression model was then constructed to account for confounding between variables and identify independent risk factors for major discrepancies. We analyzed 8062 preliminary resident reports with 79 major discrepancies (1.0%). There was a statistically significant increase in major discrepancy rate during the final 2 hours of consecutive 12-hour call shifts. Multivariate analysis confirmed that interpretation during the last 2 hours of 12-hour call shifts (odds ratio (OR) 1.94, 95% confidence interval (CI) 1.18-3.21), cross

  5. Bit selection using field drilling data and mathematical investigation

    Science.gov (United States)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

    A drilling process will not be complete without the use of a drill bit, so bit selection is considered an important task in the drilling optimization process. Selecting a bit is an important issue in planning and designing a well, simply because the drill bit accounts for a large share of the total drilling cost. To perform this task, a back-propagation ANN model is developed, trained using drilling bit records from several offset wells. In this project, two models are developed using the ANN: one to find the predicted IADC bit code and one to find the predicted ROP. Stage 1 finds the IADC bit code using all the given field data; the output is the targeted IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. Thus, at the end, there are two models that give the predicted ROP values and the predicted IADC bit code values.
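
    A minimal sketch of this two-stage idea, using scikit-learn on synthetic stand-ins for the field data (feature names, targets, and network sizes are all illustrative, not the paper's dataset):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                     # e.g. depth, WOB, RPM, strength
iadc = rng.integers(0, 3, 300)                    # synthetic IADC bit-code classes
rop = X @ np.array([1.0, 0.5, -0.3, 0.2]) + iadc  # synthetic ROP target

# Stage 1: predict the IADC bit code from drilling parameters.
stage1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, iadc)
# Stage 2: predict ROP with the predicted IADC code appended as a feature.
X2 = np.column_stack([X, stage1.predict(X)])
stage2 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X2, rop)
print("stage-2 R^2 on training data:", round(stage2.score(X2, rop), 3))
```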

  6. Study of the laws governing wear of cutter bits

    Energy Technology Data Exchange (ETDEWEB)

    Potrovka, S.

    1979-01-01

    A study was made of the laws governing the change in the drilling rate of a bit in the course of a run, depending on the wear of the cutter bit. Experiments were conducted on the ZIF-1200A drilling stand with V-140T three-cutter bits with cemented fittings and hard-facing of the rear part of the outer cutter crowns. Experimental data are presented from studying the laws governing the change in the current drilling rate of the bit and the corresponding wear as functions of the total number of bit rotations while drilling gray granite. Dependences are also given of the current mechanical drilling velocity and the mechanical drilling velocity per rotation on the total number of bit rotations, as well as of the mechanical drilling velocity on the footage per bit, while drilling gray granite. It was established that the efficient time for the bit to stay on the face, both with minimum cost per meter of drilling and with maximum per-trip velocity, depends on the parameters of the drilling regime, the strength of the rocks, the depth of drilling, and the standard indicators for the per-minute cost of operating the equipment and the cost of the drill bit. Experimental data were obtained which make it possible to rapidly determine the efficient time for lifting the bit, using simple computing resources.

  7. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza

    2017-03-14

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long-term pairing for users that have a non-line of sight (NLOS) interfering link. Consequently, we study the interference-limited problem that appears between NLOS HD user pairs that are scheduled on the same FD channel. The distribution of the interference is first characterized via its distribution function, which is derived in closed form. Then, a comprehensive performance assessment for the proposed pairing scheme is provided by assuming Extended Generalized-K (EGK) fading for the downlink and studying different modulation schemes. To this end, a unified closed-form expression for the average symbol error rate is derived. Furthermore, we show the effective downlink throughput gain harvested by pairing NLOS users as a function of the average signal-to-interference ratio when compared to an idealized HD scenario with neither interference nor noise. Finally, we show the minimum required channel gain pairing threshold to harvest downlink throughput via the FD operation when compared to the HD case for each modulation scheme.

  8. Improved read disturb and write error rates in voltage-control spintronics memory (VoCSM) by controlling energy barrier height

    Science.gov (United States)

    Inokuchi, T.; Yoda, H.; Kato, Y.; Shimizu, M.; Shirotori, S.; Shimomura, N.; Koi, K.; Kamiguchi, Y.; Sugiyama, H.; Oikawa, S.; Ikegami, K.; Ishikawa, M.; Altansargai, B.; Tiwari, A.; Ohsawa, Y.; Saito, Y.; Kurobe, A.

    2017-06-01

    A hybrid writing scheme that combines the spin Hall effect and voltage-controlled magnetic-anisotropy effect is investigated in Ta/CoFeB/MgO/CoFeB/Ru/CoFe/IrMn junctions. The write current and control voltage are applied to Ta and CoFeB/MgO/CoFeB junctions, respectively. The critical current density required for switching the magnetization in CoFeB was modulated 3.6-fold by changing the control voltage from -1.0 V to +1.0 V. This modulation of the write current density is explained by the change in the surface anisotropy of the free layer from 1.7 mJ/m2 to 1.6 mJ/m2, which is caused by the electric field applied to the junction. The read disturb rate and write error rate, which are important performance parameters for memory applications, are drastically improved, and no error was detected in 5 × 108 cycles by controlling read and write sequences.

  9. Type I error rates and power of several versions of scaled chi-square difference tests in investigations of measurement invariance.

    Science.gov (United States)

    Brace, Jordan Campbell; Savalei, Victoria

    2017-09-01

    A Monte Carlo simulation study was conducted to investigate Type I error rates and power of several corrections for nonnormality to the normal theory chi-square difference test in the context of evaluating measurement invariance via structural equation modeling. Studied statistics include the uncorrected difference test, D_ML, Satorra and Bentler's (2001) original correction, D_SB1, Satorra and Bentler's (2010) strictly positive correction, D_SB10, and a hybrid procedure, D_SBH (Asparouhov & Muthén, 2013). Multiple-group data were generated from confirmatory factor analytic population models invariant on all parameters, or lacking invariance on residual variances, indicator intercepts, or factor loadings. Conditions varied in terms of the number of indicators associated with each factor in the population model, the location of noninvariance (if any), sample size, sample size ratio in the 2 groups, and nature of nonnormality. Type I error rates and power of corrected statistics were evaluated for a series of 4 nested invariance models. Overall, the strictly positive correction, D_SB10, is the best and most consistently performing statistic, as it was found to be much less sensitive than the original correction, D_SB1, to model size and sample evenness. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Autonomously stabilized entanglement between two superconducting quantum bits

    Science.gov (United States)

    Shankar, S.; Hatridge, M.; Leghtas, Z.; Sliwa, K. M.; Narla, A.; Vool, U.; Girvin, S. M.; Frunzio, L.; Mirrahimi, M.; Devoret, M. H.

    2013-12-01

    Quantum error correction codes are designed to protect an arbitrary state of a multi-qubit register from decoherence-induced errors, but their implementation is an outstanding challenge in the development of large-scale quantum computers. The first step is to stabilize a non-equilibrium state of a simple quantum system, such as a quantum bit (qubit) or a cavity mode, in the presence of decoherence. This has recently been accomplished using measurement-based feedback schemes. The next step is to prepare and stabilize a state of a composite system. Here we demonstrate the stabilization of an entangled Bell state of a quantum register of two superconducting qubits for an arbitrary time. Our result is achieved using an autonomous feedback scheme that combines continuous drives along with a specifically engineered coupling between the two-qubit register and a dissipative reservoir. Similar autonomous feedback techniques have been used for qubit reset, single-qubit state stabilization, and the creation and stabilization of states of multipartite quantum systems. Unlike conventional, measurement-based schemes, the autonomous approach uses engineered dissipation to counteract decoherence, obviating the need for a complicated external feedback loop to correct errors. Instead, the feedback loop is built into the Hamiltonian such that the steady state of the system in the presence of drives and dissipation is a Bell state, an essential building block for quantum information processing. Such autonomous schemes, which are broadly applicable to a variety of physical systems, as demonstrated by the accompanying paper on trapped ion qubits, will be an essential tool for the implementation of quantum error correction.

  11. An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code

    Directory of Open Access Journals (Sweden)

    Hendy Briantoro

    2016-04-01

    This paper presents error minimization in an OFDM system. Conventional systems usually use channel coding such as a BCH code or a convolutional code, but the performance of these codes in an OFDM system implementation is poor. The error bit rate of the OFDM system without channel coding is 5.77%; a convolutional code with code rate 1/2 reduces the error bits only to 3.85%. We therefore propose an OFDM system with a Modified Convolutional Code. In this implementation, we used Software Defined Radio (SDR), namely the Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the Modified Convolutional Code was able to recover all received characters, reducing the error bits to 0%. The performance gain of the Modified Convolutional Code is about 1 dB at a BER of 10^-4 over the BCH code and the convolutional code. Thus, the Modified Convolutional Code performs better than the BCH code or the convolutional code. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP

  12. Research on unequal error protection with punctured turbo codes in jpeg image transmission system

    Directory of Open Access Journals (Sweden)

    Lakhdar Moulay A.

    2007-01-01

    An investigation of Unequal Error Protection (UEP) methods applied to JPEG image transmission using turbo codes is presented. The JPEG image is partitioned into two groups, i.e., DC components and AC components, according to their respective sensitivity to channel noise. The highly sensitive DC components are better protected with a lower coding rate, while the less sensitive AC components use a higher coding rate. By using the s-random interleaver and the s-random odd-even interleaver combined with odd-even puncturing, we can easily fix the local rate of the turbo code. We propose a modification of the s-random interleaver design to fix the number of parity bits. A new UEP scheme for the Soft Output Viterbi Algorithm (SOVA) is also proposed to improve the performance in terms of Bit Error Rate (BER) and Peak Signal to Noise Ratio (PSNR). Simulation results are given to demonstrate how the UEP schemes outperform the equal error protection (EEP) scheme in terms of BER and PSNR.

  13. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS

    International Nuclear Information System (INIS)

    Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.

    1982-06-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors

  14. Error-resilient design of high-fidelity scalable audio coding

    Science.gov (United States)

    Yang, Dai; Ai, Hongmei; Kyriakakis, Christos; Kuo, C.-C. Jay

    2002-06-01

    Current high quality audio coding techniques mainly focus on coding efficiency, which makes them extremely sensitive to channel noise, especially in high error rate wireless channels. In our previous work, we developed a progressive high quality audio codec, which was shown to outperform MPEG-4 version 2's scalable audio codec. In this work, we extend the error-free progressive audio codec to an error-resilient scalable audio codec by re-organizing the bitstream and modifying the noiseless coding module. A dynamic segmentation scheme is used to divide an audio bitstream into several segments. Each segment contains independently decodable data so that errors will not propagate across segment boundaries. An unequal error protection scheme is then adopted to improve error resilience of the final bitstream. The performance of the proposed algorithm is tested under different error patterns of WCDMA channels with several test audio materials. Our experimental results show that the proposed approach achieves excellent error resilience at a regular user bit rate of 64 kb/s.

  15. Fast optical signal processing in high bit rate OTDM systems

    DEFF Research Database (Denmark)

    Poulsen, Henrik Nørskov; Jepsen, Kim Stokholm; Clausen, Anders

    1998-01-01

    As all-optical signal processing is maturing, optical time division multiplexing (OTDM) has also gained interest for simple networking in high capacity backbone networks. As an example of a network scenario we show an OTDM bus interconnecting another OTDM bus, a single high capacity user represen...

  16. Low Bit Rate Video Coding | Mishra | Nigerian Journal of Technology

    African Journals Online (AJOL)


  17. Biometric Quantization through Detection Rate Optimized Bit Allocation

    NARCIS (Netherlands)

    Chen, C.; Veldhuis, Raymond N.J.; Kevenaar, T.A.M.; Akkermans, A.H.M.

    2009-01-01

    Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has been focusing on the design of optimal quantization and coding for

  18. High bit rate optical transmission using midspan spectral inversion ...

    African Journals Online (AJOL)

    Midspan spectral inversion (MSSI), also known as Optical Phase Conjugation (OPC), compensates for chromatic dispersion and SPM at the same time [8-13]. The principle of MSSI is the spectral inversion of the spectrum of the optical signal, performed in the middle of the transmission span [7-8]. In the first half of the span, the signal disperses; thus.

  19. A novel bit-quad-based Euler number computing algorithm.

    Science.gov (United States)

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms.
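
    For contrast, the classical bit-quad counting scheme (Gray's method) that such algorithms improve on can be sketched directly in Python; it scans every 2 × 2 neighborhood and applies E = (Q1 - Q3 + 2*QD) / 4 for 4-connectivity:

```python
import numpy as np

def euler_number(img: np.ndarray) -> int:
    """Euler number of a binary image via bit-quad counts (4-connectivity)."""
    b = np.pad(img.astype(int), 1)       # zero border so edge quads are counted
    q1 = q3 = qd = 0
    for i in range(b.shape[0] - 1):
        for j in range(b.shape[1] - 1):
            quad = b[i:i + 2, j:j + 2]
            s = quad.sum()
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad[0, 0] == quad[1, 1]:
                qd += 1                  # one of the two diagonal patterns
    return (q1 - q3 + 2 * qd) // 4

ring = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]])             # one object containing one hole
print(euler_number(ring))                # 1 object - 1 hole = 0
```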

  20. Optimized H.264/AVC-Based Bit Stream Switching for Mobile Video Streaming

    Directory of Open Access Journals (Sweden)

    Liebl Günther

    2006-01-01

    We show the suitability of the H.264/MPEG-4 AVC extended profile for wireless video streaming applications. In particular, we exploit the advanced bit stream switching capabilities using SP/SI pictures defined in the H.264/MPEG-4 AVC standard. For both types of switching pictures, optimized encoders are developed. We introduce a framework for dynamic switching and frame scheduling. For this purpose we define an appropriate abstract representation for media encoded for video streaming, as well as for the characteristics of wireless variable bit rate channels. The achievable performance gains over H.264/MPEG-4 AVC with constant bit rate (CBR) encoding are shown for wireless video streaming over enhanced GPRS (EGPRS).

  1. Medication Errors

    Science.gov (United States)


  2. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM)

    Directory of Open Access Journals (Sweden)

    Nicolas Haverkamp

    2017-10-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results plead for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement

  3. Dissociation rate of cognate peptidyl-tRNA from the A-site of hyper-accurate and error-prone ribosomes.

    Science.gov (United States)

    Karimi, R; Ehrenberg, M

    1994-12-01

    The binding stability of AcPhe-Phe-tRNA(Phe) in the aminoacyl-tRNA site (A-site), estimated from the dissociation rate constant kd, has been studied for wild-type (wt) ribosomes, for hyperaccurate ribosomes altered in S12 [streptomycin-dependent (SmD) and streptomycin-pseudodependent (SmP) phenotypes], for error-prone ribosomes altered in S4 (Ram phenotype), and for ribosomes in complex with the error-inducing aminoglycosides streptomycin and neomycin. The AcPhe2-tRNA stability is slightly and identically reduced for SmD and SmP phenotypes in relation to wt ribosomes. The stability is increased (kd is reduced) for Ram ribosomes to about the same extent as the proof-reading accuracy is decreased for this phenotype. kd is also reduced by the action of streptomycin and neomycin, but much less than the reduction in proof-reading accuracy induced by streptomycin. Similar kd values for SmD and SmP ribosomes indicate that the cause of streptomycin dependence is not excessive drop-off of peptidyl-tRNAs from the A-site.

  4. The human error rate assessment and optimizing system HEROS - a new procedure for evaluating and optimizing the man-machine interface in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Hauptmanns, U.; Unger, H.

    2001-01-01

    A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, an acronym for Human Error Rate Assessment and Optimizing System, are based on fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and to modify basic human error probabilities from the associated databases. This implies a large portion of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals which allow mathematical operations to be performed on them. A model of the man-machine system is the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database, whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task of the operating personnel in a nuclear power plant.

  5. Entangled solitons and stochastic Q-bits

    International Nuclear Information System (INIS)

    Rybakov, Yu.P.; Kamalov, T.F.

    2007-01-01

    Stochastic realization of the wave function in quantum mechanics with the inclusion of soliton representation of extended particles is discussed. Two-soliton configurations are used for constructing entangled states in generalized quantum mechanics dealing with extended particles endowed with nontrivial spin S. With the entangled-soliton construction introduced in the nonlinear spinor field model, the Einstein-Podolsky-Rosen (EPR) correlation is calculated and shown to coincide with the quantum mechanical one for spin-1/2 particles. The concept of stochastic q-bits is used for quantum computing modelling.

  6. Object tracking based on bit-planes

    Science.gov (United States)

    Li, Na; Zhao, Xiangmo; Liu, Ying; Li, Daxiang; Wu, Shiqian; Zhao, Feng

    2016-01-01

    Visual object tracking is one of the most important components in computer vision. The main challenge for robust tracking is to handle illumination change, appearance modification, occlusion, motion blur, and pose variation. But in surveillance videos, factors such as low resolution, high levels of noise, and uneven illumination further increase the difficulty of tracking. To tackle this problem, an object tracking algorithm based on bit-planes is proposed. First, intensity and local binary pattern features represented by bit-planes are used to build two appearance models, respectively. Second, in the neighborhood of the estimated object location, a region that is most similar to the models is detected as the tracked object in the current frame. In the last step, the appearance models are updated with new tracking results in order to deal with environmental and object changes. Experimental results on several challenging video sequences demonstrate the superior performance of our tracker compared with six state-of-the-art tracking algorithms. Additionally, our tracker is more robust to low resolution, uneven illumination, and noisy video sequences.

  7. Physical Roots of It from Bit

    Science.gov (United States)

    Berezin, Alexander A.

    2003-04-01

    Why is there Something rather than Nothing? From Pythagoras ("everything is number") to Wheeler ("it from bit"), the theme of ultimate origin stresses the primordiality of the Ideal Platonic World (IPW) of mathematics. Even popular "quantum tunnelling out of nothing" can specify "nothing" only as (essentially) the IPW. The IPW exists everywhere (but nowhere in particular) and logically precedes space, time, matter or any "physics" in any conceivable universe. This leads to the propositional conjecture (axiom?) that the (meta)physical "Platonic Pressure" of the infinitude of numbers acts as the engine for self-generation of the physical universe directly out of mathematics: cosmogenesis is driven by the very fact of IPW inexhaustibility. While physics in other quantum branches of the inflating universe (Megaverse) can be (arbitrarily) different from ours, number theory (and the rest of the IPW) is not (it is unique, absolute, immutable and infinitely resourceful). Let the (infinite) totality of microstates ("its") of the entire Megaverse form a countable set. Since countable sets are hierarchically inexhaustible (Cantor's "fractal branching"), each single "it" still has an infinite tail of non-overlapping IPW-based "personal labels". Thus, each "bit" ("it") is infinitely and uniquely resourceful: a possible venue for eliminating the ergodicity basis for the eternal return cosmological argument. Physics (in any subuniverse) may be limited only by inherent impossibilities residing in the IPW, e.g. the insolvability of the Continuum Problem may be the IPW foundation of quantum indeterminacy.

  8. Source-optimized irregular repeat accumulate codes with inherent unequal error protection capabilities and their application to scalable image transmission.

    Science.gov (United States)

    Lan, Ching-Fu; Xiong, Zixiang; Narayanan, Krishna R

    2006-07-01

    The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases.

  9. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
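
    BPCS-style embedding starts from a plain bit-plane decomposition and only replaces blocks whose black-and-white border complexity marks them as noise-like. A minimal sketch of those two ingredients (my own illustration in NumPy, not the authors' codec integration):

```python
import numpy as np

def bit_planes(img8):
    """Split an 8-bit image into 8 binary planes; plane 0 is the LSB."""
    return [(img8 >> k) & 1 for k in range(8)]

def from_planes(planes):
    """Reassemble an 8-bit image from its binary planes."""
    img = np.zeros_like(planes[0], dtype=np.uint8)
    for k, p in enumerate(planes):
        img |= p.astype(np.uint8) << k
    return img

def complexity(block):
    """BPCS border complexity: fraction of adjacent bit pairs that differ."""
    changes = np.sum(block[:, 1:] != block[:, :-1]) + np.sum(block[1:, :] != block[:-1, :])
    h, w = block.shape
    return changes / (2 * h * w - h - w)   # 2hw - h - w = maximum border length
```

    In BPCS, blocks whose complexity exceeds a threshold (typically around 0.3) are treated as noise-like and replaced with (conjugated, if necessary) secret data blocks.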

  10. A 10-bit 100 MSamples/s BiCMOS D/A Converter

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Herald Holger; Tunheim, Svein Anders

    1997-01-01

    This paper presents a 10-bit Digital-to-Analogue Converter (DAC) based on the current steering principle. The DAC is fabricated in a 0.8 micron BiCMOS process and is designed to operate at a sampling rate of 100 MSamples/s. The DAC is intended for applications using direct digital synthesis...

  11. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Besides, at spectacular events a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  12. Cross Institutional Cooperation on a Shared Bit Repository

    DEFF Research Database (Denmark)

    Zierau, Eld; Kejser, Ulla Bøgvad

    2013-01-01

    This paper explores how independent institutions, such as archives and libraries, can cooperate on managing a shared bit repository with bit preservation, in order to use their resources for preservation in a more cost-effective way. It uses the OAIS Reference Model to provide a framework...... for systematically analysing institutions technical and organisational requirements for a remote bit repository. Instead of viewing a bit repository simply as Archival Storage for the institutions repositories, we argue for viewing it as consisting of a subset of functions from all entities defined by the OAIS...... Reference Model. The work is motivated by and used in a current Danish feasibility study for establishing a national bit repository. The study revealed that depending on their missions and the collections they hold, the institutions have varying requirements e.g. for bit safety, accessibility...

  13. Logic Operators on Delta-Sigma Bit-Streams

    Directory of Open Access Journals (Sweden)

    Axel Klein

    2018-01-01

    Full Text Available The fundamental logic operations NOT, OR, AND, and XOR processing bit-streams of ΔΣ-modulators are discussed herein. The resulting bit-streams are evaluated on the basis of their mean values and their standard deviations. Mathematical expressions are presented for their mean values; i.e., the logic function XOR results in the negative multiplication of two bipolar bit-streams, and the logic function AND results in the multiplication of two unipolar bit-streams. As the results are valid for bit-streams with independent high-frequency components, the normed cross-product is utilized for evaluation of the independence of the high-frequency components. In order to achieve a high independence between the input bit-streams representing the same value, the quantization noise is affected. Multiple strategies are examined and ΔΣ-modulators with different designs are chosen as the best-suited solution. The operations are evaluated on a testbench.
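
    The XOR-as-negative-multiplication result is easy to reproduce numerically. The sketch below is a toy model of mine, not the paper's modulator designs: two first-order ΔΣ modulators with independent zero-mean dither (standing in for the paper's decorrelation strategies), with bit 0 encoded as −1 and bit 1 as +1, so that XOR of the bits equals −(a·b) on the bipolar values:

```python
import numpy as np

rng = np.random.default_rng(0)

def dsm1(x, n, dither=0.05):
    """First-order delta-sigma modulator: bipolar bit-stream whose mean tracks x in [-1, 1]."""
    state, out = 0.0, np.empty(n)
    for i in range(n):
        y = 1.0 if state >= 0.0 else -1.0
        # zero-mean dither breaks the limit cycles of the first-order loop,
        # decorrelating the high-frequency components of the two streams
        state += x - y + dither * rng.uniform(-1.0, 1.0)
        out[i] = y
    return out

n, x, y = 200_000, 0.4, -0.7
a, b = dsm1(x, n), dsm1(y, n)
xor = -a * b                      # XOR on {0,1} bits == -(a*b) in the bipolar encoding
print(a.mean(), b.mean())         # ~ 0.4 and ~ -0.7
print(xor.mean(), -x * y)         # both ~ 0.28
```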

  14. Human in vitro oocyte maturation is not associated with increased imprinting error rates at LIT1, SNRPN, PEG3 and GTL2.

    Science.gov (United States)

    Kuhtz, J; Romero, S; De Vos, M; Smitz, J; Haaf, T; Anckaert, E

    2014-09-01

    Does in vitro maturation (IVM) of cumulus-enclosed germinal vesicle (GV) stage oocytes retrieved from small antral follicles in minimally stimulated cycles without an ovulatory hCG dose induce imprinting errors at LIT1, SNRPN, PEG3 and GTL2 in human oocytes? There is no significant increase in imprinting mutations at LIT1, SNRPN, PEG3 and GTL2 after IVM of cumulus-enclosed GV oocytes from small antral follicles in minimally stimulated cycles without hCG priming. Animal models have generally demonstrated correct methylation imprint establishment for in vitro grown and matured oocytes. For human IVM, well-designed studies allowing conclusions on imprint establishment are currently not available. Immature oocyte-cumulus complexes from 2 to 9 mm follicles were retrieved in polycystic ovary syndrome (PCOS) subjects in minimally stimulated cycles without hCG priming and matured in vitro. In vivo grown oocytes were retrieved after conventional ovarian stimulation for IVF/ICSI or after ovulation induction. Imprinting error rates at three maternally methylated (LIT1, SNRPN and PEG3) and one paternally methylated (GTL2) imprinted genes were compared in 71 in vitro and 38 in vivo matured oocytes. The limiting dilution bisulfite sequencing technique was applied, allowing increased sensitivity based on multiplex PCR for the imprinted genes and the inclusion of non-imprinted marker genes for cumulus cell DNA contamination. In vitro as well as in vivo matured oocytes showed only a few abnormal alleles, consistent with epimutations. The abnormalities were more frequent in immature than in mature oocytes for both groups, although no significant difference was reached. There was no statistically significant increase in imprinting errors in IVM oocytes. This single cell methylation analysis was restricted to a number of well-selected imprinted genes. Genome-wide methylation analysis of single human oocytes is currently not possible. IVM is a patient-friendly alternative to

  15. Exploiting temporal correlation of speech for error robust and bandwidth flexible distributed speech recognition

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Dalsgaard, Paul; Lindberg, Børge

    2007-01-01

    In this paper the temporal correlation of speech is exploited in front-end feature extraction, client based error recovery and server based error concealment (EC) for distributed speech recognition. First, the paper investigates a half frame rate (HFR) front-end that uses double frame shifting...... and concealment is conducted at the sub-vector level as opposed to conventional techniques where an entire vector is replaced even though only a single bit error occurs. The sub-vector EC is further combined with weighted Viterbi decoding. Encouraging recognition results are observed for the proposed techniques....... Lastly, to understand the effects of applying various EC techniques, this paper introduces three approaches consisting of speech feature, dynamic programming distance and hidden Markov model state duration comparison....

  16. On the BER and capacity analysis of MIMO MRC systems with channel estimation error

    KAUST Repository

    Yang, Liang

    2011-10-01

    In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of multiple-input multiple-output (MIMO) systems with transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. Numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.
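
    The sensitivity of MRC to imperfect channel state information is easy to probe by Monte Carlo simulation. The sketch below is my own illustration (receive-MRC only, BPSK, i.i.d. Rayleigh fading), not the authors' analysis; the parameter e is an assumed Gaussian channel-estimation error variance share:

```python
import numpy as np

rng = np.random.default_rng(7)
L, snr_db, e, n = 4, 8.0, 0.05, 500_000    # L receive antennas; e = estimation error share
snr = 10.0 ** (snr_db / 10.0)

bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0                                            # BPSK symbols
cn = lambda *shape: (rng.standard_normal(shape)
                     + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)
h = cn(L, n)                                                    # i.i.d. Rayleigh fading
r = h * s + cn(L, n) / np.sqrt(snr)                             # received samples
h_hat = np.sqrt(1.0 - e) * h + np.sqrt(e) * cn(L, n)            # imperfect CSI (Gaussian error model)
y = np.sum(np.conj(h_hat) * r, axis=0).real                     # MRC combining with the estimate
print("BER:", np.mean((y > 0) != (bits == 1)))
```

    Sweeping e from 0 upward shows the BER floor rising as the combiner weights drift away from the true channel.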

  17. A Fast Dynamic 64-bit Comparator with Small Transistor Count

    Directory of Open Access Journals (Sweden)

    Chua-Chin Wang

    2002-01-01

    Full Text Available In this paper, we propose a 64-bit fast dynamic CMOS comparator with small transistor count. Major features of the proposed comparator are the rearrangement and re-ordering of transistors in the evaluation block of a dynamic cell, and the insertion of a weak n feedback inverter, which helps the pull-down operation to ground. The simulation results given by pre-layout tools, e.g. HSPICE, and post-layout tools, e.g. TimeMill, reveal that the delay is around 2.5 ns while the operating clock rate reaches 100 MHz. A physical chip is fabricated to verify the correctness of our design by using UMC (United Microelectronics Company 0.5 μm (2P2M technology.

  18. A low power 12-bit ADC for nuclear instrumentation

    International Nuclear Information System (INIS)

    Adachi, R.; Landis, D.; Madden, N.; Silver, E.; LeGros, M.

    1992-10-01

    A low power, successive approximation, analog-to-digital converter (ADC) for low rate, low cost, battery powered applications is described. The ADC is based on a commercial 50 mW successive approximation CMOS device (CS5102). An on-chip self-calibration circuit reduces the inherent differential nonlinearity to 7%. A further reduction of the differential nonlinearity to 0.5% is attained with a four bit Gatti function. The Gatti function is distributed to minimize battery power consumption. All analog functions reside with the ADC while the noisy digital functions reside in the personal computer based histogramming memory. Fiber optic cables carry all digital information between the ADC and the personal computer based histogramming memory.

  19. Development and testing of a Mudjet-augmented PDC bit.

    Energy Technology Data Exchange (ETDEWEB)

    Black, Alan (TerraTek, Inc.); Chahine, Georges (DynaFlow, Inc.); Raymond, David Wayne; Matthews, Oliver (Security DBS); Grossman, James W.; Bertagnolli, Ken (US Synthetic); Vail, Michael (US Synthetic)

    2006-01-01

    This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.

  20. Bit-commitment-based quantum coin flipping

    International Nuclear Information System (INIS)

    Nayak, Ashwin; Shor, Peter

    2003-01-01

    In this paper we focus on a special framework for quantum coin-flipping protocols, bit-commitment-based protocols, within which almost all known protocols fit. We show a lower bound of 1/16 for the bias in any such protocol. We also analyze a sequence of multiround protocols that tries to overcome the drawbacks of the previously proposed protocols in order to lower the bias. We show an intricate cheating strategy for this sequence, which leads to a bias of 1/4. This indicates that a bias of 1/4 might be optimal in such protocols, and also demonstrates that a more clever proof technique may be required to show this optimality

  1. Quantum bit commitment with cheat sensitive binding and approximate sealing

    Science.gov (United States)

    Li, Yan-Bing; Xu, Sheng-Wei; Huang, Wei; Wan, Zong-Jie

    2015-04-01

    This paper proposes a cheat-sensitive quantum bit commitment scheme based on single photons, in which Alice commits a bit to Bob. Here, Bob's probability of successfully cheating to obtain the committed bit before the opening phase becomes close to 1/2 (no better than a guess) as the number of single photons used is increased. And if Alice alters her committed bit after the commitment phase, her cheating will be detected with a probability that becomes close to 1 as the number of single photons used is increased. The scheme is easy to realize with present day technology.

  2. Medical error

    African Journals Online (AJOL)

    QuickSilver

    It is only when mistakes are recognised that learning can occur... 'All our previous medical training has taught us to fear error, as error is associated with blame. This fear may lead to concealment and this in turn can lead to fraud.' How real this fear is! All of us, during our medical training, have had the maxim 'prevention is.

  3. Homodyne detection for bit-by-bit holographic storage memories

    Science.gov (United States)

    Maire, G.; Pauliat, G.; Roosen, G.

    2006-10-01

    Bit-by-bit holographic storage memories are an interesting alternative to the conventional page-oriented holographic approach owing to their simplified optical architecture. We propose and validate here a readout procedure suited to such memories, based on homodyne detection of the amplitude diffracted by the holograms. This increases the amount of useful signal detected and is therefore promising for raising the data transfer rate of these memories.

  4. The Deliverability of the BIT Programme at Lahti UAS in Training BIT Experts

    OpenAIRE

    Nghiem, Duc Long

    2014-01-01

    Information Technology has become a vital and indispensable part of business in every industry. In fact, IT is the primary factor that differentiates many businesses from their competitors. Organizations usually rely on IT for several strategic business solutions such as communication, information management, customer relationship management, and marketing. In the near future, the business labor force will see a rising demand for BIT experts who possess both business expertise and IT skills...

  5. Multiple Memory Structure Bit Reversal Algorithm Based on Recursive Patterns of Bit Reversal Permutation

    Directory of Open Access Journals (Sweden)

    K. K. L. B. Adikaram

    2014-01-01

    Full Text Available With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Because the bit reversal process requires considerable processing time of the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and the second halves of the bit reversal permutation (BRP) and stated that it may seriously impact the computer's cache performance if implemented. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or a single one-dimensional memory structure. We also found a new index mapping, even after the recursive splitting of the BRP into equal-sized slots. The four-array and four-vector versions of the BRA with the new index mapping showed 34% and 16% performance improvements, respectively, relative to the corresponding versions of Elster's linear BRA, which uses a single one-dimensional memory structure.
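
    For reference, the bit-reversal permutation itself is a short computation; the paper's contribution concerns the memory layout used to apply it, not the mapping. A naive sketch:

```python
def bit_reverse(i, nbits):
    """Reverse the nbits-wide binary representation of index i."""
    r = 0
    for _ in range(nbits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

def brp(x):
    """Return x permuted into bit-reversed order (len(x) must be a power of two)."""
    n = len(x)
    nbits = n.bit_length() - 1
    return [x[bit_reverse(i, nbits)] for i in range(n)]

print(brp(list(range(8))))   # [0, 4, 2, 6, 1, 5, 3, 7]
```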

  6. An 8-bit 100-MS/s digital-to-skew converter embedded switch with a 200-ps range for time-interleaved sampling

    Science.gov (United States)

    Xiaoshi, Zhu; Chixiao, Chen; Jialiang, Xu; Fan, Ye; Junyan, Ren

    2013-03-01

    A sampling switch with an embedded digital-to-skew converter (DSC) is presented. The proposed switch eliminates time-interleaved ADCs' skews by adjusting the boosted voltage. A similar bridged capacitors' charge sharing structure is used to minimize the area. The circuit is fabricated in a 0.18 μm CMOS process and achieves sub-1 ps resolution and 200 ps timing range at a rate of 100 MS/s. The power consumption is 430 μW at maximum. The measurement result also includes a 2-channel 14-bit 100 MS/s time-interleaved ADCs (TI-ADCs) with the proposed DSC switch's demonstration. This scheme is widely applicable for the clock skew and aperture error calibration demanded in TI-ADCs and SHA-less ADCs.

  7. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography.

    Science.gov (United States)

    Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang

    2010-08-16

    Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: a high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs, based on mutually coupled chaotic lasers, are synchronized. Using information-theoretic analysis we demonstrate security against a powerful computational eavesdropper, capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.

  8. Progressive and Error-Resilient Transmission Strategies for VLC Encoded Signals over Noisy Channels

    Directory of Open Access Journals (Sweden)

    Guillemot Christine

    2006-01-01

    Full Text Available This paper addresses the issue of robust and progressive transmission of signals (e.g., images, video) encoded with variable length codes (VLCs) over error-prone channels. This paper first describes bitstream construction methods offering good properties in terms of error resilience and progressivity. In contrast with related algorithms described in the literature, all proposed methods have a linear complexity as the sequence length increases. The applicability of soft-input soft-output (SISO) and turbo decoding principles to the resulting bitstream structures is investigated. In addition to error resilience, the amenability of the bitstream construction methods to progressive decoding is considered. The problem of code design for achieving good performance in terms of error resilience and progressive decoding with these transmission strategies is then addressed. The VLC code has to be such that the symbol energy is mainly concentrated on the first bits of the symbol representation (i.e., on the first transitions of the corresponding codetree). Simulation results reveal high performance in terms of symbol error rate (SER) and mean-square reconstruction error (MSE). These error-resilience and progressivity properties are obtained without any penalty in compression efficiency. Codes with such properties are of strong interest for the binarization of m-ary sources in state-of-the-art image and video coding systems making use of, for example, the EBCOT or CABAC algorithms. A prior statistical analysis of the signal allows the construction of the appropriate binarization code.

  9. Rating

    OpenAIRE

    Karas, Vladimír

    2006-01-01

    Characteristics of rating. Classification and types of rating (issue rating vs. issuer rating; long-term vs. short-term rating; international vs. local rating). General requirements placed on ratings. The rating production process. Solicited rating. Unsolicited rating. The rating process based on freely accessible information. Rating systems in use. Rating criteria. Use and interpretation of the rating grade. Functions of rating. Rating in the context of BASEL II. Rating in the context of economic crises....

  10. Support research for development of improved geothermal drill bits

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, R.R.; Barker, L.M.; Green, S.J.; Winzenried, R.W.

    1977-06-01

    Progress in background research needed to develop drill bits for the geothermal environment is reported. Construction of a full-scale geothermal wellbore simulator and a geothermal seal testing machine was completed. Simulated tests were conducted on full-scale bits. Screening tests on elastomeric seals under geothermal conditions are reported. (JGB)

  11. Cross Institutional Cooperation on a Shared Bit Repository

    DEFF Research Database (Denmark)

    Zierau, Eld; Kejser, Ulla Bøgvad

    2010-01-01

    This paper explores how independent institutions, such as archives and libraries, can cooperate on managing a shared bit repository with bit preservation in order to use their resources for preservation in a more cost-effective way. It uses the OAIS Reference Model to provide a framework...

  12. APL portability in 16 bits microprocessors

    International Nuclear Information System (INIS)

    Cordova Costa, Felisa

    1981-01-01

    The present work deals with an automatic program translation method as a solution to the software portability problem. The source machine is a minicomputer of the SEMS MITRA range; the target machines are three 16-bit microprocessors: the INTEL 8086, MOTOROLA 68000 and ZILOG Z-8000. The software to be translated is written in macro-assembly language (MAS) and consists of an operating system, an APL interpreter and some other software tools. The translation method uses a machine-independent intermediate language describing the program in the source language. This intermediate language, consisting of a set of macro-instructions, is then assembled using a link library; this library defines the macro-instructions which generate the target microprocessor object code. The whole translation is carried out by the source machine, which produces, after linkage editing, a table memory map (IME). The loadable object code is then transferred to the target machine. For optimization or input-output purposes, some modules can be written in the target machine's assembly language and processed by a specific assembler on the target machine, or on the source machine if the latter possesses a cross-assembler; the resulting binary codes are then merged with the binary codes produced during the automatic translation phase. The method proposed here may be extended to any 16-bit computer by a simple change of the macro-instruction library. This work allows the creation of an APL machine based on microprocessors, preserving the original software and thus maintaining its initial reliability. The work also led to a closer examination of the hardware problems connected with the various target machine configurations. The difficulties met during this work mainly arise from the different behaviour of the target machines, especially the setting of indicators or flags, addressing modes and interruption mechanisms. This highlights the need to design new microprocessors that are either partially user-microprogrammable, or with some functions

  13. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  14. A Novel Least Significant Bit First Processing Parallel CRC Circuit

    Directory of Open Access Journals (Sweden)

    Xiujie Qu

    2013-01-01

    Full Text Available In the HDLC serial communication protocol, CRC calculation can process either the most or the least significant bit of the data first. Nowadays most CRC calculation is based on most-significant-bit (MSB) first processing. An algorithm for least-significant-bit (LSB) first processing parallel CRC is proposed in this paper. Based on the general expression of the least-significant-bit-first serial CRC, and using the state equation method of linear systems, we derive a recursive formula by mathematical deduction. The recursive formula is applicable to any number of bits processed in parallel and any generator polynomial. According to the formula, we present the parallel circuit of the CRC calculation and implement it in VHDL on an FPGA. The results verify the accuracy and effectiveness of this method.
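
    A bit-serial LSB-first (reflected) CRC is the natural reference model that such a parallel circuit unrolls several bits at a time. A sketch of the serial recurrence in software (my own illustration, using the reflected CRC-32 polynomial so the result can be checked against zlib; HDLC frames would typically use a 16- or 32-bit FCS of the same reflected form):

```python
import zlib

def crc32_lsb_first(data: bytes, poly=0xEDB88320, crc=0xFFFFFFFF):
    """Bit-serial, least-significant-bit-first (reflected) CRC-32."""
    for byte in data:
        crc ^= byte                          # the byte's LSB enters the register first
        for _ in range(8):
            crc = (crc >> 1) ^ (poly if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

msg = b"123456789"
assert crc32_lsb_first(msg) == zlib.crc32(msg)   # standard check value 0xCBF43926
print(hex(crc32_lsb_first(msg)))
```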

  15. A Memristor as Multi-Bit Memory: Feasibility Analysis

    Directory of Open Access Journals (Sweden)

    O. Bass

    2015-06-01

    Full Text Available The use of emerging memristor materials for advanced electrical devices such as multi-valued logic is expected to outperform today's binary-logic digital technologies. We show here an example of such a non-binary device with the design of a multi-bit memory. While conventional memory cells can store only 1 bit, memristor-based multi-bit cells can store more information within a single device, thus increasing the information storage density. Such devices can potentially utilize the non-linear resistance of memristor materials for efficient information storage. We analyze the performance of such memory devices based on their expected variations in order to determine the viability of memristor-based multi-bit memory. A design of a read/write scheme and a simple model for this cell lay the grounds for full integration of a memristor multi-bit memory cell.

  16. IMAGE STEGANOGRAPHY DENGAN METODE LEAST SIGNIFICANT BIT (LSB

    Directory of Open Access Journals (Sweden)

    M. Miftakul Amin

    2014-02-01

    Full Text Available Security in delivering a secret message is an important factor in the spread of information in cyberspace. To protect a message so that it is delivered only to the party entitled to it, a message concealment mechanism is needed. The purpose of this study was to hide a secret text message inside digital images in true-color 24-bit RGB format. The method used to insert the secret message is LSB (Least Significant Bit) substitution, replacing the last (8th) bit of each RGB color component. The RGB image type was chosen because its embedding capacity is greater than that of a grayscale image, since 3 bits of the message can be inserted into each pixel. Tests show that hiding messages in a digital image does not significantly reduce the quality of the digital image, and the hidden message can be extracted again, so that messages can be delivered to the recipient safely.
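
    A minimal sketch of the embedding and extraction described above (hypothetical helpers of mine, assuming an H×W×3 uint8 NumPy image): each secret bit replaces the least significant bit of one colour component, giving 3 bits of capacity per pixel.

```python
import numpy as np

def lsb_embed(img, payload: bytes):
    """Hide payload in the LSBs of a 24-bit RGB image (3 bits per pixel capacity)."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = img.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear LSB, then set it
    return flat.reshape(img.shape)

def lsb_extract(img, n_bytes):
    """Recover n_bytes previously hidden by lsb_embed."""
    bits = img.reshape(-1)[: 8 * n_bytes] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stego = lsb_embed(cover, b"secret")
assert lsb_extract(stego, 6) == b"secret"
```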

  17. Installation of MCNP on 64-bit parallel computers

    International Nuclear Information System (INIS)

    Meginnis, A.B.; Hendricks, J.S.; McKinney, G.W.

    1995-01-01

    The Monte Carlo radiation transport code MCNP has been successfully ported to two 64-bit workstations, the SGI and DEC Alpha. We found the biggest problem for installation on these machines to be Fortran and C mismatches in argument passing. Correction of these mismatches enabled, for the first time, dynamic memory allocation on 64-bit workstations. Although the 64-bit hardware is faster because 8 bytes are processed at a time rather than 4 bytes, we found no speed advantage in true 64-bit coding versus implicit double precision when porting an existing code to the 64-bit workstation architecture. We did find that PVM multitasking is very successful and represents a significant performance enhancement for scientific workstations.

  18. A little bit of legal history

    CERN Multimedia

    2010-01-01

    On Monday 18 October, a little bit of legal history will be made when the first international tripartite agreement between CERN and its two Host States is signed. This agreement, which has been under negotiation since 2004, clarifies the working conditions of people employed by companies contracted to CERN. It will facilitate the management of service contracts both for CERN and its contractors.   Ever since 1965, when CERN first crossed the border into France, the rule of territoriality has applied. This means that anyone working for a company contracted to CERN whose job involves crossing the border is subject to the employment legislation of both states. The new agreement simplifies matters by making only one legislation apply per contract, that of the country in which most of the work is carried out. This is good for CERN, it’s good for the companies, and it’s good for their employees. It is something that all three parties to the agreement have wanted for some time, and I...

  19. A perceptual-based approach to bit allocation for H.264 encoder

    Science.gov (United States)

    Ou, Tao-Sheng; Huang, Yi-Hsin; Chen, Homer H.

    2010-07-01

    Since the ultimate receivers of encoded video are human eyes, the characteristics of human visual system should be taken into consideration in the design of bit allocation to improve the perceptual video quality. In this paper, we incorporate the structural similarity index as a distortion metric and propose a novel rate-distortion model to characterize the relationship between rate and the structural similarity index. Based on the model, we develop an optimum bit allocation and rate control scheme for H.264 encoders. Experimental results show that up to 25% bitrate reduction over the JM reference software can be achieved. Subjective evaluation further confirms that the proposed scheme preserves more structural information and improves the perceptual quality of the encoded video.

  20. A 9pJ/bit SOP optical transceiver with 80 Gbps two-way bandwidth

    Science.gov (United States)

    Liu, Fengman; Li, Baoxia; Li, Zhihua; Wan, Lixi; Gao, Wei; Chu, Yanbiao; Du, Tianmin; Song, Jian; Xiang, Haifei; Wang, Haidong; Yang, Kun; Yang, Binbin

    2011-12-01

    The high-speed parallel optical transmitter module based on a VCSEL/PD array, a high-speed specialized integrated circuit, and fiber-array micro-optical components presents great application and development potential. Coupling alignment between a VCSEL/PD array and a waveguide array using a silicon optical bench (SiOB) has been reported [1-3]. In this paper, a passive coupling method based on an SiOB and the packaging of the VCSEL/PD arrays are introduced; the coupling efficiency is about 80% with a misalignment tolerance of +/-15 μm, and the optical crosstalk is about -70 dB. A silicon optical bench is fabricated as a platform for integrated photonic components. The thermal and electrical performance of the optical sub-package is analyzed and optimized. A high-density SOP optical transceiver is designed based on this coupling method. Signal integrity analysis and optimization of the high-density differential signal pairs on the PCB is conducted, and S-parameters are extracted to optimize impedance and minimize the effects of discontinuities in the electrical channels. In addition, to suppress simultaneous switching noise (SSN) and optimize the target impedance, a novel embedded capacitor filter is used instead of the conventional power supply filter; the film capacitor measures 14 μm in thickness, has a dielectric constant of 16 and a capacitance density of 1 nF/cm2. The transceiver loop-back diagram is shown; the bit error rate of the optical transceiver is measured in loop-back with a 2^31-1 PRBS pattern and found to be 10^-12 at 10 Gbps while dissipating 90 mW/channel.

  1. Ventilator-associated pneumonia: the influence of bacterial resistance, prescription errors, and de-escalation of antimicrobial therapy on mortality rates

    Directory of Open Access Journals (Sweden)

    Ana Carolina Souza-Oliveira

    2016-09-01

    Conclusion: Prescription errors influenced the mortality of patients with ventilator-associated pneumonia, underscoring the challenge of proper ventilator-associated pneumonia treatment, which requires continuous reevaluation to ensure that the clinical response to therapy meets expectations.

  2. Explaining quantitative variation in the rate of Optional Infinitive errors across languages: a comparison of MOSAIC and the Variational Learning Model.

    Science.gov (United States)

    Freudenthal, Daniel; Pine, Julian; Gobet, Fernand

    2010-06-01

    In this study, we use corpus analysis and computational modelling techniques to compare two recent accounts of the OI stage: Legate & Yang's (2007) Variational Learning Model and Freudenthal, Pine & Gobet's (2006) Model of Syntax Acquisition in Children. We first assess the extent to which each of these accounts can explain the level of OI errors across five different languages (English, Dutch, German, French and Spanish). We then differentiate between the two accounts by testing their predictions about the relation between children's OI errors and the distribution of infinitival verb forms in the input language. We conclude that, although both accounts fit the cross-linguistic patterning of OI errors reasonably well, only MOSAIC is able to explain why verbs that occur more frequently as infinitives than as finite verb forms in the input also occur more frequently as OI errors than as correct finite verb forms in the children's output.

  3. BetaBit: A fast generator of autocorrelated binary processes for geophysical research

    Science.gov (United States)

    Serinaldi, Francesco; Lombardo, Federico

    2017-05-01

    We introduce a fast and efficient non-iterative algorithm, called BetaBit, to simulate autocorrelated binary processes describing the occurrence of natural hazards, system failures, and other physical and geophysical phenomena characterized by persistence, temporal clustering, and low rate of occurrence. BetaBit overcomes the simulation constraints posed by the discrete nature of the marginal distributions of binary processes by using the link existing between the correlation coefficients of this process and those of the standard Gaussian processes. The performance of BetaBit is tested on binary signals with power-law and exponentially decaying autocorrelation functions (ACFs) corresponding to Hurst-Kolmogorov and Markov processes, respectively. An application to real-world sequences describing rainfall intermittency and the occurrence of strong positive phases of the North Atlantic Oscillation (NAO) index shows that BetaBit can also simulate surrogate data preserving the empirical ACF as well as signals with autoregressive moving average (ARMA) dependence structures. Extensions to cyclo-stationary processes accounting for seasonal fluctuations are also discussed.
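
    The link that BetaBit exploits can be illustrated with the simplest clipped-Gaussian construction (a toy sketch of mine, not the BetaBit algorithm itself): threshold a stationary AR(1) Gaussian series so exceedances occur with probability p; the binary series then inherits persistence from the Gaussian autocorrelation, and the exact binary ACF follows from bivariate normal orthant probabilities, which is the relation BetaBit inverts.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
n, p, rho = 200_000, 0.1, 0.9          # occurrence rate p, AR(1) coefficient rho

# Stationary AR(1) Gaussian process with unit variance
g = np.empty(n)
g[0] = rng.standard_normal()
eps = rng.standard_normal(n) * np.sqrt(1.0 - rho**2)
for t in range(1, n):
    g[t] = rho * g[t - 1] + eps[t]

x = (g > NormalDist().inv_cdf(1.0 - p)).astype(int)   # clip: P[x = 1] = p
print(x.mean())                                       # ~ 0.1
print(np.corrcoef(x[:-1], x[1:])[0, 1])               # lag-1 ACF of the binary series
```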

  4. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients used in encoding depends on the compression rate and is used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides a large embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.

  5. Modern X86 assembly language programming 32-bit, 64-bit, SSE, and AVX

    CERN Document Server

    Kusswurm, Daniel

    2014-01-01

    Modern X86 Assembly Language Programming shows the fundamentals of x86 assembly language programming. It focuses on the aspects of the x86 instruction set that are most relevant to application software development. The book's structure and sample code are designed to help the reader quickly understand x86 assembly language programming and the computational capabilities of the x86 platform. Major topics of the book include the following: 32-bit core architecture, data types, internal registers, memory addressing modes, and the basic instruction set; X87 core architecture, register stack, special

  6. Integer Representations towards Efficient Counting in the Bit Probe Model

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet

    2011-01-01

    We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both in the worst case and in the average case. A counter is space-optimal if it represents any number in the range [0, ..., 2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters, where we only need to represent numbers in the range [0, ..., L] for some integer L, we define the efficiency...

  7. The Economics of BitCoin Price Formation

    OpenAIRE

    Ciaian, Pavel; Rajcaniova, Miroslava; Kancs, d'Artis

    2014-01-01

    This is the first article that studies BitCoin price formation by considering both the traditional determinants of currency price, e.g., market forces of supply and demand, and digital-currency-specific factors, e.g., BitCoin's attractiveness for investors and users. The conceptual framework is based on the Barro (1979) model, from which we derive testable hypotheses. Using daily data for five years (2009–2015) and applying time-series analytical mechanisms, we find that market forces and Bit...

  8. Fitness Probability Distribution of Bit-Flip Mutation.

    Science.gov (United States)

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
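
    For Onemax the exact offspring-fitness distribution under uniform bit-flip mutation is a small computation: if the parent has k ones out of n bits, the offspring fitness is k − X + Y with X ~ Bin(k, p) (ones flipped off) and Y ~ Bin(n−k, p) (zeros flipped on), and every probability mass is a polynomial in p, as the paper states. A sketch of mine, not the authors' Krawtchouk-polynomial machinery:

```python
from math import comb

def onemax_offspring_pmf(n, k, p):
    """P[offspring fitness = f] for an Onemax parent with k ones, bit-flip probability p."""
    pmf = {}
    for x in range(k + 1):                     # ones flipped to zero
        px = comb(k, x) * p**x * (1 - p)**(k - x)
        for y in range(n - k + 1):             # zeros flipped to one
            py = comb(n - k, y) * p**y * (1 - p)**(n - k - y)
            f = k - x + y
            pmf[f] = pmf.get(f, 0.0) + px * py
    return pmf

pmf = onemax_offspring_pmf(n=10, k=7, p=0.1)
print(sum(pmf.values()))                       # 1.0
print(sum(f * q for f, q in pmf.items()))      # mean = k(1-p) + (n-k)p = 6.6
```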

  9. Different Mass Processing Services in a Bit Repository

    DEFF Research Database (Denmark)

    Jurik, Bolette; Zierau, Eld

    2011-01-01

    This paper investigates how a general bit repository mass processing service using different programming models and platforms can be specified. Such a service is needed in large data archives, especially libraries, where different ways of doing mass processing is needed for different digital...... library tasks. Different hardware platforms as basis for mass processing will usually already exist for libraries as part of a bit preservation solution for long term bit preservation. The investigation of a general mass processing service shows that different aspects of mass processing are too dependent...

  10. Bit Manipulation Accelerator for Communication Systems Digital Signal Processor

    Directory of Open Access Journals (Sweden)

    Jeong Sug H

    2005-01-01

    Full Text Available This paper proposes application-specific instructions and their bit manipulation unit (BMU), which efficiently support scrambling, convolutional encoding, puncturing, interleaving, and bit stream multiplexing. The proposed DSP employs the BMU supporting parallel shift and XOR (exclusive-OR) operations and bit insertion/extraction operations on multiple data. The proposed architecture has been modeled in VHDL and synthesized using the SEC 0.18 μm standard cell library, and the gate count of the BMU is only about 1700 gates. Performance comparisons show that the number of clock cycles can be reduced for scrambling, convolutional encoding, and interleaving compared with existing DSPs.

  11. The Spanish version of the Patient-Rated Wrist Evaluation outcome measure: cross-cultural adaptation process, reliability, measurement error and construct validity.

    Science.gov (United States)

    Rosales, Roberto S; García-Gutierrez, Rayco; Reboso-Morales, Luis; Atroshi, Isam

    2017-08-24

    The Patient-Rated Wrist Evaluation (PRWE) is a widely used measure of patient-reported disability and pain related to wrist disorders. We performed cross-cultural adaptation of the PRWE into Spanish (Spain) and assessed reliability and construct validity in patients with distal radius fracture. Adaptation of the English version to Spanish (Spain) was performed using translation/back translation methodology. The measurement properties of the PRWE-Spanish were assessed in a sample of 40 consecutive patients (31 women), mean age 58 (SD 19) years, with extra-articular distal radius fractures treated with closed reduction and cast. The patients completed the PRWE-Spanish and the standard Spanish versions of the 11-item Disabilities of the Arm, Shoulder and Hand (QuickDASH) and EQ-5D questionnaires at baseline (health status before fracture) and at 8, 9, 12, and 13 weeks after treatment. Internal-consistency reliability was assessed with the Cronbach alpha coefficient and test-retest reliability with the intraclass correlation coefficient (ICC) comparing responses at 8 and 9 weeks and responses at 12 and 13 weeks. Cross-sectional precision was analyzed with the Standard Error of the Measurement (SEM). Longitudinal precision for test-retest reliability coefficient was analyzed with the Standard Error of the Measurement difference (SEMdiff) and the Minimal Detectable Change at 90% (MDC 90 ) and 95% (MDC 95 ) confidence levels. For assessing construct validity we hypothesized that the PRWE-Spanish (lower score indicates less disability and pain) would have strong positive correlation with the QuickDASH (lower score indicates less disability) and moderate negative correlation with the EQ-5D Index (higher score indicates better health); Spearman correlation coefficient (r) was used. For the PRWE total score, Cronbach alpha was 0.98 (SEM = 2.67) at baseline and 0.96 (SEM = 4.37) at 8 weeks. For test-retest reliability ICC was 0.94 (8 and 9 weeks) and 0.96 (12 and 13

  12. Refractive Errors

    Science.gov (United States)

    Refractive Errors in Children: ... birth and can occur at any age. The prevalence of myopia is low in US children under the age of eight, but much higher ...

  13. Progressive significance map and its application to error-resilient image transmission.

    Science.gov (United States)

    Hu, Yang; Pearlman, William A; Li, Xin

    2012-07-01

    Set partition coding (SPC) has shown tremendous success in image compression. Despite its popularity, the lack of error resilience remains a significant challenge to the transmission of images in error-prone environments. In this paper, we propose a novel data representation called the progressive significance map (prog-sig-map) for error-resilient SPC. It structures the significance map (sig-map) into two parts: a high-level summation sig-map and a low-level complementary sig-map (comp-sig-map). Such a structured representation of the sig-map allows us to improve its error-resilient property at the price of only a slight sacrifice in compression efficiency. For example, we have found that a fixed-length coding of the comp-sig-map in the prog-sig-map renders 64% of the coded bitstream insensitive to bit errors, compared with 40% for the conventional sig-map. Simulation results show that the prog-sig-map can achieve highly competitive rate-distortion performance for binary symmetric channels while maintaining low computational complexity. Moreover, we note that the prog-sig-map is complementary to existing independent packetization and channel-coding-based error-resilient approaches and readily lends itself to other source coding applications such as distributed video coding.

  14. Network-Aware Reference Frame Control for Error-Resilient H.264/AVC Video Streaming Service

    Directory of Open Access Journals (Sweden)

    Hui-Seon Gang

    2016-01-01

    Full Text Available To provide high-quality video streaming services in a mobile communication network, a large bandwidth and reliable channel conditions are required. However, mobile communication services still encounter limited bandwidth and varying channel conditions. The streaming video system compresses video with motion estimation and compensation using multiple reference frames. The multiple reference frame structure can reduce the compressed bit rate of video; however, it can also cause significant error propagation when the video in the channel is damaged. Even though the streaming video system includes error-resilience tools to mitigate quality degradation, error propagation is inevitable because not all errors can be refreshed under the multiple reference frame structure. In this paper, a new network-aware error-resilient streaming video system is introduced. The proposed system can mitigate error propagation by controlling the number of reference frames based on channel status. The performance enhancement is demonstrated by comparing the proposed method to the conventional streaming system using a static number of reference frames.

  15. Error handling for the CDF Silicon Vertex Tracker

    CERN Document Server

    Belforte, S; Dell'Orso, Mauro; Donati, S; Galeotti, S; Giannetti, P; Morsani, F; Punzi, G; Ristori, L; Spinella, F; Zanetti, A M

    2000-01-01

    The SVT online tracker for the CDF upgrade reconstructs two- dimensional tracks using information from the Silicon Vertex detector (SVXII) and the Central Outer Tracker (COT). The SVT has an event rate of 100 kHz and a latency time of 10 mu s. The system is composed of 104 VME 9U digital boards (of 8 different types) and it is implemented as a data driven architecture. Each board runs on its own 30 MHz clock. Since the data output from the SVT (few Mbytes/sec) are a small fraction of the input data (200 Mbytes/sec), it is extremely difficult to track possible internal errors by using only the output stream. For this reason several diagnostic tools have been implemented: local error registers, error bits propagated through the data streams and the Spy Buffer system. Data flowing through each input and output stream of every board are continuously copied to memory banks named Spy Buffers which act as built in logic state analyzers hooked continuously to internal data streams. The contents of all buffers can be ...

  16. Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping

    Science.gov (United States)

    Piedrafita, Álvaro; Renes, Joseph M.

    2017-12-01

    We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.

  17. Effect of alteration of translation error rate on enzyme microheterogeneity as assessed by variation in single molecule electrophoretic mobility and catalytic activity.

    Science.gov (United States)

    Nichols, Ellert R; Shadabi, Elnaz; Craig, Douglas B

    2009-06-01

    The role of translation error for Escherichia coli individual beta-galactosidase molecule catalytic and electrophoretic heterogeneity was investigated using CE-LIF. An E. coli rpsL mutant with a hyperaccurate translation phenotype produced enzyme molecules that exhibited significantly less catalytic heterogeneity but no reduction of electrophoretic heterogeneity. Enzyme expressed with streptomycin-induced translation error had increased thermolability, lower activity, and no significant change to catalytic or electrophoretic heterogeneity. Modeling of the electrophoretic behaviour of beta-galactosidase suggested that variation of the hydrodynamic radius may be the most significant contributor to electrophoretic heterogeneity.

  18. 2015 Big Windy, Oregon 4-Band 8 Bit Imagery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data are LiDAR orthorectified aerial photographs (8-bit GeoTIFF format) within the Oregon Lidar Consortium Big Windy project area. The imagery coverage is...

  19. 2014 Metro, Oregon 4-Band 8 Bit Imagery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data are LiDAR orthorectified aerial photographs (8-bit GeoTIFF format) within the Oregon Lidar Consortium Portland project area. The imagery coverage is...

  20. Experimental bit commitment based on quantum communication and special relativity.

    Science.gov (United States)

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Kent, A; Gisin, N; Wehner, S; Zbinden, H

    2013-11-01

    Bit commitment is a fundamental cryptographic primitive in which Bob wishes to commit a secret bit to Alice. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is however possible when Alice and Bob split into several agents exchanging classical and quantum information at times and locations suitably chosen to satisfy specific relativistic constraints. Here we report on an implementation of a bit commitment protocol using quantum communication and special relativity. Our protocol is based on [A. Kent, Phys. Rev. Lett. 109, 130501 (2012)] and has the advantage that it is practically feasible with arbitrarily large separations between the agents in order to maximize the commitment time. By positioning agents in Geneva and Singapore, we obtain a commitment time of 15 ms. A security analysis considering experimental imperfections and finite statistics is presented.

  1. Instrumented Bit for In-Situ Spectroscopy (IBISS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to build and critically test the Instrumented Bit for In-Situ Spectroscopy (IBISS), a novel system for in-situ, rapid analyses of planetary subsurface...

  2. 2012 Sandy River, Oregon Natural Color 8 Bit Imagery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data are LiDAR orthorectified aerial photographs (8-bit GeoTIFF format) within the Oregon Lidar Consortium Sandy River project area. The imagery coverage is...

  3. Pseudo-random bit generator based on Chebyshev map

    Science.gov (United States)

    Stoyanov, B. P.

    2013-10-01

    In this paper, we study a pseudo-random bit generator based on two Chebyshev polynomial maps. The novel derived algorithm shows perfect statistical properties, as established by a number of statistical tests.
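
    The general recipe behind such generators can be sketched as follows (an illustrative construction of mine, not the authors' exact algorithm): iterate two Chebyshev maps T_k(x) = cos(k·arccos x), which are chaotic on [−1, 1] for degree k ≥ 2, and combine one thresholded bit from each iteration by XOR:

```python
import math

def chebyshev_prbg(n_bits, k1=4, k2=5, x=0.3141, y=0.5772):
    """Toy pseudo-random bit generator from two XOR-combined Chebyshev maps."""
    out = []
    for _ in range(n_bits):
        x = math.cos(k1 * math.acos(x))     # T_k1(x), chaotic on [-1, 1] for k1 >= 2
        y = math.cos(k2 * math.acos(y))
        out.append((x > 0) ^ (y > 0))       # one thresholded bit per map, XOR-combined
    return out

bits = chebyshev_prbg(16)
print("".join("1" if b else "0" for b in bits))
```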

  4. Detection and Symbol Synchronization for Multiple-bit Per Photon Optical Communications

    Science.gov (United States)

    Marshall, W. K.

    1985-01-01

    Methods of detection and synchronization in a highly efficient direct detection optical communication system are reported. Results of measurements on this moderate-rate demonstration system capable of transmitting 2.5 bits/detected photon in low-background situations indicate that symbol slot synchronization is not a problem, and that a simple symbol detection scheme is adequate for this situation. This system is a candidate for interplanetary optical communications.

  5. Ultrahigh-Spectral-Efficiency WDM/SDM Transmission Using PDM-1024-QAM Probabilistic Shaping With Adaptive Rate

    DEFF Research Database (Denmark)

    Hu, Hao; Yankov, Metodi Plamenov; Da Ros, Francesco

    2018-01-01

    efficiencies for individual WDM/SDM channels have been applied according to their channel conditions, by adjusting the SD-FEC overhead without changing the modulation format. Probabilistically shaped PDM-1024-QAM has been used to further increase the aggregated achievable rate due to the added performance...... efficiency of 297.82 bit/s/Hz on a 12.5-GHz grid and 7.01-Tbit/s spatial-super-channel on a 25-GHz grid without multiple-input multiple-output (MIMO) processing. Actual soft-decision forward error correction (SD-FEC) decoding was employed to obtain error-free performance, and adaptive rates and spectral...

  6. Content Progressive Coding of Limited Bits/pixel Images

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren

    1999-01-01

    A new lossless context-based method for content progressive coding of limited bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.

  7. 8-Bit Gray Scale Images of Fingerprint Image Groups

    Science.gov (United States)

    NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (Web, free access)   The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  8. Quantum states representing perfectly secure bits are always distillable

    International Nuclear Information System (INIS)

    Horodecki, Pawel; Augusiak, Remigiusz

    2006-01-01

    It is proven that recently introduced states with perfectly secure bits of cryptographic key (private states representing secure bits) [K. Horodecki et al., Phys. Rev. Lett. 94, 160502 (2005)], as well as their multipartite and higher-dimension generalizations, always represent distillable entanglement. The corresponding lower bounds on distillable entanglement are provided. We also present a simple alternative proof that, for any bipartite quantum state, entanglement cost is an upper bound on the distillable cryptographic key in a bipartite scenario.

  9. VCSEL Scaling, Laser Integration on Silicon, and Bit Energy

    Science.gov (United States)

    2017-03-01

    Silicon Photonics: Figure 1 shows the electronic circuitry and comparison key to analyzing photonic bit energies for transceivers used in data centers... Keywords: VCSELs, nanoscale lasers, optical interconnects, silicon

  10. Comparison and status of 32 bit backplane bus architectures

    International Nuclear Information System (INIS)

    Muller, K.D.

    1985-01-01

    With the introduction of 32 bit microprocessors, several new 32 bit backplane bus architectures have been developed and are in the process of standardization. Among these are Future Bus (IEEE P896.1), VME-Bus (IEEE 1014), MULTIBUS II, Nu-Bus, and Fastbus (IEEE 960). The paper describes and compares the main features of these bus architectures and reviews the status of national and international standardization efforts.

  11. On the Performance of Multihop Heterodyne FSO Systems With Pointing Errors

    KAUST Repository

    Zedini, Emna

    2015-03-30

    This paper reports the end-to-end performance analysis of a multihop free-space optical system with amplify-and-forward (AF) channel-state-information (CSI)-assisted or fixed-gain relays using heterodyne detection over Gamma–Gamma turbulence fading with pointing error impairments. In particular, we derive new closed-form results for the average bit error rate (BER) of a variety of binary modulation schemes and the ergodic capacity in terms of the Meijer's G function. We then offer new accurate asymptotic results for the average BER and the ergodic capacity at high SNR values in terms of simple elementary functions. For the capacity, novel asymptotic results at low and high average SNR regimes are also obtained via an alternative moments-based approach. All analytical results are verified via computer-based Monte-Carlo simulations.
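    As an illustration of the Monte-Carlo verification mentioned above, the sketch below estimates an average BER over Gamma–Gamma turbulence by sampling the irradiance as a product of two unit-mean Gamma variates. The α, β values and the conditional-BER formula Q(√(2·SNR·I)) for a BPSK-like scheme are illustrative assumptions, not the paper's exact system model.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)

def gamma_gamma_samples(alpha: float, beta: float, n: int) -> np.ndarray:
    """Sample unit-mean Gamma-Gamma irradiance as a product of two Gamma variates."""
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return x * y

def avg_ber(snr_db: float, alpha: float = 4.0, beta: float = 2.0, n: int = 1_000_000) -> float:
    """Monte-Carlo average BER, assuming conditional BER Q(sqrt(2*SNR*I))."""
    snr = 10 ** (snr_db / 10)
    i_samples = gamma_gamma_samples(alpha, beta, n)
    ber = 0.5 * erfc(np.sqrt(snr * i_samples))  # Q(sqrt(2x)) = erfc(sqrt(x))/2
    return ber.mean()

for snr_db in (0, 10, 20):
    print(f"SNR = {snr_db:2d} dB -> BER ~ {avg_ber(snr_db):.3e}")
```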

  12. Influence of Transmitting Pointing Errors on High Speed WDM-AMI-Is-OWC Transmission System

    Science.gov (United States)

    Shatnawi, Abdallah Ahmad; Bin Mohd Warip, Mohd Nazri; Safar, Anuar Mat

    2017-12-01

    Inter-satellite communication is one of the revolutionary techniques that can be used to transmit high-speed data between satellites. However, space disturbances such as transmitting pointing errors play a significant role when designing inter-satellite communication systems. These disturbances can cause shutdown of the inter-satellite link due to increased attenuation during data transmission. The present work develops an integrated data transmission system incorporating alternate mark inversion (AMI), wavelength division multiplexing (WDM), and a polarization interleaving (PI) scheme for transmitting data at 160 Gbps over a 1,000 km inter-satellite link under the influence of these disturbances. The performance of the integrated 160 Gbps transmission over up to 1,000 km is evaluated by means of signal-to-noise ratio (SNR), total received power, bit error rate, and eye diagrams.

  13. The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks

    KAUST Repository

    Afify, Laila H.

    2015-08-18

    Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts away many important aspects of wireless communication. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach was proposed to extend the analysis to capture these aspects and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework that is also able to capture fine wireless communication details, similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular network scenarios.

  14. 4-bit digital to analog converter using R-2R ladder and binary weighted resistors

    Science.gov (United States)

    Diosanto, J.; Batac, M. L.; Pereda, K. J.; Caldo, R.

    2017-06-01

    A 4-bit digital-to-analog converter designed using two methods, binary-weighted resistors and an R-2R ladder, is presented in this paper. The main components used in constructing both circuits were resistors of different values, an operational amplifier (LM741), and single-pole double-throw switches. Both circuits were designed in MULTISIM software to test their ideal behaviour, and in FRITZING software for layout design and fabrication on a printed circuit board. Implementing both systems in actual circuitry helps in determining and comparing the advantages and disadvantages of each. The binary-weighted circuit proved the more accurate DAC, with a percentage error of 0.267%, compared with a minimum percentage error of 4.16% for the R-2R ladder circuit.
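    A quick sketch of the ideal transfer function that both topologies implement: the analog output is the reference voltage scaled by the binary-weighted sum of the input bits. The reference voltage and sign convention below are illustrative assumptions (an inverting op-amp stage would negate the output).

```python
def dac_output(bits: list[int], v_ref: float = 5.0) -> float:
    """Ideal 4-bit DAC output: V_out = V_ref * (b3*8 + b2*4 + b1*2 + b0) / 16.

    bits is [b3, b2, b1, b0], MSB first. The binary-weighted and R-2R ladder
    topologies realize this same weighted sum; they differ only in how the
    weights are produced with resistors.
    """
    code = sum(b << i for i, b in enumerate(reversed(bits)))
    return v_ref * code / 2 ** len(bits)

for code in (0b0000, 0b1000, 0b1111):
    bits = [(code >> i) & 1 for i in (3, 2, 1, 0)]
    print(f"{code:04b} -> {dac_output(bits):.3f} V")  # 0.000, 2.500, 4.688 V
```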

  15. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combining several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. The aims were to examine the correlation of the errors of the hand and third molar methods and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to +0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of the errors (hand = 0.97 years, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
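    The reported improvement is consistent with inverse-variance weighting of two unbiased, uncorrelated estimates, where the combined error variance is 1/(1/σ₁² + 1/σ₂²). A short check (assuming this standard weighting is what the authors compute):

```python
# Inverse-variance weighting of two uncorrelated, unbiased age estimates:
# the combined estimate is sum(w_i * x_i) with w_i proportional to 1/sd_i^2,
# and its error standard deviation is 1/sqrt(sum(1/sd_i^2)).
sd_hand, sd_teeth = 0.97, 1.35  # years, from the abstract

combined_var = 1.0 / (1.0 / sd_hand**2 + 1.0 / sd_teeth**2)
print(f"combined SD = {combined_var ** 0.5:.2f} years")  # -> 0.79, matching the abstract
```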

  16. Bit Rate Segmentation Mechanism in Dynamic Adaptive Streaming over HTTP (DASH) for Video Streaming Applications

    Directory of Open Access Journals (Sweden)

    Muhammad Audy Bazly

    2015-12-01

    Full Text Available This paper analyzes an Internet-based video streaming service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) on an Internet network, adapting to the Hyper Text Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packets for streaming. The initial DASH stage compresses the source video with the H.264 codec to lower its bit rate. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are assembled into a Media Presentation Description (MPD) streaming format, known as MPEG-DASH. The MPEG-DASH video runs on a platform with the bitdash player integrated with bitcoin. With this scheme, the video is available in several bit-rate variants, which gives rise to the concept of scalability of streaming video services on the client side. The main target of the mechanism is smooth MPEG-DASH video display on the client. Simulation results show that the scalable MPEG-DASH-based streaming scheme improves the quality of the image displayed on the client side, where video buffering can be kept constant and smooth for the duration of the video.
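    The client-side scalability described here amounts to picking a representation from the MPD based on measured conditions. The sketch below shows one simple throughput-based selection rule; the bit-rate ladder and the 0.8 safety factor are illustrative assumptions, not values from the paper.

```python
# Minimal throughput-based DASH representation selection (illustrative).
# The MPD advertises the same content at several bit rates; the client picks
# the highest representation whose bit rate fits under the measured
# throughput, scaled by a safety margin to keep the buffer from draining.

LADDER_KBPS = [235, 750, 1750, 4300]  # hypothetical bit-rate ladder from an MPD
SAFETY = 0.8                          # use only 80% of measured throughput

def select_representation(measured_kbps: float) -> int:
    budget = measured_kbps * SAFETY
    candidates = [r for r in LADDER_KBPS if r <= budget]
    return max(candidates) if candidates else LADDER_KBPS[0]

for throughput in (300, 1200, 6000):
    print(f"throughput {throughput:5d} kbps -> fetch {select_representation(throughput)} kbps segments")
```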

  17. Binary Biometrics: An Analytic Framework to Estimate the Bit Error Probability under Gaussian Assumption

    NARCIS (Netherlands)

    Kelkboom, E.J.C.; Molina, G.; Kevenaar, T.A.M.; Veldhuis, Raymond N.J.; Jonker, Willem

    2008-01-01

    In recent years the protection of biometric data has gained increased interest from the scientific community. Methods such as the helper data system, fuzzy extractors, fuzzy vault and cancellable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic

  18. FSO channel estimation for OOK modulation with APD receiver over atmospheric turbulence and pointing errors

    Science.gov (United States)

    Dabiri, Mohammad Taghi; Sadough, Seyed Mohammad Sajad; Khalighi, Mohammad Ali

    2017-11-01

    In free-space optical (FSO) links, atmospheric turbulence and pointing errors lead to scintillation in the received signal. Due to its ease of implementation, intensity modulation with direct detection (IM/DD) based on ON-OFF keying (OOK) is a popular signaling scheme in these systems. For long-haul FSO links, avalanche photodiodes (APDs) are commonly used; they provide an internal gain in photo-detection, allowing larger transmission ranges compared with PIN photo-detector (PD) counterparts. Since optimal OOK detection at the receiver requires knowledge of the instantaneous channel fading coefficient, channel estimation is an important task that can considerably impact link performance. In this paper, we investigate the channel estimation issue when using an APD at the receiver. Here, optimal signal detection is considerably more delicate than in the case of a PIN PD. In fact, given that APD-based receivers are usually shot-noise limited, the receiver noise has a different distribution depending on whether the transmitted bit is '0' or '1', and moreover, its statistics are further affected by the scintillation. To deal with this, we first consider minimum mean-square-error (MMSE), maximum a posteriori probability (MAP), and maximum likelihood (ML) channel estimation over an observation window encompassing several consecutive received OOK symbols. Due to the high computational complexity of these methods, in a second step we propose an ML channel estimator based on the expectation-maximization (EM) algorithm, which has a low implementation complexity, making it suitable for high data-rate FSO communications. Numerical results show that, for a sufficiently large observation window, the proposed EM channel estimator achieves bit error rate performance very close to that with perfect channel state information. We also derive the Cramer-Rao lower bound (CRLB) of the MSE of estimation errors and show that for a large enough observation
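    To make the EM idea concrete: with OOK, the received samples form a two-component mixture whose '1' component mean scales with the unknown channel gain, and EM alternates between soft bit decisions and re-estimating the gain. The sketch below uses a simplified equal-variance Gaussian noise model; the paper's APD shot-noise model makes the noise statistics signal-dependent, which this sketch ignores.

```python
import numpy as np

rng = np.random.default_rng(7)

def em_channel_gain(r: np.ndarray, sigma: float, n_iter: int = 20) -> float:
    """EM estimate of the channel gain h from OOK samples r_k = h*s_k + n_k.

    Simplifying assumptions: s_k in {0, 1}, i.i.d. equiprobable, and Gaussian
    noise with known, signal-independent variance sigma^2 (an APD receiver
    would have signal-dependent noise statistics).
    """
    h = r.max()  # crude initial guess for the gain
    for _ in range(n_iter):
        # E-step: posterior probability that each symbol was a '1'.
        ll1 = np.exp(-((r - h) ** 2) / (2 * sigma**2))
        ll0 = np.exp(-(r**2) / (2 * sigma**2))
        p1 = ll1 / (ll1 + ll0)
        # M-step: weighted update of the gain given the soft decisions.
        h = np.sum(p1 * r) / np.sum(p1)
    return h

# Simulate an observation window and estimate the gain.
h_true, sigma, n = 0.8, 0.2, 200
bits = rng.integers(0, 2, size=n)
r = h_true * bits + sigma * rng.standard_normal(n)
print(f"true h = {h_true}, EM estimate = {em_channel_gain(r, sigma):.3f}")
```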

  19. Adaptive Rate Control Algorithm for H.264/AVC Considering Scene Change

    Directory of Open Access Journals (Sweden)

    Xiao Chen

    2013-01-01

    Full Text Available Scene changes in H.264 video sequences have a significant impact on video communication quality. This paper presents a novel adaptive rate control algorithm for H.264/AVC, requiring little additional computation, based on a scene change expression. From the frame complexity quotient, we define a scene change factor, which is used to allocate bits to each frame adaptively. Experimental results show that the algorithm handles scene changes effectively. In comparison with the JVT-G012 algorithm, our algorithm reduces rate error and improves the average peak signal-to-noise ratio with smaller deviation. It not only controls bit rate accurately, but also achieves better video quality with lower encoder buffer fullness, improving quality of service.
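    The record gives only the outline of the bit-allocation rule; below is a minimal sketch of how a scene-change factor derived from relative frame complexity might scale a per-frame bit budget. The complexity measure and the proportional scaling rule are illustrative assumptions, not the paper's exact formulas.

```python
def allocate_bits(complexities: list[float], total_bits: int) -> list[int]:
    """Allocate a bit budget across frames in proportion to a scene-change
    factor: each frame's complexity relative to the group average.

    Illustrative only: the cited algorithm defines its scene change factor
    from a specific frame complexity quotient and feeds it into a full
    H.264 rate controller.
    """
    avg = sum(complexities) / len(complexities)
    factors = [c / avg for c in complexities]  # factor > 1 suggests a scene change
    total_factor = sum(factors)
    return [round(total_bits * f / total_factor) for f in factors]

# A spike in frame complexity (e.g. a scene cut at frame 4) draws extra bits.
print(allocate_bits([1.0, 1.1, 0.9, 3.5, 1.0], total_bits=100_000))
```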

  20. Entanglement and Quantum Error Correction with Superconducting Qubits

    Science.gov (United States)

    Reed, Matthew

    2015-03-01

    Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost, however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These ``transmon'' qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it, before explaining some of the specific results reported in my thesis. One major result is the first realization of three-qubit quantum error correction in a solid-state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.
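    To illustrate the three-qubit bit-flip code the talk refers to, the sketch below is a textbook state-vector simulation: a data state a|0⟩ + b|1⟩ is encoded into three qubits, a bit flip hits one qubit, and CNOTs plus a Toffoli restore the data qubit. The helper names and amplitudes are mine; the actual experiment realizes these gates in superconducting hardware.

```python
import numpy as np

N = 3  # data qubit 0 plus two redundancy qubits

def _bit(i: int, q: int) -> int:
    return (i >> (N - 1 - q)) & 1  # qubit 0 is the most significant bit

def apply_x(state, target):
    """Pauli-X (bit flip) on `target`."""
    new = np.empty_like(state)
    for i in range(2**N):
        new[i] = state[i ^ (1 << (N - 1 - target))]
    return new

def apply_cnot(state, control, target):
    """Flip `target` on basis states where `control` is 1."""
    new = state.copy()
    for i in range(2**N):
        if _bit(i, control):
            new[i] = state[i ^ (1 << (N - 1 - target))]
    return new

def apply_toffoli(state, c1, c2, target):
    """Flip `target` on basis states where both controls are 1."""
    new = state.copy()
    for i in range(2**N):
        if _bit(i, c1) and _bit(i, c2):
            new[i] = state[i ^ (1 << (N - 1 - target))]
    return new

# Data state a|0> + b|1> on qubit 0; redundancy qubits start in |0>.
a, b = 0.6, 0.8
state = np.zeros(2**N, dtype=complex)
state[0b000], state[0b100] = a, b

state = apply_cnot(state, 0, 1)        # encode into a|000> + b|111>
state = apply_cnot(state, 0, 2)
state = apply_x(state, 1)              # the error: a bit flip on qubit 1
state = apply_cnot(state, 0, 1)        # decode: syndrome lands on qubits 1, 2
state = apply_cnot(state, 0, 2)
state = apply_toffoli(state, 1, 2, 0)  # correct qubit 0 if both ancillas flag

for i, amp in enumerate(state):
    if abs(amp) > 1e-12:
        print(f"|{i:03b}>: {amp.real:+.2f}")  # qubit 0 carries a, b again; ancillas hold the syndrome
```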