WorldWideScience

Sample records for bit error ratio

  1. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    … of performing bit error rate testing at 100 Gbps. In particular, we show how Bit Error Rate Testing (BERT) can be performed over an aggregated 100G Attachment Unit Interface (CAUI) by encapsulating the test data in Ethernet frames at line speed. Our results show that framed bit error rate testing can … functionality besides the bit error rate tester …

  2. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the rege…

  3. Bit Error Rate Minimizing Channel Shortening Equalizers for Single Carrier Cyclic Prefixed Systems

    National Research Council Canada - National Science Library

    Martin, Richard K; Vanbleu, Koen; Ysebaert, Geert

    2007-01-01

    … Previous work on channel shortening has largely been in the context of digital subscriber lines, a wireline system that allows bit allocation; thus it has focused on maximizing the bit rate for a given bit error rate (BER) …

  4. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    Science.gov (United States)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    The binary differential phase-shift keying (2DPSK) signal is mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in poor channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, which aims to reduce the bit error rate of the 2DPSK signal received by coherent demodulation. According to the theory of SR, a nonlinear receiver model is established, which is used to receive the 2DPSK signal under small signal-to-noise ratio (SNR) conditions (between -15 dB and 5 dB), and compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.

  5. Frame, bit and chip error rate evaluation for a DSSS communication system

    Directory of Open Access Journals (Sweden)

    F.R. Castillo–Soria

    2008-07-01

    Full Text Available The relation between chip, bit and frame error rates in the Additive White Gaussian Noise (AWGN) channel for a Direct Sequence Spread Spectrum (DSSS) system under Multiple Access Interference (MAI) conditions is evaluated. A simple error-correction code (ECC) is used for the Frame Error Rate (FER) evaluation. 64-bit (chip) Pseudo Noise (PN) sequences are employed for the spread-spectrum transmission. An iterative Monte Carlo (stochastic) simulation is used to evaluate how many chip errors are introduced by channel effects and how they are related to bit errors. It can be observed how bit errors may eventually cause a frame error, i.e., a CODEC or communication error. These results are useful for academics, engineers, and professionals alike.
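
    A minimal Monte Carlo sketch in Python illustrating the chip-to-bit-to-frame error relation described above, under simplifying assumptions (hard-decision despreading modeled as a majority vote over 64 chips per bit, an assumed raw chip error probability, no MAI and no ECC); it is not the authors' simulator:

      import numpy as np

      rng = np.random.default_rng(1)
      n_frames, bits_per_frame, chips_per_bit = 2000, 64, 64
      p_chip = 0.35                    # assumed raw chip error probability (illustrative)

      bit_errors = frame_errors = 0
      for _ in range(n_frames):
          bits = rng.integers(0, 2, bits_per_frame)
          chips = np.repeat(2 * bits - 1, chips_per_bit)        # each bit spread over 64 chips
          flips = rng.random(chips.size) < p_chip               # channel flips individual chips
          rx_chips = np.where(flips, -chips, chips)
          corr = rx_chips.reshape(bits_per_frame, chips_per_bit).sum(axis=1)
          rx_bits = (corr > 0).astype(int)                      # hard-decision despreading
          errs = int(np.count_nonzero(rx_bits != bits))
          bit_errors += errs
          frame_errors += errs > 0                              # uncoded frame fails on any bit error

      print("chip error probability:", p_chip)
      print("bit error rate        :", bit_errors / (n_frames * bits_per_frame))
      print("frame error rate      :", frame_errors / n_frames)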

  6. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2014-04-01

    The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of existing linear detectors such as channel inversion, maximal ratio combining, biased maximum likelihood, and minimum mean square error detectors. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.

  7. Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor

    2005-01-01

    We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.

  8. An Alternative Method to Compute the Bit Error Probability of Modulation Schemes Subject to Nakagami- Fading

    Directory of Open Access Journals (Sweden)

    Madeiro Francisco

    2010-01-01

    Full Text Available This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative density function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (R-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.

  9. Analytical expression for the bit error rate of cascaded all-optical regenerators

    DEFF Research Database (Denmark)

    Mørk, Jesper; Öhman, Filip; Bischoff, S.

    2003-01-01

    We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.

  10. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
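
    As a quick numeric sanity check of the high-SNR approximation quoted above (the code length and distance used here are hypothetical, not values from the paper), a minimal Python sketch:

      # P_b ≈ (d_H / N) * P_s at high SNR with systematic encoding
      def approx_bit_error_prob(d_H, N, P_s):
          """Approximate bit error probability from the block error probability."""
          return (d_H / N) * P_s

      # hypothetical example: length-128 code, d_H = 12, block error probability 1e-4
      print(approx_bit_error_prob(d_H=12, N=128, P_s=1e-4))   # ≈ 9.4e-06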

  11. Error Correcting Coding of Telemetry Information for Channel with Random Bit Inversions and Deletions

    Directory of Open Access Journals (Sweden)

    M. A. Elshafey

    2014-01-01

    Full Text Available This paper presents a method of error-correcting coding of digital information. A feature of this method is the treatment of bit inversions and bit skips (deletions) caused by a loss of synchronization between the receiving and transmitting devices or by other factors. The article gives a brief overview of the features, characteristics, and modern construction methods of LDPC and convolutional codes, and considers a general model of the communication channel that takes into account the probabilities of bit inversion, deletion and insertion. The proposed coding scheme is based on a combination of LDPC coding and convolutional coding. A comparative analysis of the proposed combined coding scheme and a coding scheme containing only an LDPC coder is performed; both schemes have the same coding rate. Experiments were carried out on two models of communication channels at different probability values of bit inversion and deletion. The first model allows only random bit inversion, while the other allows both random bit inversion and deletion. In the experiments, the decoding delay of the convolutional coder is investigated, and the results of these experimental studies demonstrate the ability of the proposed coding scheme to improve the efficiency of data recovery over a communication channel with noise that causes random bit inversions and deletions, without decreasing the coding rate.
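
    A minimal Python sketch of the kind of channel model described above, with assumed (hypothetical) inversion and deletion probabilities; the LDPC/convolutional codec itself is not reproduced here:

      import random

      def noisy_channel(bits, p_invert=0.01, p_delete=0.005, rng=None):
          """Pass a bit sequence through a channel with random inversions and deletions."""
          rng = rng or random.Random(0)
          received = []
          for b in bits:
              if rng.random() < p_delete:       # bit is deleted: receiver never sees it
                  continue
              if rng.random() < p_invert:       # bit is inverted
                  b ^= 1
              received.append(b)
          return received

      rng = random.Random(42)
      tx = [rng.randint(0, 1) for _ in range(1000)]
      rx = noisy_channel(tx, rng=rng)
      print(f"sent {len(tx)} bits, received {len(rx)} bits")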

  12. Detecting bit-flip errors in a logical qubit using stabilizer measurements

    Science.gov (United States)

    Ristè, D.; Poletto, S.; Huang, M.-Z.; Bruno, A.; Vesterinen, V.; Saira, O.-P.; DiCarlo, L.

    2015-01-01

    Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements. PMID:25923318
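
    For orientation, a minimal classical simulation of the three-bit repetition code and its two parity checks; the quantum hardware, measurement back-action and continuous monitoring of the experiment are of course not captured by this sketch:

      # Classical simulation of the three-bit repetition code: two parity checks
      # (q0^q1 and q1^q2) locate a single bit flip without reading the data value.
      SYNDROME_TO_FLIP = {
          (0, 0): None,   # no error detected
          (1, 0): 0,      # only parity q0^q1 violated -> flip on qubit 0
          (1, 1): 1,      # both parities violated     -> flip on qubit 1
          (0, 1): 2,      # only parity q1^q2 violated -> flip on qubit 2
      }

      def correct(q):
          s = (q[0] ^ q[1], q[1] ^ q[2])        # stabilizer-like parity measurements
          flip = SYNDROME_TO_FLIP[s]
          if flip is not None:
              q[flip] ^= 1
          return q

      encoded = [1, 1, 1]                        # logical 1 encoded as |111>
      encoded[2] ^= 1                            # inject a single bit-flip error
      print(correct(encoded))                    # -> [1, 1, 1]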

  13. Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems

    Directory of Open Access Journals (Sweden)

    Syed Imtiaz Husain

    2009-01-01

    Full Text Available Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes the UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented through a Rake. The aim to capture enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening or time domain equalizer (TEQ) can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser and multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER along with efficiently shortening the channel.

  14. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors, based on intensity modulation/direct detection (IM/DD) and heterodyne detection over the general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  15. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. As such, recent scientific research and studies have revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, and to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  16. A minimum bit error-rate detector for amplify and forward relaying systems

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2012-05-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector does not require channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.

  17. FPGA-based Bit-Error-Rate Tester for SEU-hardened Optical Links

    CERN Document Server

    Detraz, S; Moreira, P; Papadopoulos, S; Papakonstantinou, I; Seif El Nasr, S; Sigaud, C; Soos, C; Stejskal, P; Troska, J; Versmissen, H

    2009-01-01

    The next generation of optical links for future High-Energy Physics experiments will require components qualified for use in radiation-hard environments. To cope with radiation induced single-event upsets, the physical layer protocol will include Forward Error Correction (FEC). Bit-Error-Rate (BER) testing is a widely used method to characterize digital transmission systems. In order to measure the BER with and without the proposed FEC, simultaneously on several devices, a multi-channel BER tester has been developed. This paper describes the architecture of the tester, its implementation in a Xilinx Virtex-5 FPGA device and discusses the experimental results.
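
    A minimal software sketch of the basic BER-counting idea behind such a tester (PRBS-7 pattern generation, an assumed channel error probability, and straightforward error counting); the FPGA architecture and FEC of the paper are not modeled:

      import random

      def prbs7(n, state=0x7F):
          """Generate n bits of a PRBS-7 sequence (polynomial x^7 + x^6 + 1)."""
          out = []
          for _ in range(n):
              new = ((state >> 6) ^ (state >> 5)) & 1
              state = ((state << 1) | new) & 0x7F
              out.append(new)
          return out

      rng = random.Random(0)
      tx = prbs7(100_000)
      rx = [b ^ (rng.random() < 1e-3) for b in tx]     # channel with an assumed BER of 1e-3
      errors = sum(t != r for t, r in zip(tx, rx))
      print(f"measured BER = {errors / len(tx):.2e}")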

  18. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

    Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis contains crosstalk calculation of the mutual field variance for different number of users. It is shown that the crosstalk noise depends deeply on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results of the power fluctuation at the received channel due to the data modulation at the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  19. Bit Error Rate Performance of a MIMO-CDMA System Employing Parity-Bit-Selected Spreading in Frequency Nonselective Rayleigh Fading

    Directory of Open Access Journals (Sweden)

    Claude D'Amours

    2011-01-01

    Full Text Available We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10·log10(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and provides insight into why the different techniques provide improvement over conventional MIMO-CDMA systems.
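
    A quick numeric check of the quoted asymptotic gain for a few transmit-antenna counts Nt:

      import math
      # Asymptotic gain of parity-bit-selected spreading over conventional MIMO-CDMA
      for Nt in (2, 3, 4, 8):
          print(f"Nt = {Nt}: gain = {10 * math.log10(Nt):.2f} dB")
      # prints 3.01, 4.77, 6.02 and 9.03 dB respectively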

  20. Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology

    Directory of Open Access Journals (Sweden)

    Qiuqiu WEN

    2017-06-01

    Full Text Available A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, a real-time two-channel beam pointing error (BPE) compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principle is analyzed, and the mathematical model of beam scanning control is established. According to the principle of the antenna element phase shift, both the antenna element phase-shift law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of the BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.

  1. Novel MGF-based expressions for the average bit error probability of binary signalling over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2014-04-01

    The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed-form - by means of using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2] and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.

  2. A 10-bit ratio-independent cyclic ADC with offset canceling for a CMOS image sensor

    Science.gov (United States)

    Kaiming, Nie; Suying, Yao; Jiangtao, Xu; Zhaorui, Jiang

    2014-03-01

    A 10-bit ratio-independent switched-capacitor (SC) cyclic analog-to-digital converter (ADC) with offset canceling for a CMOS image sensor is presented. The proposed ADC completes an N-bit conversion in 1.5N clock cycles with one operational amplifier. Combining ratio-independent and polarity swapping techniques, the conversion characteristic of the proposed cyclic ADC is inherently insensitive both to the capacitor ratio and to the amplifier offset voltage. Therefore, the circuit can be realized in a small die area and it is suitable to serve as the column-parallel ADC in CMOS image sensors. A prototype ADC is fabricated in 0.18-μm one-poly four-metal CMOS technology. The measured results indicate that the ADC has a signal-to-noise and distortion ratio (SNDR) of 53.6 dB and a DNL of +0.12/−0.14 LSB at a conversion rate of 600 kS/s. The standard deviation of the offset variation of the ADC is reduced from 2.5 LSB to 0.5 LSB. Its power dissipation is 250 μW with a 1.8 V supply, and its area is 0.03 × 0.8 mm².

  3. Multilayered optical bit memory with a high signal-to-noise ratio in fluorescent polymethylmethacrylate

    Science.gov (United States)

    Nie, Zhaogang; Lee, Heungyeol; Yoo, Hyeonggeun; Lee, Youlee; Kim, Younshil; Lim, Ki-Soo; Lee, Myeongkyu

    2009-03-01

    We report on the three-dimensional optical memory utilizing a photoluminescence (PL) change in polymethylmethacrylate. Irradiation with a femtosecond pulsed laser (800 nm, 1 kHz, 100 fs) induced a strong PL spectrum in the visible range, which may result from the photogeneration of emissive radicals. Multilayered patterns were recorded inside the bulk sample by tightly focusing a pulsed laser beam. The pattern images were read out by a reflection-type fluorescent confocal microscope which detected the blue-green emission at 410-510 nm. The stored bits were retrieved with a high signal-to-noise ratio in the absence of any cross-talk.

  4. SITE project. Phase 1: Continuous data bit-error-rate testing

    Science.gov (United States)

    Fujikawa, Gene; Kerczewski, Robert J.

    1992-09-01

    The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.

  5. Performance analysis for the bit-error rate of SAC-OCDMA systems

    Science.gov (United States)

    Feng, Gang; Cheng, Wenqing; Chen, Fujun

    2015-09-01

    Under the low-power assumption, Gaussian statistics invoked via the central limit theorem are feasible for predicting the upper bound in the spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) system. However, this approach severely underestimates the bit-error rate (BER) performance of the system under the high-power assumption. Fortunately, the exact negative binomial (NB) model is a perfect replacement for the Gaussian model in the prediction and evaluation. Based on NB statistics, a more accurate closed-form expression is analyzed and derived for the SAC-OCDMA system. The experiment shows that the obtained expression provides a more precise prediction of the BER performance under both the low- and high-power assumptions.

  6. The effect of narrow-band digital processing and bit error rate on the intelligibility of ICAO spelling alphabet words

    Science.gov (United States)

    Schmidt-Nielsen, Astrid

    1987-08-01

    The recognition of ICAO spelling alphabet words (ALFA, BRAVO, CHARLIE, etc.) is compared with diagnostic rhyme test (DRT) scores for the same conditions. The voice conditions include unprocessed speech; speech processed through the DOD standard linear-predictive-coding algorithm operating at 2400 bit/s with random error rates of 0, 2, 5, 8, and 12 percent; and speech processed through an 800-bit/s pattern-matching algorithm. The results suggest that, with distinctive vocabularies, word intelligibility can be expected to remain high even when DRT scores fall into the poor range. However, once the DRT scores fall below 75 percent, the intelligibility can be expected to fall off rapidly; at DRT scores below 50, the recognition of a distinctive vocabulary should also fall below 50 percent.

  7. Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite

    Directory of Open Access Journals (Sweden)

    Wahyu Pamungkas

    2010-04-01

    Full Text Available One problem causing a reduction of energy in satellite communication systems is the misalignment of the earth station antenna pointing to the satellite. Error in pointing affects the quality of the information signal to energy bit in the earth station. In this research, error in pointing angle occurred only at the receiving (Rx) antenna, while the transmitting (Tx) antenna precisely pointed to the satellite. The research was conducted on two satellites, namely TELKOM-1 and TELKOM-2. At first, measurement was made by directing the Tx antenna precisely to the satellite, resulting in an antenna pattern shown by a spectrum analyzer. The output from the spectrum analyzer is drawn to the right scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Due to drifting from the precise pointing, the received link budget indicated by the antenna pattern was influenced. This antenna pattern shows the reduction of the received power level as a result of pointing misalignment. As a conclusion, increasing misalignment of pointing to the satellite results in a reduction of the received signal parameters in the link budget of the down-link traffic.

  8. On the average capacity and bit error probability of wireless communication systems

    KAUST Repository

    Yilmaz, Ferkan

    2011-12-01

    Analysis of the average binary error probabilities and average capacity of wireless communication systems over generalized fading channels has been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probability and the average capacity of single and multiple link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper can be easily calculated and is applicable to a wide variety of fading scenarios, and the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.

  9. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    Science.gov (United States)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10^-12) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10·log10(L−1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10^-3 BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10^-3 BER.
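
    For reference, the conventional format-complexity penalty mentioned above evaluates as follows (the paper's point is that the true penalty at high, FEC-enabled BER levels is somewhat smaller):

      import math
      # Conventional optical power penalty of PAM-L relative to PAM-2: 10*log10(L - 1)
      for L in (2, 4, 8):
          print(f"PAM-{L}: {10 * math.log10(L - 1):.2f} dB")
      # prints 0.00, 4.77 and 8.45 dB respectively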

  10. Bit-error-rate performance analysis of self-heterodyne detected radio-over-fiber links using phase and intensity modulation

    DEFF Research Database (Denmark)

    Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso

    2010-01-01

    We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature-biased intensity modulation (IM), in terms of bit-error-rate (BER) and optical signal-to-noise ratio (OSNR). In both links, self-heterodyne receivers perform down-conversion of the radio frequency (RF) subcarrier signal. A theoretical model including noise analysis is constructed to calculate the Q factor and estimate the BER performance. Furthermore, we experimentally validate our prediction in the theoretical modeling. Both the experimental …

  11. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    Science.gov (United States)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on the space diversity reception, the binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. Under independently and identically distributed and independently and non-identically distributed dual branches, the analytical average bit error rate (ABER) expressions in terms of H-Fox function for maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques are derived, respectively, by transforming the modified Bessel function of the second kind into the integral form of Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.

  12. Comparison of the Bit Error Rate of Reed-Solomon and Bose-Chaudhuri-Hocquenghem Codes Using 32-FSK Modulation

    Directory of Open Access Journals (Sweden)

    Eva Yovita Dwi Utami

    2016-11-01

    Full Text Available The Reed-Solomon (RS) code and the Bose-Chaudhuri-Hocquenghem (BCH) code are error-correcting codes belonging to the class of cyclic block codes. Error-correcting codes are needed in communication systems to reduce errors in the transmitted information. This paper presents the BER performance of communication systems using the RS code, the BCH code, and a system without RS or BCH coding, with 32-FSK modulation over Additive White Gaussian Noise (AWGN), Rayleigh and Rician channels. The error-reduction capability is measured by the resulting Bit Error Rate (BER). The results show that, as the SNR increases, the RS code lowers the BER more steeply than the system with the BCH code, whereas the BCH code is advantageous at low SNR, yielding a better BER than the system with the RS code.

  13. Narrowband (LPC-10) Vocoder Performance under Combined Effects of Random Bit Errors and Jet Aircraft Cabin Noise.

    Science.gov (United States)

    1983-12-01

    In both conditions, the feature "sibilation" obtained the highest scores, and the features "graveness" and "sustention" received the poorest scores, but the latter were under much greater impairment in the noise environment. Details of the variations in scores for sustention are shown in Figure 34 ("Comparison of Regression Lines Estimating Scores for the Sustention Intelligibility Feature vs Bit Error Rate for the DOD LPC-10 Vocoder").

  14. On Bit Error Probability and Power Optimization in Multihop Millimeter Wave Relay Systems

    KAUST Repository

    Chelli, Ali

    2018-01-15

    5G networks are expected to provide gigabit data rates to users via millimeter-wave (mmWave) communication technology. One of the major problems faced by mmWaves is that they cannot penetrate buildings. In this paper, we utilize multihop relaying to overcome the signal blockage problem in the mmWave band. The multihop relay network comprises a source device, several relay devices and a destination device and uses device-to-device communication. Relay devices redirect the source signal to avoid the obstacles existing in the propagation environment. Each device amplifies and forwards the signal to the next device, such that a multihop link ensures the connectivity between the source device and the destination device. We consider that the relay devices and the destination device are affected by external interference and investigate the bit error probability (BEP) of this multihop mmWave system. Note that the study of the BEP allows quantifying the quality of communication and identifying the impact of different parameters on the system reliability. In this way, the system parameters, such as the powers allocated to different devices, can be tuned to maximize the link reliability. We derive exact expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM) and M-ary phase-shift keying (M-PSK) in terms of the multivariate Meijer’s G-function. Due to the complicated expression of the exact BEP, a tight lower-bound expression for the BEP is derived using a novel Mellin approach. Moreover, an asymptotic expression for the BEP at the high-SIR regime is derived and used to determine the diversity and the coding gain of the system. Additionally, we optimize the power allocation at different devices subject to a sum power constraint such that the BEP is minimized. Our analysis reveals that optimal power allocation allows achieving more than 3 dB gain compared to equal power allocation. This research work can serve as a framework for designing and optimizing mmWave multihop …

  15. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    International Nuclear Information System (INIS)

    Chau, H.F.

    2002-01-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5-0.1√(5)≅27.6%, thereby making it the most error resistant scheme known to date
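
    A one-line numeric check of the quoted threshold:

      import math
      # tolerable quantum channel bit error rate: 0.5 - 0.1*sqrt(5)
      print(0.5 - 0.1 * math.sqrt(5))     # ≈ 0.2764, i.e. about 27.6 %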

  16. Packet-Scheduling Algorithm by the Ratio of Transmit Power to the Transmission Bits in 3GPP LTE Downlink

    Directory of Open Access Journals (Sweden)

    Gil Gye-Tae

    2010-01-01

    Full Text Available The packet scheduler plays the central role in determining the overall performance of the 3GPP long-term evolution (LTE) based on packet-switching operation. In this paper, a novel minimum transmit power-based (MP) packet-scheduling algorithm is proposed that can achieve power-efficient transmission to the UEs while providing both system throughput gain and fairness improvement. The proposed algorithm is based on a new scheduling metric focusing on the ratio of the transmit power per bit and allocates the physical resource block (PRB) to the UE that requires the least ratio of the transmit power per bit. Through computer simulation, the performance of the proposed MP packet-scheduling algorithm is compared with the conventional packet-scheduling algorithms by two primary criteria: fairness and throughput. The simulation results show that the proposed algorithm outperforms the conventional algorithms in terms of the fairness and throughput.
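
    A minimal sketch of the scheduling rule described above: each PRB goes to the UE with the smallest required transmit power per deliverable bit. The power and bit figures below are made up for illustration and do not come from the paper:

      def schedule_prbs(required_power, deliverable_bits, n_prb):
          """Assign each PRB to the UE with the least transmit power per bit.

          required_power[u][p]  : power UE u needs on PRB p (hypothetical values)
          deliverable_bits[u][p]: bits UE u can carry on PRB p
          """
          allocation = {}
          n_ue = len(required_power)
          for p in range(n_prb):
              metric = [required_power[u][p] / deliverable_bits[u][p] for u in range(n_ue)]
              allocation[p] = min(range(n_ue), key=lambda u: metric[u])
          return allocation

      power = [[1.0, 2.0, 0.5], [1.5, 0.8, 0.9]]      # 2 UEs x 3 PRBs (made-up numbers)
      bits  = [[100, 150, 80],  [120, 100, 90]]
      print(schedule_prbs(power, bits, n_prb=3))       # -> {0: 0, 1: 1, 2: 0}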

  17. Average bit error probability of binary coherent signaling over generalized fading channels subject to additive generalized gaussian noise

    KAUST Repository

    Soury, Hamza

    2012-06-01

    This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed form expression in terms of the Fox's H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.

  18. 50 nm AlxOy resistive random access memory array program bit error reduction and high temperature operation

    Science.gov (United States)

    Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2014-01-01

    In order to decrease program bit error rate (BER) of array-level operation in AlxOy resistive random access memory (ReRAM), program BERs are compared by using 4 × 4 basic set and reset with verify methods on multiple 1024-bit-pages in 50 nm, mega-bit class ReRAM arrays. Further, by using an optimized reset method, 8.5% total BER reduction is obtained after 104 write cycles due to avoiding under-reset or weak reset and ameliorating over-reset caused wear-out. Then, under-set and over-set are analyzed by tuning the set word line voltage (VWL) of ±0.1 V. Moderate set current shows the best total BER. Finally, 2000 write cycles are applied at 125 and 25 °C, respectively. Reset BER increases 28.5% at 125 °C whereas set BER has little difference, by using the optimized reset method. By applying write cycles over a 25 to 125 to 25 °C temperature variation, immediate reset BER change can be found after the temperature transition.

  19. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

    In this letter, a new detector is proposed for an amplify-and-forward (AF) relaying system when communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
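
    A minimal sketch of kernel density estimation with a generalized Gaussian kernel, whose shape parameter beta gives a Gaussian-shaped kernel at beta = 2; the bandwidth, data and beta below are made up, and the letter's optimal window-width rule is not reproduced:

      import numpy as np
      from math import gamma

      def gg_kernel(u, beta=2.0, alpha=1.0):
          """Generalized Gaussian kernel; beta = 2 gives a Gaussian-shaped kernel."""
          c = beta / (2 * alpha * gamma(1.0 / beta))          # normalization constant
          return c * np.exp(-(np.abs(u) / alpha) ** beta)

      def kde(x_grid, samples, h=0.5, beta=2.0):
          """Kernel density estimate of `samples` evaluated on `x_grid`."""
          u = (x_grid[:, None] - samples[None, :]) / h
          return gg_kernel(u, beta=beta).mean(axis=1) / h

      rng = np.random.default_rng(0)
      samples = rng.normal(0.0, 1.0, size=500)
      grid = np.linspace(-4, 4, 9)
      print(np.round(kde(grid, samples, h=0.4, beta=2.0), 3))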

  20. 16-bit error detection and correction (EDAC) controller design using FPGA for critical memory applications

    International Nuclear Information System (INIS)

    Misra, M.K.; Sridhar, N.; Krishnakumar, B.; Ilango Sambasivan, S.

    2002-01-01

    Full text: Complex electronic systems require the utmost reliability; especially when the storage and retrieval of critical data demand faultless operation, the system designer must strive for the highest reliability possible, and extra effort must be expended to achieve it. Fortunately, not all systems must operate with these ultra-reliability requirements. The majority of systems operate in an area where system failure is not hazardous. But applications like nuclear reactors, medical systems and avionics are areas where system failure may prove to have harsh consequences. High-density memories generate errors in their stored data due to external disturbances like power supply surges, system noise, natural radiation etc. These errors are called soft errors or transient errors, since they do not cause permanent damage to the memory cell. Hard errors may also occur on system memory boards. These hard errors occur if one RAM component or RAM cell fails and is stuck at either 0 or 1. Although less frequent, hard errors may cause a complete system failure. These are the major problems associated with memories

  1. Extending the lifetime of a quantum bit with error correction in superconducting circuits

    Science.gov (United States)

    Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.

    2016-08-01

    Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0>f and |1>f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.

  2. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Matteo Berioli

    2007-05-01

    Full Text Available The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.

  3. Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link

    Directory of Open Access Journals (Sweden)

    Berioli Matteo

    2007-01-01

    Full Text Available The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.

  4. Novel ultra-wideband photonic signal generation and transmission featuring digital signal processing bit error rate measurements

    DEFF Research Database (Denmark)

    Gibbon, Timothy Braidwood; Yu, Xianbin; Tafur Monroy, Idelfonso

    2009-01-01

    We propose the novel generation of photonic ultra-wideband signals using an uncooled DFB laser. For the first time we experimentally demonstrate bit-for-bit DSP BER measurements for transmission of a 781.25 Mbit/s photonic UWB signal.

  5. A closed-form solution of the bit-error rate for optical wireless communication systems over atmospheric turbulence channels.

    Science.gov (United States)

    Dang, Anhong

    2011-02-14

    Atmospheric turbulence is a major limiting factor in an optical wireless communication (OWC) link. The turbulence distorts the phase of the propagating optical fields and limits the focusing capabilities of the telescope antennas. Hence, a detector array is required to capture the widespread signal energy in the focal-plane. This paper addresses the bit-error rate (BER) performance of optical wireless communication (OWC) systems employing a detector array in the presence of turbulence. Here, considering the gamma-gamma turbulence model, we propose a blind estimation scheme that provides the closed-form expression of the BER by exploiting the information of the data output of each pixel, which is based on the singular value decomposition of the sample matrix of the received signals after the code-matched filter. Instead of assuming spatially white additive noise, we consider the case where the noise spatial covariance matrix is unknown. The new method can be applied to either the single transmitter or the multi-transmitter cases. Simulation results for different Rytov variances are presented, which conform closely to the results of the proposed model.

  6. Bit error rate estimation for galvanic-type intra-body communication using experimental eye-diagram and jitter characteristics.

    Science.gov (United States)

    Li, Jia Wen; Chen, Xi Mei; Pun, Sio Hang; Mak, Peng Un; Gao, Yue Ming; Vai, Mang I; Du, Min

    2013-01-01

    Bit error rate (BER), which indicates the reliability of a communication channel, is one of the most important figures of merit in all kinds of communication systems, including intra-body communication (IBC). In order to learn more about the IBC channel, this paper presents a new method of BER estimation for galvanic-type IBC using experimental eye-diagram and jitter characteristics. To lay the foundation for our methodology, the fundamental relationships between eye-diagram, jitter and BER are first reviewed. Then experiments based on human lower arm IBC are carried out using a quadrature phase shift keying (QPSK) modulation scheme and a 500 kHz carrier frequency. In our IBC experiments, the symbol rate is from 10 Ksps to 100 Ksps, with two transmitted power settings, 0 dBm and -5 dBm. Finally, the BER results were obtained after calculation from the experimental data through the relationships among eye-diagram, jitter and BER. These results are then compared with theoretical values and they show good agreement, especially when the SNR is between 6 dB and 11 dB. Additionally, these results demonstrate that modeling the noise of the galvanic-type IBC channel as Additive White Gaussian Noise (AWGN), as assumed in the previous study, is applicable.
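
    For orientation, the standard Gaussian-noise relation between the eye-opening Q factor and BER is sketched below; the paper's estimator additionally folds in measured jitter, which this simple relation does not capture:

      import math

      def ber_from_q(q):
          """BER of a binary decision with Gaussian noise: 0.5 * erfc(Q / sqrt(2))."""
          return 0.5 * math.erfc(q / math.sqrt(2))

      for q in (3, 6, 7):
          print(f"Q = {q}: BER ≈ {ber_from_q(q):.2e}")   # roughly 1.3e-03, 9.9e-10, 1.3e-12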

  7. A novel unified expression for the capacity and bit error probability of wireless communication systems over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-07-01

    Analysis of the average binary error probabilities (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels have been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated, and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of the authors' newly derived results. © 1972-2012 IEEE.

  8. Introducing errors in progress ratios determined from experience curves

    NARCIS (Netherlands)

    van Sark, W.G.J.H.M.

    2008-01-01

    Progress ratios (PRs) derived from historical data in experience curves are used for forecasting development of many technologies as a means to model endogenous technical change in for instance climate–economy models. These forecasts are highly sensitive to uncertainties in the progress ratio. As a
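
    A minimal sketch of how a progress ratio is typically extracted from an experience curve (log-log regression of cost against cumulative production, with PR = 2^b); the data points are made up:

      import numpy as np

      # Experience curve: cost = a * cumulative_production**b, progress ratio PR = 2**b
      cum_prod = np.array([1, 2, 4, 8, 16, 32], dtype=float)      # hypothetical data
      cost     = np.array([100, 82, 67, 55, 45, 37], dtype=float)

      b, log_a = np.polyfit(np.log2(cum_prod), np.log2(cost), 1)
      progress_ratio = 2.0 ** b
      print(f"learning exponent b = {b:.3f}, progress ratio = {progress_ratio:.2f}")  # PR ≈ 0.82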

  9. Development of a DMILL radhard multiplexer for the ATLAS Glink optical link and radiation test with a custom Bit ERror Tester

    CERN Document Server

    Dzahini, D

    2001-01-01

    A high speed digital optical data link has been developed for the front-end readout of the ATLAS electromagnetic calorimeter. It is based on a commercial serialiser commonly known as Glink, and a vertical cavity surface emitting laser. To be compatible with the data interface requirements, the Glink must be coupled to a radhard multiplexer that has been designed in DMILL technology to reduce the impact of neutron and gamma radiation on the link performance. This multiplexer features very severe timing constraints related both to the front-end board output data and to the Glink control and input signals. The full link has been successfully neutron and proton radiation tested by means of a custom bit error tester. (7 refs).

  10. Error mechanisms of the oscillometric fixed-ratio blood pressure measurement method.

    Science.gov (United States)

    Liu, Jiankun; Hahn, Jin-Oh; Mukkamala, Ramakrishna

    2013-03-01

    The oscillometric fixed-ratio method is widely employed for non-invasive measurement of systolic and diastolic pressures (SP and DP) but is heuristic and prone to error. We investigated the accuracy of this method using an established mathematical model of oscillometry. First, to determine which factors materially affect the errors of the method, we applied a thorough parametric sensitivity analysis to the model. Then, to assess the impact of the significant parameters, we examined the errors over a physiologically relevant range of those parameters. The main findings of this model-based error analysis of the fixed-ratio method are that: (1) SP and DP errors drastically increase as the brachial artery stiffens over the zero trans-mural pressure regime; (2) SP and DP become overestimated and underestimated, respectively, as pulse pressure (PP) declines; (3) the impact of PP on SP and DP errors is more obvious as the brachial artery stiffens over the zero trans-mural pressure regime; and (4) SP and DP errors can be as large as 58 mmHg. Our final and main contribution is a comprehensive explanation of the mechanisms for these errors. This study may have important implications when using the fixed-ratio method, particularly in subjects with arterial disease.
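
    A minimal sketch of the fixed-ratio rule itself, using commonly assumed systolic/diastolic ratios of 0.55 and 0.85 and a synthetic oscillation envelope (both are illustrative assumptions, not values from the paper):

      import numpy as np

      def fixed_ratio_bp(cuff_pressure, envelope, sys_ratio=0.55, dia_ratio=0.85):
          """Estimate SP and DP from an oscillometric envelope with the fixed-ratio rule.

          cuff_pressure: monotonically decreasing cuff pressure samples (mmHg)
          envelope     : oscillation amplitude at each cuff pressure
          """
          peak = int(np.argmax(envelope))
          map_est = cuff_pressure[peak]                 # MAP taken at the envelope maximum
          above = slice(0, peak + 1)                    # systolic side: pressures above MAP
          sp = np.interp(sys_ratio * envelope[peak],
                         envelope[above], cuff_pressure[above])
          below = slice(peak, len(envelope))            # diastolic side: pressures below MAP
          dp = np.interp(dia_ratio * envelope[peak],
                         envelope[below][::-1], cuff_pressure[below][::-1])
          return sp, map_est, dp

      pressures = np.linspace(180, 40, 141)                       # cuff deflation, mmHg
      env = np.exp(-((pressures - 95.0) / 25.0) ** 2)             # synthetic envelope peaking at 95 mmHg
      sp, map_est, dp = fixed_ratio_bp(pressures, env)
      print(f"SP ≈ {sp:.0f}, MAP ≈ {map_est:.0f}, DP ≈ {dp:.0f} mmHg")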

  11. Error Propagation in Isometric Log-ratio Coordinates for Compositional Data: Theoretical and Practical Considerations.

    Science.gov (United States)

    Mert, Mehmet Can; Filzmoser, Peter; Hron, Karel

    2016-01-01

    Compositional data, as they typically appear in geochemistry in terms of concentrations of chemical elements in soil samples, need to be expressed in log-ratio coordinates before applying the traditional statistical tools if the relative structure of the data is of primary interest. There are different possibilities for this purpose, like centered log-ratio coefficients, or isometric log-ratio coordinates. In both the approaches, geometric means of the compositional parts are involved, and it is unclear how measurement errors or detection limit problems affect their presentation in coordinates. This problem is investigated theoretically by making use of the theory of error propagation. Due to certain limitations of this approach, the effect of error propagation is also studied by means of simulations. This allows to provide recommendations for practitioners on the amount of error and on the expected distortion of the results, depending on the purpose of the analysis.
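
    A small sketch of the two log-ratio representations mentioned above, computing centered log-ratio (clr) coefficients and isometric log-ratio pivot coordinates for a single made-up composition:

      import numpy as np

      def clr(x):
          """Centered log-ratio coefficients of a composition x."""
          g = np.exp(np.mean(np.log(x)))                  # geometric mean of the parts
          return np.log(x / g)

      def ilr_pivot(x):
          """Isometric log-ratio (pivot) coordinates of a D-part composition."""
          x = np.asarray(x, dtype=float)
          D = len(x)
          z = np.empty(D - 1)
          for i in range(D - 1):
              gm_rest = np.exp(np.mean(np.log(x[i + 1:])))    # geometric mean of remaining parts
              z[i] = np.sqrt((D - i - 1) / (D - i)) * np.log(x[i] / gm_rest)
          return z

      comp = np.array([0.1, 0.3, 0.6])                     # made-up concentrations (closed to 1)
      print("clr:", np.round(clr(comp), 3))                # clr coefficients sum to zero
      print("ilr:", np.round(ilr_pivot(comp), 3))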

  12. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam propagating through weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.

  13. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    Science.gov (United States)

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

    In this paper, the performance of underwater wireless optical communication (UWOC) links built from partially coherent flat-topped (PCFT) array laser beams has been investigated in detail. Because they provide high power, array laser beams are employed to increase the range of UWOC links. To characterize the effects of oceanic turbulence on the propagation behavior of the considered beam, an analytical expression for the cross-spectral density matrix elements and a semi-analytical one for the fourth-order statistical moment have been derived using the extended Huygens-Fresnel principle. Then, based on these expressions, the on-axis scintillation index of the beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of some source factors and turbulent ocean parameters on the propagation behavior of the scintillation index and the BER have been studied in detail. The results indicate that, in comparison with the Gaussian array beam, when the source size of the beamlets is larger than the first Fresnel zone, the PCFT array laser beam with the higher flatness order has a lower scintillation index and hence a lower BER. Specifically, in the sense of scintillation index reduction, PCFT array laser beams offer a considerable benefit compared with single PCFT or Gaussian laser beams and also Gaussian array beams. All simulation results are presented graphically and analyzed in detail.

  14. Measurement error analysis for polarization extinction ratio of multifunctional integrated optic chips.

    Science.gov (United States)

    Zhang, Haoliang; Yang, Jun; Li, Chuang; Yu, Zhangjun; Yang, Zhe; Yuan, Yonggui; Peng, Feng; Li, Hanyang; Hou, Changbo; Zhang, Jianzhong; Yuan, Libo; Xu, Jianming; Zhang, Chao; Yu, Quanfu

    2017-08-20

    Measurement error for the polarization extinction ratio (PER) of a multifunctional integrated optic chip (MFIOC) utilizing white light interferometry was analyzed. Three influence factors derived from the all-fiber device (or optical circuit) under test were demonstrated to be the main error sources, including: 1) the axis-alignment angle (AA) of the connection point between the extended polarization-maintaining fiber (PMF) and the chip PMF pigtail; 2) the oriented angle (OA) of the linear polarizer; and 3) the birefringence dispersion of the PMF and the MFIOC chip. Theoretical calculations and experimental results indicated that by controlling the AA range within 0°±5°, keeping the OA range within 45°±2° and combining this with a dispersion compensation process, the maximal PER measurement error can be limited to under 1.4 dB, with a 3σ uncertainty of 0.3 dB. The variations of the birefringence dispersion effect versus PMF length were also discussed to further confirm the validity of the dispersion compensation. An MFIOC with a PER of ∼50 dB was experimentally tested, and the total measurement error was calculated to be ∼0.7 dB, which proved the effectiveness of the proposed error reduction methods. We believe that these methods are able to facilitate high-accuracy PER measurement.

  15. Research on bit synchronization based on GNSS

    Science.gov (United States)

    Yu, Huanran; Liu, Yi-jun

    2017-05-01

    The signals transmitted by GPS satellites are composed of three components: the carrier, the pseudocode, and the data code. Processing a received signal involves acquisition, tracking, bit synchronization, frame synchronization, navigation message extraction, observation extraction, and position and velocity calculation, among which bit synchronization is of particular importance. Improving the accuracy of bit synchronization and shortening the bit synchronization time help the receiver compute a position fix and recover the information carried by the satellite signal more reliably. How to improve bit synchronization performance even under weak-signal conditions is therefore the problem addressed here. We adopt a method based on the minima of polymorphic energy accumulation to find the bit synchronization point, and computer simulations show that even at extremely weak signal power this method retains good synchronization performance, achieving a high bit-edge detection rate and a near-optimal bit error rate.
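
    A widely used way to locate the data-bit edge is to accumulate energy over the 20 candidate 1-ms offsets of the C/A code period and pick the best-scoring offset. The sketch below implements that generic energy-accumulation search; it is related to, but not identical to, the polymorphic energy accumulation minima method described above, and the synthetic test signal is an assumption.

```python
# Illustrative energy-accumulation search for the GPS C/A data-bit edge over
# the 20 candidate 1-ms offsets; not the paper's exact algorithm.
import numpy as np

def find_bit_edge(prompt_1ms, n_bits=50):
    """prompt_1ms: 1-ms prompt correlator outputs, length >= 20*n_bits + 19."""
    energy = np.zeros(20)
    for offset in range(20):                        # candidate bit-edge positions
        x = prompt_1ms[offset:offset + 20 * n_bits]
        coh = x.reshape(n_bits, 20).sum(axis=1)     # coherent sum over each assumed bit
        energy[offset] = np.sum(np.abs(coh) ** 2)   # accumulated energy
    return int(np.argmax(energy))                   # correct alignment maximizes energy

# Quick self-test with a synthetic noisy bit stream whose true edge offset is 7.
rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=60).repeat(20)
signal = np.r_[np.zeros(7), bits][:20 * 50 + 19] + rng.normal(0, 0.8, 20 * 50 + 19)
print(find_bit_edge(signal))                        # expected: 7
```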

  16. Computational Package for Copolymerization Reactivity Ratio Estimation: Improved Access to the Error-in-Variables-Model

    Directory of Open Access Journals (Sweden)

    Alison J. Scott

    2018-01-01

    The error-in-variables-model (EVM) is the most statistically correct non-linear parameter estimation technique for reactivity ratio estimation. However, many polymer researchers are unaware of the advantages of EVM and therefore still choose to use rather erroneous or approximate methods. The procedure is straightforward but it is often avoided because it is seen as mathematically and computationally intensive. Therefore, the goal of this work is to make EVM more accessible to all researchers through a series of focused case studies. All analyses employ a MATLAB-based computational package for copolymerization reactivity ratio estimation. The basis of the package is previous work in our group over many years. This version is an improvement, as it ensures wider compatibility and enhanced flexibility with respect to copolymerization parameter estimation scenarios that can be considered.

  17. A digital divider with extension bits for position-sensitive detectors

    International Nuclear Information System (INIS)

    Koike, Masaki; Hasegawa, Ken-ichi

    1988-01-01

    Digitizing errors produced in a digital divider for position-sensitive detectors have been reduced by adding extension bits to data bits. A relation between the extension bits and the data bits to obtain perfect position uniformity is also given. A digital divider employing 10 bit ADCs and 6 bit extension circuits has been constructed. (orig.)
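
    The following toy simulation suggests how appending extension bits below the data bits can smooth the quantization of the digital division A/(A+B) and improve position uniformity; the constant-total-charge detector model, bit widths and event statistics are assumptions for illustration and not the published circuit.

```python
# Toy simulation (assumptions: charge-division detector with constant total
# charge, uniform illumination, random low-order extension bits) of how
# extension bits appended to the data bits smooth a digital divider's output.
import numpy as np

rng = np.random.default_rng(0)
n_events, n_pos_bits = 500_000, 8

def position_histogram(n_ext):
    t = rng.uniform(0.02, 0.98, n_events)          # true positions
    A = np.floor(750 * t).astype(np.int64)         # digitized charges (toy 10-bit scale)
    B = np.floor(750 * (1 - t)).astype(np.int64)
    if n_ext:                                      # append random extension bits
        A = (A << n_ext) | rng.integers(0, 1 << n_ext, n_events)
        B = (B << n_ext) | rng.integers(0, 1 << n_ext, n_events)
    codes = (A << n_pos_bits) // (A + B)           # digital divider output
    return np.bincount(codes, minlength=1 << n_pos_bits)

for n_ext in (0, 6):
    h = position_histogram(n_ext)[8:-8]            # ignore edge bins
    print(f"extension bits = {n_ext}: max/min bin count = {h.max() / h.min():.2f}")
```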

  18. Intermediate-mass-ratio inspirals in the Einstein Telescope. II. Parameter estimation errors

    International Nuclear Information System (INIS)

    Huerta, E. A.; Gair, Jonathan R.

    2011-01-01

    We explore the precision with which the Einstein Telescope will be able to measure the parameters of intermediate-mass-ratio inspirals, i.e., the inspirals of stellar mass compact objects into intermediate-mass black holes (IMBHs). We calculate the parameter estimation errors using the Fisher Matrix formalism and present results of Monte Carlo simulations of these errors over choices for the extrinsic parameters of the source. These results are obtained using two different models for the gravitational waveform which were introduced in paper I of this series. These two waveform models include the inspiral, merger, and ringdown phases in a consistent way. One of the models, based on the transition scheme of Ori and Thorne [A. Ori and K. S. Thorne, Phys. Rev. D 62, 124022 (2000)], is valid for IMBHs of arbitrary spin; whereas, the second model, based on the effective-one-body approach, has been developed to cross-check our results in the nonspinning limit. In paper I of this series, we demonstrated the excellent agreement in both phase and amplitude between these two models for nonspinning black holes, and that their predictions for signal-to-noise ratios are consistent to within 10%. We now use these waveform models to estimate parameter estimation errors for binary systems with masses 1.4M⊙+100M⊙, 10M⊙+100M⊙, 1.4M⊙+500M⊙, and 10M⊙+500M⊙ and various choices for the spin of the central IMBH. Assuming a detector network of three Einstein Telescopes, the analysis shows that for a 10M⊙ compact object inspiralling into a 100M⊙ IMBH with spin q=0.3, detected with a signal-to-noise ratio of 30, we should be able to determine the compact object and IMBH masses, and the IMBH spin magnitude to fractional accuracies of ∼10^-3, ∼10^-3.5, and ∼10^-3, respectively. We also expect to determine the location of the source in the sky and the luminosity distance to within ∼0.003 steradians and ∼10%, respectively. We also compute results for
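
    The Fisher-matrix formalism mentioned above estimates 1-sigma parameter errors from the inverse of the matrix of waveform-derivative inner products. The generic sketch below uses numerical derivatives and a white-noise inner product on a toy signal model; real analyses use detector noise power spectral densities and the full waveform models of paper I.

```python
# Generic Fisher-matrix error sketch (white-noise inner product, numerical
# derivatives); an illustration of the formalism only, not the ET analysis.
import numpy as np

def fisher_errors(waveform, theta, dtheta, sigma_noise):
    """waveform(theta) -> sampled signal h; returns sqrt(diag(F^-1))."""
    theta = np.asarray(theta, float)
    derivs = []
    for i, d in enumerate(dtheta):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += d
        tm[i] -= d
        derivs.append((waveform(tp) - waveform(tm)) / (2 * d))   # central difference
    derivs = np.array(derivs)
    F = derivs @ derivs.T / sigma_noise**2        # F_ij = (dh/dtheta_i, dh/dtheta_j)
    return np.sqrt(np.diag(np.linalg.inv(F)))

# toy example: sinusoid with amplitude and frequency parameters
t = np.linspace(0, 1, 4000)
model = lambda p: p[0] * np.sin(2 * np.pi * p[1] * t)
print(fisher_errors(model, [1.0, 50.0], [1e-4, 1e-4], sigma_noise=0.1))
```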

  19. Bit and Power Loading Approach for Broadband Multi-Antenna OFDM System

    DEFF Research Database (Denmark)

    Rahman, Muhammad Imadur; Das, Suvra S.; Wang, Yuanye

    2007-01-01

    In this work, we have studied bit and power allocation strategies for multi-antenna assisted Orthogonal Frequency Division Multiplexing (OFDM) systems and investigated the impact of different rates of bit and power allocation on various multi-antenna diversity schemes. It is observed that, if we cannot find the exact Signal to Noise Ratio (SNR) thresholds due to different reasons, such as reduced Link Adaptation (LA) rate, Channel State Information (CSI) error, feedback delay etc., it is better to fix the transmit power across all sub-channels to guarantee the target Frame Error Rate (FER). Otherwise, it is possible to use adaptive power distribution to save power, which can be used for other purposes, or to increase the throughput of the system by transmitting a higher number of bits. We also observed that in some scenarios and in some system conditions, some form of simultaneous bit and power ...
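
    A standard ingredient of adaptive power distribution across OFDM sub-channels is water-filling on the per-subcarrier carrier-to-noise ratios; the sketch below (bisection on the water level, illustrative channel gains) shows the adaptive alternative to the fixed equal-power strategy discussed in the abstract.

```python
# Water-filling power allocation sketch; channel gains are illustrative.
import numpy as np

def waterfill(gains, noise, p_total, iters=60):
    """Per-subcarrier powers maximizing sum log2(1 + p*g/n), with sum(p) = p_total."""
    inv_cnr = noise / gains                      # "floor height" of each subchannel
    lo, hi = 0.0, inv_cnr.max() + p_total        # bracket for the water level mu
    for _ in range(iters):                       # bisection on mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv_cnr, 0.0)
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return np.maximum(0.5 * (lo + hi) - inv_cnr, 0.0)

rng = np.random.default_rng(1)
h = rng.normal(size=64) + 1j * rng.normal(size=64)     # illustrative Rayleigh channel
p = waterfill(np.abs(h) ** 2, noise=np.ones(64), p_total=64.0)
print(round(p.sum(), 3), round(np.sum(np.log2(1 + p * np.abs(h) ** 2)), 2))
```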

  20. Efficient Bit-to-Symbol Likelihood Mappings

    Science.gov (United States)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8- percent reduction in overall area relative to the prior design.
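
    The generic symbol-to-bit mapping that such an algorithm accelerates can be written as a max-log log-likelihood ratio computation per bit position. The sketch below does this for a Gray-labelled 16-QAM constellation; the labelling and noise model are assumptions, and the reduced-complexity algorithm of the abstract is not reproduced.

```python
# Minimal max-log bit-LLR mapper for an (assumed) Gray-labelled 16-QAM
# constellation; illustrates the generic symbol-to-bit likelihood mapping.
import numpy as np
from itertools import product

gray2 = [0b00, 0b01, 0b11, 0b10]
levels = [-3, -1, 1, 3]
symbols, labels = [], []
for (gi, i_lev), (gq, q_lev) in product(zip(gray2, levels), repeat=2):
    symbols.append(i_lev + 1j * q_lev)
    labels.append((gi << 2) | gq)                 # 4 bits per symbol
symbols = np.array(symbols) / np.sqrt(10)         # unit average energy
labels = np.array(labels)

def bit_llrs(y, noise_var):
    """Max-log LLRs (positive favours bit 0) for one received sample y."""
    metric = -np.abs(y - symbols) ** 2 / noise_var
    llrs = []
    for b in range(4):
        mask = (labels >> b) & 1
        llrs.append(metric[mask == 0].max() - metric[mask == 1].max())
    return llrs[::-1]                             # MSB first

print(bit_llrs(0.3 - 0.9j, noise_var=0.1))
```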

  1. A Novel Digital Background Calibration Technique for 16 bit SHA-less Multibit Pipelined ADC

    Directory of Open Access Journals (Sweden)

    Swina Narula

    2016-01-01

    In this paper, a 16-bit, 125 MS/s multibit pipelined ADC with digital background calibration is presented. In order to achieve low power, an SHA-less front end is used with multibit stages. The first and second stages are implemented as 3.5-bit stages, the third to seventh stages are 2.5-bit stages, and the last stage is a 3-bit flash ADC. After bit alignment and truncation of the total 19 bits, 16 bits are used as the final digital output. To remove the linear gain error of the residue amplifier and the capacitor mismatch error precisely, a digital background calibration technique is used, which is a combination of signal-dependent dithering (SDD) and a butterfly shuffler. To improve the settling time of the residue amplifier, a special voltage-separation circuit is used. With the proposed digital background calibration technique, the spurious-free dynamic range (SFDR) has been improved to 97.74 dB @ 30 MHz and 88.9 dB @ 150 MHz, and the signal-to-noise and distortion ratio (SNDR) has been improved to 79.77 dB @ 30 MHz and 73.5 dB @ 150 MHz. The pipelined ADC has been implemented in a 0.18 μm CMOS process with a 1.8 V supply. The total power consumption of the proposed ADC is 300 mW.

  2. 8 bit computer

    OpenAIRE

    Jankovskij, Robert

    2018-01-01

    In this paper the author examines the structure of an eight-bit computer and its components, looking at their structure and their pros and cons. An eight-bit computer which can execute basic instructions and arithmetic operations, such as addition and subtraction of eight-bit numbers, is built out of integrated circuits. Data transfers between the computer components are monitored and reviewed.

  3. KEAMANAN CITRA DENGAN WATERMARKING MENGGUNAKAN PENGEMBANGAN ALGORITMA LEAST SIGNIFICANT BIT

    Directory of Open Access Journals (Sweden)

    Kurniawan Kurniawan

    2015-01-01

    Image security is a process for protecting digital images. One method of securing a digital image is watermarking using the Least Significant Bit (LSB) algorithm. The main concept of image security using the LSB algorithm is to replace bit values of the image at specific locations so that a pattern is created. The pattern resulting from replacing the bit values of the image is called the watermark. Embedding a watermark in a digital image using the LSB algorithm is conceptually simple, so the embedded information is easily lost when attacked, for example by noise or compression. A modification, such as a development of the LSB algorithm, is therefore needed to reduce the distortion of the watermark information under such attacks. This research is divided into six processes: color extraction from the cover image, busy-area search, watermark embedding, computation of the embedding accuracy, watermark extraction, and computation of the extraction accuracy. Color extraction obtains the blue color component of the cover image. The watermark information is embedded in a busy area, found by searching for the region of the cover image with the greatest number of elements. The watermark image is then embedded into the cover image to produce the watermarked image using several developments of the LSB algorithm, and the embedding accuracy is evaluated by computing the Peak Signal to Noise Ratio (PSNR) value. Before the watermark is extracted, the watermarked image is tested by adding noise and compressing it into JPG format. The accuracy of the extraction result is evaluated by computing the Bit Error Rate (BER) value.
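
    A plain single-bit-plane LSB embedding, together with the PSNR and BER figures of merit mentioned above, can serve as a baseline against which developed LSB variants are compared; in the sketch below, a random grayscale array stands in for the blue channel of a cover image and the noise attack is illustrative.

```python
# Baseline single-bit-plane LSB embedding with PSNR and BER checks; a random
# grayscale array stands in for the blue channel of a real cover image.
import numpy as np

def embed_lsb(cover, watermark_bits):
    stego = cover.copy()
    flat = stego.reshape(-1)
    n = watermark_bits.size
    flat[:n] = (flat[:n] & ~np.uint8(1)) | watermark_bits   # overwrite LSBs
    return stego

def extract_lsb(stego, n_bits):
    return stego.reshape(-1)[:n_bits] & 1

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
bits = rng.integers(0, 2, size=1000, dtype=np.uint8)
stego = embed_lsb(cover, bits)
noisy = np.clip(stego.astype(int) + rng.normal(0, 2, stego.shape), 0, 255).astype(np.uint8)
ber = np.mean(extract_lsb(noisy, bits.size) != bits)
print(f"PSNR = {psnr(cover, stego):.1f} dB, BER after noise attack = {ber:.3f}")
```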

  4. Performance Ratios of Grid Connected Photovoltaic Systems and Theory of Errors

    Directory of Open Access Journals (Sweden)

    Javier Vilariño-García

    2016-07-01

    A detailed analysis is presented of the different levels of dynamic performance of grid-connected photovoltaic systems and their interface, based on the development of a block diagram explaining the course of energy transformation from the solar radiation incident on the solar modules until it becomes useful energy available in the mains. Indexes defined by the Spanish standard UNE-EN 61724 (Monitoring photovoltaic systems: Guidelines for measurement, data exchange and analysis) are explained from the basic fundamentals of block algebra and the transfer function of linear systems. The accuracy requirements demanded by the aforementioned standard for measuring these parameters are discussed in terms of the theory of errors and the real limits of the results obtained.

  5. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Bit corruption correlation and autocorrelation in a stochastic binary nano-bit system

    Science.gov (United States)

    Sa-nguansin, Suchittra

    2014-10-01

    The corruption process of a binary nano-bit model resulting from an interaction with N stochastically-independent Brownian agents (BAs) is studied with the help of Monte-Carlo simulations and analytic continuum theory to investigate the data corruption process through the measurement of the spatial two-point correlation and the autocorrelation of bit corruption at the origin. By taking into account a more realistic correlation between bits, this work will contribute to the understanding of the soft error or the corruption of data stored in nano-scale devices.

  7. Bit-padding information guided channel hopping

    KAUST Repository

    Yang, Yuli

    2011-02-01

    In the context of multiple-input multiple-output (MIMO) communications, we propose a bit-padding information guided channel hopping (BP-IGCH) scheme which, building on the IGCH concept, breaks the limitation that the number of transmit antennas has to be a power of two. The proposed scheme prescribes different bit-lengths to be mapped onto the indices of the transmit antennas and then uses a padding technique to avoid error propagation. Numerical results and comparisons, on both the capacity and the bit error rate performances, are provided and show the advantage of the proposed scheme. The BP-IGCH scheme not only offers lower complexity to realize the design flexibility, but also achieves better performance. © 2011 IEEE.

  8. Ratio

    Science.gov (United States)

    Webster, Nathan A. S.; Pownceby, Mark I.; Madsen, Ian C.; Studer, Andrew J.; Manuel, James R.; Kimpton, Justin A.

    2014-12-01

    Effects of basicity, B (CaO:SiO2 ratio), on the thermal range, concentration, and formation mechanisms of silico-ferrite of calcium and aluminum (SFCA) and SFCA-I iron ore sinter bonding phases have been investigated using an in situ synchrotron X-ray diffraction-based methodology with subsequent Rietveld refinement-based quantitative phase analysis. SFCA and SFCA-I phases are the key bonding materials in iron ore sinter, and improved understanding of the effects of processing parameters such as basicity on their formation and decomposition may assist in improving efficiency of industrial iron ore sintering operations. Increasing basicity significantly increased the thermal range of SFCA-I, from 1363 K to 1533 K (1090 °C to 1260 °C) for a mixture with B = 2.48, to ~1339 K to 1535 K (1066 °C to 1262 °C) for a mixture with B = 3.96, and to ~1323 K to 1593 K (1050 °C to 1320 °C) at B = 4.94. Increasing basicity also increased the amount of SFCA-I formed, from 18 wt pct for the mixture with B = 2.48 to 25 wt pct for the B = 4.94 mixture. Higher basicity of the starting sinter mixture will, therefore, increase the amount of SFCA-I, considered to be the more desirable of the two phases. Basicity did not appear to significantly influence the formation mechanism of SFCA-I. It did, however, affect the formation mechanism of SFCA, with the decomposition of SFCA-I coinciding with the formation of a significant amount of additional SFCA in the B = 2.48 and 3.96 mixtures but only a minor amount in the highest basicity mixture. In situ neutron diffraction enabled characterization of the behavior of magnetite after melting of SFCA had produced a magnetite plus melt phase assemblage.

  9. High performance 14-bit pipelined redundant signed digit ADC

    Science.gov (United States)

    Narula, Swina; Pandey, Sujata

    2016-03-01

    A novel architecture of a pipelined redundant-signed-digit analog to digital converter (RSD-ADC) is presented featuring a high signal to noise ratio (SNR), spurious free dynamic range (SFDR) and signal to noise plus distortion (SNDR) with efficient background correction logic. The proposed ADC architecture shows high accuracy with a high speed circuit and efficient utilization of the hardware. This paper demonstrates the functionality of the digital correction logic of a 14-bit pipelined ADC at 1.5 bits/stage. This prototype of the ADC architecture accounts for capacitor mismatch, comparator offset and finite Op-Amp gain error in the MDAC (residue amplification circuit) stages. With the proposed architecture of the ADC, the SNDR obtained is 85.89 dB, the SNR is 85.9 dB and the SFDR obtained is 102.8 dB at a sample rate of 100 MHz. This novel architecture of digital correction logic is transparent to the overall system, which is demonstrated by using the 14-bit pipelined ADC. After a latency of 14 clocks, the digital output is available at every clock pulse. To describe the circuit behavior of the ADC, VHDL and MATLAB programs are used. The proposed architecture is also capable of reducing the digital hardware. Silicon area is also reduced, as is the complexity of the design.

  10. High performance 14-bit pipelined redundant signed digit ADC

    International Nuclear Information System (INIS)

    Narula, Swina; Pandey, Sujata

    2016-01-01

    A novel architecture of a pipelined redundant-signed-digit analog to digital converter (RSD-ADC) is presented featuring a high signal to noise ratio (SNR), spurious free dynamic range (SFDR) and signal to noise plus distortion (SNDR) with efficient background correction logic. The proposed ADC architecture shows high accuracy with a high speed circuit and efficient utilization of the hardware. This paper demonstrates the functionality of the digital correction logic of a 14-bit pipelined ADC at 1.5 bits/stage. This prototype of the ADC architecture accounts for capacitor mismatch, comparator offset and finite Op-Amp gain error in the MDAC (residue amplification circuit) stages. With the proposed architecture of the ADC, the SNDR obtained is 85.89 dB, the SNR is 85.9 dB and the SFDR obtained is 102.8 dB at a sample rate of 100 MHz. This novel architecture of digital correction logic is transparent to the overall system, which is demonstrated by using the 14-bit pipelined ADC. After a latency of 14 clocks, the digital output is available at every clock pulse. To describe the circuit behavior of the ADC, VHDL and MATLAB programs are used. The proposed architecture is also capable of reducing the digital hardware. Silicon area is also reduced, as is the complexity of the design. (paper)

  11. ANALISIS PENGARUH REKONFIGURASI GROUNDING KABEL POWER 20 kV TERHADAP ERROR RATIO CURRENT TRANSFORMERS PELANGGAN TEGANGAN MENENGAH DI HOTEL GOLDEN TULIP SEMINYAK

    Directory of Open Access Journals (Sweden)

    Kadek Amerta Yasa

    2017-10-01

    Hotel Golden Tulip Seminyak is a highly significant medium-voltage electricity customer. In this study at the hotel, we found that the error ratio of the current transformers before the grounding reconfiguration was around -67%, with power losses of -214,809.00 W at a load percentage of 20%. The error ratio affects the current read on the secondary side of the transformer and also affects the power losses. The measure taken to reduce the error ratio and the power losses was a grounding reconfiguration, namely moving the grounding tap position of the 20 kV power cable so that it re-enters through the primary side of the current transformer. After the grounding reconfiguration, the error ratio obtained was around -0.05% and the power losses were -20.18 W at a load percentage of 20%.

  12. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to those of a 16-bit HEVC codec.
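
    The byte-split step described above is straightforward to express in code: the 16-bit image is separated into an MSB image and an LSB image, coded independently, and recombined after decoding. The sketch below shows only the lossless split/recombine step; the codec calls and the rate-allocation analysis of the paper are omitted.

```python
# Sketch of the MSB/LSB byte split for 16-bit images; codecs omitted.
import numpy as np

def split_msb_lsb(img16):
    img16 = img16.astype(np.uint16)
    msb = (img16 >> 8).astype(np.uint8)      # most significant bytes
    lsb = (img16 & 0xFF).astype(np.uint8)    # least significant bytes
    return msb, lsb

def recombine(msb, lsb):
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

ir = np.random.default_rng(0).integers(0, 1 << 16, size=(128, 128), dtype=np.uint16)
msb, lsb = split_msb_lsb(ir)
assert np.array_equal(recombine(msb, lsb), ir)   # lossless round trip before coding
```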

  13. Practical Relativistic Bit Commitment.

    Science.gov (United States)

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Wehner, S; Zbinden, H

    2015-07-17

    Bit commitment is a fundamental cryptographic primitive in which Alice wishes to commit a secret bit to Bob. Perfectly secure bit commitment between two mistrustful parties is impossible through an asynchronous exchange of quantum information. Perfect security is, however, possible when Alice and Bob each split into several agents exchanging classical information at times and locations suitably chosen to satisfy specific relativistic constraints. In this Letter we first revisit a previously proposed scheme [C. Crépeau et al., Lect. Notes Comput. Sci. 7073, 407 (2011)] that realizes bit commitment using only classical communication. We prove that the protocol is secure against quantum adversaries for a duration limited by the light-speed communication time between the locations of the agents. We then propose a novel multiround scheme based on finite-field arithmetic that extends the commitment time beyond this limit, and we prove its security against classical attacks. Finally, we present an implementation of these protocols using dedicated hardware and we demonstrate a 2 ms-long bit commitment over a distance of 131 km. By positioning the agents on antipodal points on the surface of Earth, the commitment time could possibly be extended to 212 ms.

  14. 32-Bit FASTBUS computer

    International Nuclear Information System (INIS)

    Blossom, J.M.; Hong, J.P.; Kellner, R.G.

    1985-01-01

    Los Alamos National Laboratory is building a 32-bit FASTBUS computer using the NATIONAL SEMICONDUCTOR 32032 central processing unit (CPU) and containing 16 million bytes of memory. The board can act both as a FASTBUS master and as a FASTBUS slave. It contains a custom direct memory access (DMA) channel which can perform 80 million bytes per second block transfers across the FASTBUS.

  15. Efficient computation of free energy of crystal phases due to external potentials by error-biased Bennett acceptance ratio method.

    Science.gov (United States)

    Apte, Pankaj A

    2010-02-28

    Free energy of crystal phases is commonly evaluated by thermodynamic integration along a reversible path that involves an external potential. However, this method suffers from the hysteresis caused by the differences in the center of mass position of the crystal phase in the presence and absence of the external potential. To alleviate this hysteresis, a constraint on the translational degrees of freedom of the crystal phase is imposed along the path and subsequently a correction term is added to the free energy to account for such a constraint. The estimation of the correction term is often computationally expensive. In this work, we propose a new methodology, termed as error-biased Bennett acceptance ratio method, which effectively solves this problem without the need to impose any constraint. This method is simple to implement and it does not require any modification to the path. We show the applicability of this method in the computation of crystal-melt interfacial energy by cleaving wall method [R. L. Davidchack and B. B. Laird, J. Chem. Phys. 118, 7651 (2003)] and bulk crystal-melt free energy difference by constrained fluid lambda-integration method [G. Grochola, J. Chem. Phys. 120, 2122 (2004)] for a model potential of silicon.
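
    For reference, the standard (unbiased) Bennett acceptance ratio estimate solves a one-dimensional self-consistency equation in the free energy difference; the sketch below does so for toy Gaussian work distributions in units of kT. The error-biased weighting proposed in the paper is not reproduced here, and the bracketing interval is an assumption.

```python
# Standard (unbiased) Bennett acceptance ratio estimator on toy data; the
# paper's error-biased variant modifies the weighting and is not shown.
# Work values are in units of kT (forward: 0 -> 1, reverse: 1 -> 0).
import numpy as np
from scipy.optimize import brentq

def bar_delta_f(w_forward, w_reverse):
    wf, wr = np.asarray(w_forward, float), np.asarray(w_reverse, float)
    M = np.log(len(wf) / len(wr))
    fermi = lambda x: 1.0 / (1.0 + np.exp(x))

    def imbalance(df):
        return fermi(M + wf - df).sum() - fermi(-M + wr + df).sum()

    lo, hi = -100.0, 100.0                     # assumed bracketing interval (kT)
    return brentq(imbalance, lo, hi)

rng = np.random.default_rng(0)
true_df, sigma = 3.0, 1.5                      # toy Gaussian work distributions
wf = rng.normal(true_df + sigma**2 / 2, sigma, 5000)
wr = rng.normal(-true_df + sigma**2 / 2, sigma, 5000)
print(bar_delta_f(wf, wr))                     # should be close to 3.0
```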

  16. Low bit rate video coding

    African Journals Online (AJOL)

    eobe

    Variable length bit rate (VLBR) broadly encompasses video coding which mandates a temporal frequency of 10 frames per ...

  17. A note on errors and signal to noise ratio of binary cross-correlation measurements of system impulse response

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1964-02-01

    The sources of error in the measurement of system impulse response using test signals of a discrete interval binary nature are considered. Methods of correcting for the errors due to theoretical imperfections are given and the variance of the estimate of the system impulse response due to random noise is determined. Several topics related to the main topic are considered, e.g. determination of a theoretical model from experimental results. General conclusions about the magnitude of the errors due to the theoretical imperfections are made. (author)
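
    The measurement principle discussed above, i.e. cross-correlating the output of a system with a discrete-interval binary test signal to recover its impulse response, can be sketched as follows; the maximum-length sequence, the toy FIR system and the noise level are assumptions for illustration.

```python
# Sketch: estimate an impulse response by circular cross-correlation with a
# maximum-length binary sequence; a toy FIR system stands in for the plant.
import numpy as np
from scipy.signal import max_len_seq, lfilter

nbits = 10
x = 2.0 * max_len_seq(nbits)[0] - 1.0           # +/-1 MLS of length N = 2**nbits - 1
N = x.size
h_true = np.array([0.0, 0.5, 0.3, 0.15, 0.05])  # toy impulse response
y = lfilter(h_true, [1.0], np.tile(x, 2))[N:]   # steady-state (circular) response
y += 0.05 * np.random.default_rng(0).normal(size=N)

# Circular cross-correlation via FFT; for an MLS, R_xx ~ N*delta, so dividing
# by N gives an approximately unbiased impulse-response estimate.
h_est = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x)))) / N
print(h_est[:6].round(3), h_true)
```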

  18. Quantum dynamics of quantum bits

    International Nuclear Information System (INIS)

    Nguyen, Bich Ha

    2011-01-01

    The theory of coherent oscillations of the matrix elements of the density matrix of the two-state system as a quantum bit is presented. Different calculation methods are elaborated in the case of a free quantum bit. Then the most appropriate methods are applied to the study of the density matrices of the quantum bits interacting with a classical pumping radiation field as well as with the quantum electromagnetic field in a single-mode microcavity. The theory of decoherence of a quantum bit in Markovian approximation is presented. The decoherence of a quantum bit interacting with monoenergetic photons in a microcavity is also discussed. The content of the present work can be considered as an introduction to the study of the quantum dynamics of quantum bits. (review)

  19. The fission cross section ratios and error analysis for ten thorium, uranium, neptunium and plutonium isotopes at 14.74 MeV neutron energy

    International Nuclear Information System (INIS)

    Meadows, J.W.

    1987-03-01

    The error information from the recent measurements of the fission cross section ratios of nine isotopes, 230Th, 232Th, 233U, 234U, 236U, 238U, 237Np, 239Pu, and 242Pu, relative to 235U at 14.74 MeV neutron energy was used to calculate their correlations. The remaining 36 non-trivial and non-reciprocal cross section ratios and their errors were determined and compared to evaluated (ENDF/B-V) values. There are serious differences but it was concluded that the reduction of three of the evaluated cross sections would remove most of them. The cross sections to be reduced are 230Th - 13%, 237Np - 9.6% and 239Pu - 7.6%. 5 refs., 6 tabs

  20. Test results judgment method based on BIT faults

    Directory of Open Access Journals (Sweden)

    Wang Gang

    2015-12-01

    Built-in test (BIT) is responsible for equipment fault detection, so the correctness of the test data directly influences the diagnosis results. Equipment suffers all kinds of environmental stresses, such as temperature, vibration, and electromagnetic stress. As an embedded testing facility, BIT also suffers from these stresses, which cause interference and faults that disturb the test process and lead to results that are not credible. Therefore it is necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would improve BIT reliability, but existing anti-jamming research mainly concerns safeguard design and signal processing. This paper focuses on monitoring test results and judging BIT equipment (BITE) failures, and a series of improved approaches is proposed. Firstly, the stress influences on components are illustrated and the effects on the diagnosis results are summarized. Secondly, a composite BIT program with information integration is proposed, and a stress monitoring program is given. Thirdly, based on a detailed analysis of system faults and the forms of BIT results, a test sequence control method is proposed. It assists BITE failure judgment and reduces error probability. Finally, validation cases prove that these approaches enhance credibility.

  1. Impact of Model Error on the Measurement of Flow Properties Needed to Describe Flow Through Porous Media La répercussion de l'erreur de modèle sur la mesure des propriétés d'un débit nécessaires pour décrire ce dernier à travers un milieu poreux

    Directory of Open Access Journals (Sweden)

    Bentsen R. G.

    2006-12-01

    Indirect methods are commonly employed to determine the fundamental flow properties needed to describe flow through porous media. Consequently, if one or more of the postulates underlying the mathematical description of such indirect methods is invalid, significant model error can be introduced into the measured value of the flow property. In particular, this study shows that effective mobility curves that include the effect of viscous coupling between fluid phases differ significantly from those that exclude such coupling. Moreover, it is shown that the conventional effective mobilities that pertain to steady-state, cocurrent flow, steady-state, countercurrent flow and pure countercurrent imbibition differ significantly. Thus, it appears that traditional effective mobilities are not true parameters; rather, they are infinitely nonunique. In addition, it is shown that, while neglect of hydrodynamic forces introduces a small amount of model error into the pressure difference curve for cocurrent flow in unconsolidated porous media, such neglect introduces a large amount of model error into the pressure difference curve for countercurrent flow in such porous media. Moreover, such neglect makes it difficult to explain why the pressure gradients that pertain to steady-state, countercurrent flow are opposite in sign. It is shown also that improper handling of the inlet boundary condition can introduce significant model error into the analysis. This is because, if a short core is used with one of the unsteady-state methods for determining effective mobility, it may take many pore volumes of injection before the inlet saturation rises to its maximal value, which is in contradiction with the usual assumption that the inlet saturation rises immediately to its maximal value. Finally, it is pointed out that, because of differences in flow regime and scale, the effective mobilities measured in the laboratory may not be appropriate for inclusion in the data

  2. Quantum bit commitment protocol without quantum memory

    OpenAIRE

    Ramos, Rubens Viana; Mendonca, Fabio Alencar

    2008-01-01

    Quantum protocols for bit commitment have been proposed and it is largely accepted that unconditionally secure quantum bit commitment is not possible; however, it can be more secure than classical bit commitment. Despite its usefulness, quantum bit commitment protocols have not been experimentally implemented. The main reason is the fact that all proposed quantum bit commitment protocols require quantum memory. In this work, we show a quantum bit commitment protocol that does not require quantum memory.

  3. MDR-ER: balancing functions for adjusting the ratio in risk classes and classification errors for imbalanced cases and controls using multifactor-dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Cheng-Hong Yang

    BACKGROUND: Determining the complex relationship between diseases, polymorphisms in human genes and environmental factors is challenging. Multifactor dimensionality reduction (MDR) has proven capable of effectively detecting statistical patterns of epistasis. However, MDR has its weakness in accurately assigning multi-locus genotypes to either high-risk or low-risk groups, and generally does not provide accurate error rates when the case and control data sets are imbalanced. Consequently, results for classification error rates and odds ratios (OR) may provide surprising values in that the true positive (TP) value is often small. METHODOLOGY/PRINCIPAL FINDINGS: To address this problem, we introduce a classifier function based on the ratio between the percentage of cases in case data and the percentage of controls in control data to improve MDR (MDR-ER) for multi-locus genotypes to be classified correctly into high-risk and low-risk groups. In this study, a real data set with different ratios of cases to controls (1:4) was obtained from the mitochondrial D-loop of chronic dialysis patients in order to test MDR-ER. The TP and TN values were collected from all tests to analyze to what degree MDR-ER performed better than MDR. CONCLUSIONS/SIGNIFICANCE: Results showed that MDR-ER can be successfully used to detect the complex associations in imbalanced data sets.

  4. MDR-ER: balancing functions for adjusting the ratio in risk classes and classification errors for imbalanced cases and controls using multifactor-dimensionality reduction.

    Science.gov (United States)

    Yang, Cheng-Hong; Lin, Yu-Da; Chuang, Li-Yeh; Chen, Jin-Bor; Chang, Hsueh-Wei

    2013-01-01

    Determining the complex relationship between diseases, polymorphisms in human genes and environmental factors is challenging. Multifactor dimensionality reduction (MDR) has proven capable of effectively detecting statistical patterns of epistasis. However, MDR has its weakness in accurately assigning multi-locus genotypes to either high-risk or low-risk groups, and generally does not provide accurate error rates when the case and control data sets are imbalanced. Consequently, results for classification error rates and odds ratios (OR) may provide surprising values in that the true positive (TP) value is often small. To address this problem, we introduce a classifier function based on the ratio between the percentage of cases in case data and the percentage of controls in control data to improve MDR (MDR-ER) for multi-locus genotypes to be classified correctly into high-risk and low-risk groups. In this study, a real data set with different ratios of cases to controls (1:4) was obtained from the mitochondrial D-loop of chronic dialysis patients in order to test MDR-ER. The TP and TN values were collected from all tests to analyze to what degree MDR-ER performed better than MDR. Results showed that MDR-ER can be successfully used to detect the complex associations in imbalanced data sets.
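
    The core of the ratio-based relabelling can be sketched in a few lines: a genotype cell is called high-risk when the share of all cases falling in that cell exceeds the share of all controls falling in it. The toy data below use the same 1:4 imbalance as the study; cross-validation and the remainder of the MDR-ER procedure are omitted.

```python
# Hedged sketch of ratio-based high-risk/low-risk labelling on toy SNP data;
# not the full published MDR-ER algorithm.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_cases, n_controls = 200, 800                       # imbalanced 1:4 design
df = pd.DataFrame({
    "snp1": rng.integers(0, 3, n_cases + n_controls),
    "snp2": rng.integers(0, 3, n_cases + n_controls),
    "case": np.r_[np.ones(n_cases, int), np.zeros(n_controls, int)],
})

counts = df.groupby(["snp1", "snp2"])["case"].agg(cases="sum", total="count")
counts["controls"] = counts["total"] - counts["cases"]
counts["high_risk"] = (counts["cases"] / n_cases) > (counts["controls"] / n_controls)
print(counts[["cases", "controls", "high_risk"]])
```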

  5. Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation

    Science.gov (United States)

    Swift, G.

    2002-01-01

    JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.

  6. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison in data achievement between two well-known algorithms with simulated and real measured data is presented. The algorithms maximise the data rate in a cooperative base stations (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm could be used in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference plus noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.

  7. A Holistic Approach to Bit Preservation

    DEFF Research Database (Denmark)

    Zierau, Eld Maj-Britt Olmütz

    2011-01-01

    This thesis presents three main results for a holistic approach to bit preservation, where the ultimate goal is to find the optimal bit preservation strategy for specific digital material that must be digitally preserved. Digital material consists of sequences of bits, where a bit is a binary digit...... which can have the value 0 or 1. Bit preservation must ensure that the bits remain intact and readable in the future, but bit preservation is not concerned with how bits can be interpreted as e.g. an image. A holistic approach to bit preservation includes aspects that influence the final choice of a bit...... preservation strategy. This can be aspects of how the permanent access to the digital material must be ensured. It can also be aspects of how the material must be treated as part of using it. This includes aspects related to how the digital material to be bit preserved is represented, as well as requirements...

  8. Bits and q-bits as versatility measures

    Directory of Open Access Journals (Sweden)

    José R.C. Piqueira

    2004-06-01

    Using Shannon information theory is a common strategy to measure any kind of variability in a signal or phenomenon. Some methods were developed to adapt information entropy measures to bird song data, trying to emphasize its versatility aspect. This classical approach, using the concept of the bit, produces interesting results. The original idea developed in this paper is to use quantum information theory and the quantum bit (q-bit) concept in order to provide a more complete view of the experimental results.
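
    The classical, bit-based versatility measure referred to above is simply the Shannon entropy of the observed syllable distribution; a minimal sketch follows (the syllable sequence is hypothetical). The quantum (q-bit) treatment would instead use density matrices and von Neumann entropy, which is not shown here.

```python
# Shannon entropy in bits of a hypothetical syllable sequence.
import numpy as np
from collections import Counter

def shannon_entropy_bits(sequence):
    counts = np.array(list(Counter(sequence).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

song = list("ABABCCABDABACD")        # hypothetical syllable sequence
print(f"{shannon_entropy_bits(song):.3f} bits per syllable")
```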

  9. Gas analyzer’s drift leads to systematic error in maximal oxygen uptake and maximal respiratory exchange ratio determination

    Directory of Open Access Journals (Sweden)

    Ibai eGarcia-Tabar

    2015-10-01

    The aim was to examine the drift in the measurements of the fractional concentrations of oxygen (FO2) and carbon dioxide (FCO2) of a Nafion-using metabolic cart during incremental maximal exercise in 18 young and 12 elderly males, and to propose a way in which the drift can be corrected. The drift was verified by comparing the pre-test calibration values with the immediate post-test verification values of the calibration gases. The system demonstrated an average downscale drift (P < 0.001) in FO2 and FCO2 of -0.18% and -0.05%, respectively. Compared with measured values, corrected average maximal oxygen uptake values were 5-6% lower (P < 0.001), whereas corrected maximal respiratory exchange ratio values were 8-9% higher (P < 0.001). The drift was not due to an electronic instability in the analyzers because it was reversed after 20 minutes of recovery from the end of the exercise. The drift may be related to an incomplete removal of water vapor from the expired gas during transit through the Nafion conducting tube. These data demonstrate the importance of checking FO2 and FCO2 values by regular pre-test calibrations and post-test verifications, and also the importance of correcting a possible shift immediately after exercise.

  10. Bit-string scattering theory

    Energy Technology Data Exchange (ETDEWEB)

    Noyes, H.P.

    1990-01-29

    We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc^2 in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free particle wave functions taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are "born collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc^2 our wave functions can be approximated by solutions of the free particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound and resonant state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G_πN^2)^2 = (2m_N/m_π)^2 - 1. 21 refs., 1 fig.

  11. Bit-string scattering theory

    International Nuclear Information System (INIS)

    Noyes, H.P.

    1990-01-01

    We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc^2 in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free particle wave functions taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are "born collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc^2 our wave functions can be approximated by solutions of the free particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound and resonant state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G_πN^2)^2 = (2m_N/m_π)^2 - 1. 21 refs., 1 fig

  12. Flexible Bit Preservation on a National Basis

    DEFF Research Database (Denmark)

    Jurik, Bolette; Nielsen, Anders Bo; Zierau, Eld

    2012-01-01

    In this paper we present the results from The Danish National Bit Repository project. The project aim was establishment of a system that can offer flexible and sustainable bit preservation solutions to Danish cultural heritage institutions. Here the bit preservation solutions must include support...... of bit safety as well as other requirements like e.g. confidentiality and availability. The Danish National Bit Repository is motivated by the need to investigate and handle bit preservation for digital cultural heritage. Digital preservation relies on the integrity of the bits which digital material...

  13. Very low bit rate video coding of moving targets

    Science.gov (United States)

    Garcia, Jose A.; Rodriguez-Sanchez, Rosa; Fdez-Valdivia, Joaquin; Martinez-Baena, Javier

    2006-03-01

    We propose a video coding scheme to improve moving-target detection at very low bit rate, based on two key features: energy-based quantizer formation, and optimized interquantizer and intraquantizer prioritization. Rational Embedded Wavelet Video Coding (REVIC) is a fully implemented software video codec of low complexity and without motion compensated filtering to provide additional simplicity, adaptivity, and error resilience. It is shown to be quite effective in video coding of moving targets (e.g., military vehicles) at very low bit rates, while retaining the attributes of complete embeddedness for progressive transmission and scalability by fidelity and resolution. The proposed coding technique improves the explanatory power of decoded sequences (to achieve maximum target detection versus bit-rate performance) for a video compression system. The explanatory power of compressed sequences is important in surveillance applications, where trained video analysts may utilize decoded sequences to support decision processes in strategic, operational, and tactical tasks.

  14. Effects of cane length and diameter and judgment type on the constant error ratio for estimated height in blindfolded, visually impaired, and sighted participants.

    Science.gov (United States)

    Huang, Kuo-Chen; Leung, Cherng-Yee; Wang, Hsiu-Feng

    2010-04-01

    The purpose of this study was to assess the ability of blindfolded, visually impaired, and sighted individuals to estimate object height as a function of cane length, cane diameter, and judgment type. 48 undergraduate students (ages 20 to 23 years) were recruited to participate in the study. Participants were divided into low-vision, severely myopic, and normal-vision groups. Five stimulus heights were explored with three cane lengths, varying cane diameters, and judgment types. The participants were asked to estimate the stimulus height with or without reference to a standard block. Results showed that the constant error ratio for estimated height improved with decreasing cane length and comparative judgment. The findings were unclear regarding the effect of cane length on haptic perception of height. Implications were discussed for designing environments, such as stair heights, chairs, the magnitude of apertures, etc., for visually impaired individuals.

  15. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    Science.gov (United States)

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using weighted least squares error filter is enhanced for scanning electron microscope (SEM) images. A diversity of sample images is captured and the performance is found to be better when compared with the moving average and the standard median filters, with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, are undesirable. A new noise reduction technique, based on cubic spline interpolation with Savitzky-Golay and weighted least squares error method, is developed. We apply the combined technique to single image signal-to-noise ratio estimation and noise reduction for SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset, and the estimation of the corresponding original autocorrelation. In the few test cases involving different images, the efficiency of the developed noise reduction filter is proved to be significantly better than those obtained from the other methods. Noise can be reduced efficiently with appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
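
    The smoothing chain named in the abstract can be approximated with standard library routines: Savitzky-Golay filtering of a noisy line profile followed by cubic-spline interpolation onto a finer grid. The sketch below uses a synthetic profile and omits the weighted-least-squares noise-estimation step of the published method.

```python
# Rough sketch: Savitzky-Golay smoothing plus cubic-spline interpolation on a
# synthetic noisy SEM-like line profile; filter settings are illustrative.
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
x = np.arange(0, 256)
clean = 100 + 40 * np.exp(-((x - 128) / 30.0) ** 2)     # synthetic feature profile
noisy = clean + rng.normal(0, 5, x.size)                # additive noise stand-in

smoothed = savgol_filter(noisy, window_length=15, polyorder=3)
spline = CubicSpline(x, smoothed)
profile = spline(np.linspace(0, 255, 1024))             # finer sampling grid

snr_db = 10 * np.log10(np.var(clean) / np.var(noisy - clean))
print(f"input SNR ~ {snr_db:.1f} dB, residual after smoothing ~ {np.std(smoothed - clean):.2f}")
```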

  16. Cheat Sensitive Quantum Bit Commitment

    OpenAIRE

    Hardy, Lucien; Kent, Adrian

    1999-01-01

    We define cheat sensitive cryptographic protocols between mistrustful parties as protocols which guarantee that, if either cheats, the other has some nonzero probability of detecting the cheating. We give an example of an unconditionally secure cheat sensitive non-relativistic bit commitment protocol which uses quantum information to implement a task which is classically impossible; we also describe a simple relativistic protocol.

  17. Cheat sensitive quantum bit commitment.

    Science.gov (United States)

    Hardy, Lucien; Kent, Adrian

    2004-04-16

    We define cheat sensitive cryptographic protocols between mistrustful parties as protocols which guarantee that, if either cheats, the other has some nonzero probability of detecting the cheating. We describe an unconditionally secure cheat sensitive nonrelativistic bit commitment protocol which uses quantum information to implement a task which is classically impossible; we also describe a simple relativistic protocol.

  18. Hey! A Louse Bit Me!

    Science.gov (United States)

    ... of a sesame seed, and are tan to gray in color. Lice need to suck a tiny bit of blood to survive, and they sometimes live on people's heads and lay eggs in the hair , on the back of the neck, or behind ...

  19. DSC and universal bit-level combining for HARQ systems

    Directory of Open Access Journals (Sweden)

    Lv Tiejun

    2011-01-01

    This paper proposes a Dempster-Shafer theory based combining scheme for single-input single-output (SISO) systems with hybrid automatic retransmission request (HARQ), referred to as DSC, in which two methods for soft information calculation are developed for equiprobable (EP) and non-equiprobable (NEP) sources, respectively. One is based on the distance from the received signal to the decision candidate set consisting of adjacent constellation points when the source bits are equiprobable, and the corresponding DSC is referred to as DSC-D. The other is based on the posterior probability of the transmitted signals when the a priori probability for the NEP source bits is available, and the corresponding DSC is referred to as DSC-APP. For the diverse EP and NEP source cases, both DSC-D and DSC-APP are superior to maximal ratio combining, the so-called optimal combining scheme for SISO systems. Moreover, the robustness of the proposed DSC is illustrated by simulations performed in Rayleigh and AWGN channels, respectively. The results show that the proposed DSC is insensitive to, and especially applicable to, fading channels. In addition, a DS detection-aided bit-level DS combining scheme is proposed for multiple-input multiple-output HARQ systems. The bit-level DS combining is deduced to be a universal scheme, of which the traditional log-likelihood-ratio combining is a special case when the likelihood probability is used as the bit-level soft information.
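
    The classical baseline that such schemes are compared against is plain bit-level log-likelihood-ratio combining, in which per-transmission LLRs are summed before the hard decision; the sketch below shows this for BPSK over AWGN with illustrative parameters, not the proposed Dempster-Shafer combining.

```python
# Baseline bit-level LLR combining for HARQ retransmissions of BPSK over AWGN.
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_retx, noise_var = 10_000, 3, 1.0
bits = rng.integers(0, 2, n_bits)
s = 1.0 - 2.0 * bits                                  # BPSK: 0 -> +1, 1 -> -1

llr_total = np.zeros(n_bits)
for _ in range(n_retx):                               # each (re)transmission
    y = s + rng.normal(0, np.sqrt(noise_var), n_bits)
    llr_total += 2.0 * y / noise_var                  # AWGN bit LLR, then accumulate

ber_combined = np.mean((llr_total < 0).astype(int) != bits)
print(f"BER after combining {n_retx} transmissions: {ber_combined:.4f}")
```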

  20. Parity Bit Replenishment for JPEG 2000-Based Video Streaming

    Directory of Open Access Journals (Sweden)

    François-Olivier Devaux

    2009-01-01

    Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful to adapt on the fly pre-encoded content to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance to the JPEG 2000 wavelet representation, a particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also spatial correlation among wavelet subbands coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that compared to pure INTRA-based conditional replenishment solutions, the addition of the parity bits option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.

  1. A holistic approach to bit preservation

    DEFF Research Database (Denmark)

    Zierau, Eld

    2012-01-01

    Purpose: The purpose of this paper is to point out the importance of taking a holistic approach to bit preservation when setting out to find an optimal bit preservation solution for specific digital materials. In the last decade there has been an increasing awareness that bit preservation, which...... is to keep bits intact and readable, is far more complex than first anticipated, even in this narrow definition. This paper takes a more holistic approach to bit preservation, and looks at how an optimal bit preservation strategy can be found, when requirements like confidentiality, availability and costs...... are taken into account. Design/methodology/approach: The paper describes the various findings from previous research which have led to the holistic approach to bit preservation. This paper also includes an introduction to digital preservation with a focus on the role of bit preservation, which sets...

  2. Flexible Bit Preservation on a National Basis

    DEFF Research Database (Denmark)

    Jurik, Bolette; Nielsen, Anders Bo; Zierau, Eld

    2012-01-01

    In this paper we present the results from The Danish National Bit Repository project. The project aim was establishment of a system that can offer flexible and sustainable bit preservation solutions to Danish cultural heritage institutions. Here the bit preservation solutions must include support...... of bit safety as well as other requirements like e.g. confidentiality and availability. The Danish National Bit Repository is motivated by the need to investigate and handle bit preservation for digital cultural heritage. Digital preservation relies on the integrity of the bits which digital material...... consists of, and it is with this focus that the project was initiated. This paper summarizes the requirements for a general system to offer bit preservation to cultural heritage institutions. On this basis the paper describes the resulting flexible system which can support such requirements. The paper...

  3. Capped bit patterned media for high density magnetic recording

    Science.gov (United States)

    Li, Shaojing; Livshitz, Boris; Bertram, H. Neal; Inomata, Akihiro; Fullerton, Eric E.; Lomakin, Vitaliy

    2009-04-01

    A capped composite patterned medium design is described which comprises an array of hard elements exchange coupled to a continuous cap layer. The role of the cap layer is to lower the write field of the individual hard element and introduce ferromagnetic exchange interactions between hard elements to compensate the magnetostatic interactions. Modeling results show significant reduction in the reversal field distributions caused by the magnetization states in the array which is important to prevent bit errors and increase achievable recording densities.

  4. Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code

    OpenAIRE

    Cinna Soltanpur; Mohammad Ghamari; Behzad Momahed Heravi; Fatemeh Zare

    2017-01-01

    Low-density parity-check (LDPC) codes have been shown to deliver capacity approaching performance; however, problematic graphical structures (e.g. trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of LDPC code. The outer code in the proposed concatenation is the LDPC, and ...

  5. Pulse Sign Separation Technique for the Received Bits in Wireless Ultra-Wideband Combination Approach

    Directory of Open Access Journals (Sweden)

    Rashid A. Fayadh

    2014-01-01

    Full Text Available When receiving high data rates in ultra-wideband (UWB) technology, many users have experienced multiple-user interference and intersymbol interference in the multipath reception technique. Structures have been proposed for implementing rake receivers to enhance their capabilities by reducing the bit error probability (Pe), thereby providing better performance for indoor and outdoor multipath receivers. As a result, several rake structures have been proposed in the past to reduce the number of resolvable paths that must be estimated and combined. To achieve this aim, we suggest two maximal ratio combiners based on the pulse sign separation technique, namely the pulse sign separation selective combiner (PSS-SC) and the pulse sign separation partial combiner (PSS-PC), to reduce complexity with fewer fingers and to improve the system performance. In the combiners, a comparator was added to compare the positive quantity of positive pulses and the negative quantity of negative pulses to decide whether the transmitted bit was 1 or 0. The Pe was derived by simulation for multipath environments with impulse radio time-hopping binary phase shift keying (TH-BPSK) modulation, and the results were compared with those of conventional selective combiners (C-SCs) and conventional partial combiners (C-PCs).

  6. ERROR-CONTROL CODING OF ADS-B MESSAGES FOR IRIDIUM SATELLITES

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2013-12-01

    Full Text Available For modelling of ADS-B message transmission based on the low-orbit satellite constellation Iridium, a model of the communication channel “Aircraft - Satellite - Ground Station” was built using MATLAB Simulink. This model allowed investigation of the dependence of the bit error rate on the type of signal coding/decoding, the ratio Eb/N0, and the satellite repeater gain.
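
    The record summarizes the study's scope without giving formulas; the dependence being investigated can be illustrated with the textbook bit error rate of coherent BPSK over an AWGN channel, Pb = Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0)). The short sketch below shows only this generic relation, not the MATLAB Simulink channel model used in the paper.

        import numpy as np
        from scipy.special import erfc

        def ber_bpsk_awgn(ebn0_db):
            """Theoretical BER of coherent BPSK over AWGN: Pb = 0.5*erfc(sqrt(Eb/N0))."""
            ebn0 = 10.0 ** (np.asarray(ebn0_db, dtype=float) / 10.0)
            return 0.5 * erfc(np.sqrt(ebn0))

        for ebn0_db in (0, 4, 8, 12):
            print(f"Eb/N0 = {ebn0_db:2d} dB  ->  BER = {float(ber_bpsk_awgn(ebn0_db)):.3e}")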

  7. Differential-phase-shift quantum key distribution experiment using fast physical random bit generator with chaotic semiconductor lasers.

    Science.gov (United States)

    Honjo, Toshimori; Uchida, Atsushi; Amano, Kazuya; Hirano, Kunihito; Someya, Hiroyuki; Okumura, Haruka; Yoshimura, Kazuyuki; Davis, Peter; Tokura, Yasuhiro

    2009-05-25

    A high speed physical random bit generator is applied for the first time to a gigahertz clocked quantum key distribution system. Random phase-modulation in a differential-phase-shift quantum key distribution (DPS-QKD) system is performed using a 1-Gbps random bit signal which is generated by a physical random bit generator with chaotic semiconductor lasers. Stable operation is demonstrated for over one hour, and sifted keys are successfully generated at a rate of 9.0 kbps with a quantum bit error rate of 3.2% after 25-km fiber transmission.

  8. Estimation of entropy rate in a fast physical random-bit generator using a chaotic semiconductor laser with intrinsic noise.

    Science.gov (United States)

    Mikami, Takuya; Kanno, Kazutaka; Aoyama, Kota; Uchida, Atsushi; Ikeguchi, Tohru; Harayama, Takahisa; Sunada, Satoshi; Arai, Ken-ichi; Yoshimura, Kazuyuki; Davis, Peter

    2012-01-01

    We analyze the time for growth of bit entropy when generating nondeterministic bits using a chaotic semiconductor laser model. The mechanism for generating nondeterministic bits is modeled as a 1-bit sampling of the intensity of light output. Microscopic noise results in an ensemble of trajectories whose bit entropy increases with time. The time for the growth of bit entropy, called the memory time, depends on both noise strength and laser dynamics. It is shown that the average memory time decreases logarithmically with increase in noise strength. It is argued that the ratio of change in average memory time with change in logarithm of noise strength can be used to estimate the intrinsic dynamical entropy rate for this method of random bit generation. It is also shown that in this model the entropy rate corresponds to the maximum Lyapunov exponent.

  9. In bits, bytes and stone

    DEFF Research Database (Denmark)

    Sabra, Jakob Borrits; Andersen, Hans Jørgen

    designs'. Urns, coffins, graves, cemeteries, memorials, monuments, websites, applications and software services, whether cut in stone or made of bits, are all influenced by discourses of publics, economics, power, technology and culture. Designers, programmers, stakeholders and potential end-users often....... The findings in this paper are contextualized through a qualitative ethnographic research design based on Danish cemetery users and mourners and their different experiences with and attitudes towards new online grief, mourning and remembrance designs, platforms, services and initiatives. Additionally...... constitute parts of an intricately weaved and interrelated network of practices dealing with death, mourning, memorialization and remembrance. Design pioneering company IDEO'S recent failed attempt to 'redesign death' is an example of how delicate and difficult it is to work with digital and symbolic 'death...

  10. FastBit Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng

    2007-08-02

    An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
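
    FastBit's WAH compression itself is too involved for a short excerpt, but the core idea described above, answering queries with bitwise logical operations over per-value bitmaps, can be sketched with uncompressed bitmaps as follows. This is an illustration of the principle only, not FastBit code.

        import numpy as np

        def build_bitmap_index(column, values):
            """One (uncompressed) bitmap per distinct value: bit i is set iff row i matches."""
            return {v: column == v for v in values}

        # Toy categorical column with 8 rows.
        column = np.array([3, 1, 2, 3, 3, 1, 2, 2])
        index = build_bitmap_index(column, values=(1, 2, 3))

        # The range query "value >= 2" is answered by OR-ing the bitmaps for 2 and 3;
        # WAH performs the same logical operation word by word on compressed bitmaps.
        hits = index[2] | index[3]
        print("matching row ids:", np.nonzero(hits)[0])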

  11. Stinger Enhanced Drill Bits For EGS

    Energy Technology Data Exchange (ETDEWEB)

    Durrand, Christopher J. [Novatek International, Inc., Provo, UT (United States); Skeem, Marcus R. [Novatek International, Inc., Provo, UT (United States); Crockett, Ron B. [Novatek International, Inc., Provo, UT (United States); Hall, David R. [Novatek International, Inc., Provo, UT (United States)

    2013-04-29

    The project objectives were to design, engineer, test, and commercialize a drill bit suitable for drilling in hard rock and high temperature environments (10,000 meters) likely to be encountered in drilling enhanced geothermal wells. The goal is to provide a drill bit that can aid in increasing the penetration rate to three times that of conventional drilling. Novatek has sought to leverage its polycrystalline diamond technology and a new conical cutter shape, known as the Stinger®, for this purpose. Novatek has developed a fixed bladed bit, known as the JackBit®, populated with both shear cutters and Stingers, that is currently being tested by major drilling companies for geothermal and oil and gas applications. The JackBit concept comprises a fixed bladed bit with a center indenter, referred to as the Jack. The JackBit has been extensively tested in the lab and in the field. The JackBit has been transferred to a major bit manufacturer and oil service company. Except for the attached published reports all other information is confidential.

  12. Hey! A Mosquito Bit Me! (For Kids)

    Science.gov (United States)


  13. Optimal multitone bit allocation for fixed-rate video transmission over ADSL

    Science.gov (United States)

    Antonini, Marc; Moureaux, Jean-Marie; Lecuire, Vincent

    2002-01-01

    In this paper we propose a novel approach for the bit allocation performed in an ADSL modulator. This new method is based on the observation that the transmission speed using ADSL strongly depends on the distance between the central office and the subscriber's side and does not permit real-time transmission for high bitrate video on long distances. The algorithm we develop takes into account the characteristics of a video sequence and distributes the channel error according to visual sensitivity. This method involves variable transmission Bit Error Rate.

  14. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  15. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped...... with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth...... images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed...
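
    A minimal sketch of the MSB/LSB mapping described above, assuming uint16 input and leaving out the 8-bit codec stage itself:

        import numpy as np

        def split_msb_lsb(img16):
            """Map a 16-bit image to two 8-bit images: most and least significant bytes."""
            img16 = img16.astype(np.uint16)
            msb = (img16 >> 8).astype(np.uint8)    # most significant byte image
            lsb = (img16 & 0xFF).astype(np.uint8)  # least significant byte image
            return msb, lsb

        def merge_msb_lsb(msb, lsb):
            """Reassemble the 16-bit image once the two 8-bit images are decoded."""
            return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

        img = np.random.randint(0, 2 ** 16, size=(4, 4), dtype=np.uint16)
        msb, lsb = split_msb_lsb(img)
        assert np.array_equal(merge_msb_lsb(msb, lsb), img)  # lossless round trip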

  16. PERBANDINGAN APLIKASI MENGGUNAKAN METODE CAMELLIA 128 BIT KEY DAN 256 BIT KEY

    Directory of Open Access Journals (Sweden)

    Lanny Sutanto

    2014-01-01

    Full Text Available The rapid development of the Internet today makes it easy to exchange data, which leads to a high risk of data piracy. One way to secure data is to use Camellia cryptography. Camellia is known as a method with fast encryption and decryption times. The Camellia method supports three key sizes: 128 bits, 192 bits, and 256 bits. This application is created using the C++ programming language and a Visual Studio 2010 GUI. This research compares the smallest and largest key sizes on files with the extensions .txt, .doc, .docx, .jpg, .mp4, .mkv and .flv. The application is made to compare the time and the level of security when using a 128-bit key and a 256-bit key. The comparison of security is done by comparing the avalanche effect values obtained with the 128-bit and 256-bit keys.
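
    The avalanche effect used for the comparison is, in essence, the fraction of ciphertext bits that change when a single input bit (here, a key bit) is flipped; a value near 0.5 indicates good diffusion. The measurement itself is cipher-independent and can be sketched as below; camellia_encrypt and flip_bit in the commented-out usage are hypothetical placeholders for whatever Camellia implementation is used.

        def bit_difference_ratio(block_a: bytes, block_b: bytes) -> float:
            """Avalanche effect: fraction of differing bits between two equal-length blocks."""
            assert len(block_a) == len(block_b)
            differing = sum(bin(x ^ y).count("1") for x, y in zip(block_a, block_b))
            return differing / (8 * len(block_a))

        # Hypothetical usage: encrypt the same plaintext under a key and under the same
        # key with one bit flipped, then compare the two ciphertext blocks.
        #   ct1 = camellia_encrypt(key, plaintext)
        #   ct2 = camellia_encrypt(flip_bit(key, 0), plaintext)
        #   print(bit_difference_ratio(ct1, ct2))
        print(bit_difference_ratio(b"\x00\xff", b"\x0f\xf0"))  # 0.5: half of the 16 bits differ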

  17. Digital sound: Subjective tests on low bit-rate codecs

    Science.gov (United States)

    Gilchrist, N. H. C.

    At the beginning of 1990, BBC Research Department tested four experimental high-quality low bit-rate audio codecs which were under development as part of the Eureka 147 Digital Audio Broadcasting project. The work involved preliminary listening tests to identify critical test material, followed by formal subjective tests to determine audio quality and error performance. The listeners could detect some loss of audio quality with all of the codecs using the most critical material. There were also indications that one of the codecs did not always reproduce the phantom sound sources in their correct position.

  18. Semifragile Speech Watermarking Based on Least Significant Bit Replacement of Line Spectral Frequencies

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Nematollahi

    2017-01-01

    Full Text Available There are various techniques for speech watermarking based on modifying the linear prediction coefficients (LPCs); however, the estimated and modified LPCs vary from each other even without attacks. Because line spectral frequency (LSF) has less sensitivity to watermarking than LPC, watermark bits are embedded into the maximum number of LSFs by applying the least significant bit replacement (LSBR) method. To reduce the differences between estimated and modified LPCs, a checking loop is added to minimize the watermark extraction error. Experimental results show that the proposed semifragile speech watermarking method can provide high imperceptibility and that any manipulation of the watermark signal destroys the watermark bits, since manipulation changes it to a random stream of bits.

  19. Digital Signal Processing For Low Bit Rate TV Image Codecs

    Science.gov (United States)

    Rao, K. R.

    1987-06-01

    In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real time full motion color video are under various stages of development. Some companies have already brought the codecs into the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freezeframe, split video, text overlay etc. To transmit the original color video on a 56 KBPS network requires bit rate reduction of the order of 1400:1. Such a large scale bandwidth compression can be realized only by implementing a number of sophisticated,digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are quantizer, bit allocation, buffer, multiplexer, channel coding etc.

  20. Changes realized from extended bit-depth and metal artifact reduction in CT

    Energy Technology Data Exchange (ETDEWEB)

    Glide-Hurst, C.; Chen, D.; Zhong, H.; Chetty, I. J. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 (United States)

    2013-06-15

    Purpose: High-Z material in computed tomography (CT) yields metal artifacts that degrade image quality and may cause substantial errors in dose calculation. This study couples a metal artifact reduction (MAR) algorithm with enhanced 16-bit depth (vs standard 12-bit) to quantify potential gains in image quality and dosimetry. Methods: Extended CT to electron density (CT-ED) curves were derived from a tissue characterization phantom with titanium and stainless steel inserts scanned at 90-140 kVp for 12- and 16-bit reconstructions. MAR was applied to sinogram data (Brilliance BigBore CT scanner, Philips Healthcare, v.3.5). Monte Carlo simulation (MC-SIM) was performed on a simulated double hip prostheses case (Cerrobend rods embedded in a pelvic phantom) using BEAMnrc/Dosxyz (400 000 0000 histories, 6X, 10 × 10 cm² beam traversing Cerrobend rod). A phantom study was also conducted using a stainless steel rod embedded in solid water, and dosimetric verification was performed with Gafchromic film analysis (absolute difference and gamma analysis, 2% dose and 2 mm distance to agreement) for plans calculated with Anisotropic Analytic Algorithm (AAA, Eclipse v11.0) to elucidate changes between 12- and 16-bit data. Three patients (bony metastases to the femur and humerus, and a prostate cancer case) with metal implants were reconstructed using both bit depths, with dose calculated using AAA and derived CT-ED curves. Planar dose distributions were assessed via matrix analyses and using gamma criteria of 2%/2 mm. Results: For 12-bit images, CT numbers for titanium and stainless steel saturated at 3071 Hounsfield units (HU), whereas for 16-bit depth, mean CT numbers were much larger (e.g., titanium and stainless steel yielded HU of 8066.5 ± 56.6 and 13 588.5 ± 198.8 for 16-bit uncorrected scans at 120 kVp, respectively). MC-SIM was well-matched between 12- and 16-bit images except downstream of the Cerrobend rod, where 16-bit dose was ≈6

  1. Performance of an Error Control System with Turbo Codes in Powerline Communications

    Directory of Open Access Journals (Sweden)

    Balbuena-Campuzano Carlos Alberto

    2014-07-01

    Full Text Available This paper reports the performance of turbo codes as an error control technique in PLC (Powerline Communications) data transmissions. For this system, computer simulations are used for modeling data networks based on the model classified in the technical literature as indoor, and uses OFDM (Orthogonal Frequency Division Multiplexing) as a modulation technique. Taking into account the channel, modulation and turbo codes, we propose a methodology to minimize the bit error rate (BER) as a function of the average received signal-to-noise ratio (SNR).

  2. A 1 GHz sample rate, 256-channel, 1-bit quantization, CMOS, digital correlator chip

    Science.gov (United States)

    Timoc, C.; Tran, T.; Wongso, J.

    1992-01-01

    This paper describes the development of a digital correlator chip with the following features: 1 Giga-sample/second; 256 channels; 1-bit quantization; 32-bit counters providing up to 4 seconds integration time at 1 GHz; and very low power dissipation per channel. The improvements in the performance-to-cost ratio of the digital correlator chip are achieved with a combination of systolic architecture, novel pipelined differential logic circuits, and standard 1.0 micron CMOS process.
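
    With 1-bit quantization, each correlator channel reduces to counting sign agreements between two bit streams, which is what the chip accumulates in its 32-bit counters using logic gates. A software sketch of that per-channel operation (illustration only; the van Vleck correction that maps the count back to an analog correlation estimate is omitted):

        import numpy as np

        def one_bit_correlate(x, y):
            """Count sign agreements between two streams after 1-bit quantization."""
            bx = x >= 0                              # 1-bit quantization of each stream
            by = y >= 0
            return int(np.count_nonzero(bx == by))   # XNOR followed by a counter

        rng = np.random.default_rng(0)
        a = rng.standard_normal(100_000)
        b = 0.7 * a + rng.standard_normal(100_000)   # partially correlated stream
        print(one_bit_correlate(a, b), "agreements out of 100000 samples")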

  3. BitPAl: a bit-parallel, general integer-scoring sequence alignment algorithm.

    Science.gov (United States)

    Loving, Joshua; Hernandez, Yozen; Benson, Gary

    2014-11-15

    Mapping of high-throughput sequencing data and other bulk sequence comparison applications have motivated a search for high-efficiency sequence alignment algorithms. The bit-parallel approach represents individual cells in an alignment scoring matrix as bits in computer words and emulates the calculation of scores by a series of logic operations composed of AND, OR, XOR, complement, shift and addition. Bit-parallelism has been successfully applied to the longest common subsequence (LCS) and edit-distance problems, producing fast algorithms in practice. We have developed BitPAl, a bit-parallel algorithm for general, integer-scoring global alignment. Integer-scoring schemes assign integer weights for match, mismatch and insertion/deletion. The BitPAl method uses structural properties in the relationship between adjacent scores in the scoring matrix to construct classes of efficient algorithms, each designed for a particular set of weights. In timed tests, we show that BitPAl runs 7-25 times faster than a standard iterative algorithm. Source code is freely available for download at http://lobstah.bu.edu/BitPAl/BitPAl.html. BitPAl is implemented in C and runs on all major operating systems. jloving@bu.edu or yhernand@bu.edu or gbenson@bu.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
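
    BitPAl's general integer-scoring recurrences are too long to reproduce here, but the bit-parallel principle the abstract refers to, packing one matrix cell per bit of a machine word and updating a whole row with a handful of logic operations, is easy to see in the classic Shift-And exact-matching algorithm. The sketch below shows only that simpler illustration; it is not BitPAl's algorithm.

        def shift_and_search(text: str, pattern: str):
            """Bit-parallel exact matching: one automaton state per bit of an integer."""
            m = len(pattern)
            # For each character, a mask with bit j set wherever pattern[j] == c.
            masks = {}
            for j, c in enumerate(pattern):
                masks[c] = masks.get(c, 0) | (1 << j)

            state, hits = 0, []
            for i, c in enumerate(text):
                # Extend every active prefix by one character with shift, OR and AND.
                state = ((state << 1) | 1) & masks.get(c, 0)
                if state & (1 << (m - 1)):           # bit m-1 set => match ending at i
                    hits.append(i - m + 1)
            return hits

        print(shift_and_search("abracadabra", "abra"))   # [0, 7]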

  4. Bit Loading Algorithms for Cooperative OFDM Systems

    Directory of Open Access Journals (Sweden)

    Gui Bo

    2008-01-01

    Full Text Available Abstract We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.
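
    The record does not spell out the allocation rule, but a common baseline that such schemes build on is greedy (Hughes-Hartogs-style) bit loading: repeatedly give the next bit to the subchannel that needs the least extra power for it. A sketch of that baseline under a standard SNR-gap approximation is shown below; it is an illustration, not the paper's algorithm, and the gain values are made up.

        import heapq

        def greedy_bit_loading(channel_gains, total_bits, gap=1.0):
            """Greedy bit loading with per-subchannel power P(b) = gap*(2**b - 1)/g."""
            bits = [0] * len(channel_gains)
            # Heap of (incremental power to go from b to b+1 bits, subchannel index).
            heap = [(gap / g, k) for k, g in enumerate(channel_gains)]
            heapq.heapify(heap)
            for _ in range(total_bits):
                _, k = heapq.heappop(heap)
                bits[k] += 1
                heapq.heappush(heap, (gap * 2 ** bits[k] / channel_gains[k], k))
            power = sum(gap * (2 ** b - 1) / g for b, g in zip(bits, channel_gains))
            return bits, power

        gains = [3.2, 1.1, 0.4, 2.5]     # example subchannel SNR gains (arbitrary)
        print(greedy_bit_loading(gains, total_bits=8))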

  5. FastBit: Interactively Searching Massive Data

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat,; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming

    2009-06-23

    As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.

  6. Criteria for core sampling bit temperature monitor

    International Nuclear Information System (INIS)

    Francis, P.M.

    1994-08-01

    A temperature monitoring device needs to be developed for the tank core sampling trucks. It will provide an additional indication of safe drill bit temperatures and give the operator a better feel for the effects of changing drill settings. This document defines the criteria for the bit monitoring system, including performance requirements, information on the core sampling system, and other conditions that may be encountered

  7. Numerical optimization of writer geometries for bit patterned magnetic recording

    Science.gov (United States)

    Kovacs, A.; Oezelt, H.; Bance, S.; Fischbacher, J.; Gusenbauer, M.; Reichel, F.; Exl, L.; Schrefl, T.; Schabes, M. E.

    2014-05-01

    A fully-automated pole-tip shape optimization tool, involving write head geometry construction, meshing, micromagnetic simulation, and evaluation, is presented. Optimizations have been performed for three different writing schemes (centered, staggered, and shingled) for an underlying bit patterned media with an areal density of 2.12 Tdots/in². Optimizations were performed for a single-phase media with 10 nm thickness and a mag spacing of 8 nm. From the computed write field and its gradient and the minimum energy barrier during writing for islands on the adjacent track, the overall write error rate is computed. The overall write errors are 0.7, 0.08, and 2.8×10⁻⁵ for centered writing, staggered writing, and shingled writing.

  8. 0011-0030.IEEE 754: 64 Bit Double Precision FloatsThis.pdf | 01 ...

    Indian Academy of Sciences (India)


  9. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    Science.gov (United States)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision, those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
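
    The core operation behind Bit Grooming is masking the trailing mantissa bits that only carry false precision, so that downstream lossless compression sees long runs of identical bits. A rough NumPy sketch of that masking step for float32 data follows; the real NCO implementation alternates zeroing and setting the masked bits to remain statistically unbiased and derives the mask from a requested number of significant digits, both of which are simplified away here.

        import numpy as np

        def groom_float32(values, keep_mantissa_bits=12):
            """Zero the trailing (23 - keep_mantissa_bits) mantissa bits of float32 data.

            Simplified, biased variant ("shaving" only); see the lead-in above for the
            differences from the published Bit Grooming algorithm.
            """
            drop = 23 - keep_mantissa_bits                  # float32 has a 23-bit mantissa
            mask = np.uint32((0xFFFFFFFF >> drop) << drop)  # keep sign, exponent, top bits
            raw = values.astype(np.float32).view(np.uint32)
            return (raw & mask).view(np.float32)

        data = np.array([3.14159265, 2.71828183, 1.41421356], dtype=np.float32)
        print(groom_float32(data, keep_mantissa_bits=10))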

  10. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    Directory of Open Access Journals (Sweden)

    Yao Wang

    2017-05-01

    Full Text Available The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with a 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with a Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with a low-density parity-check (LDPC) decoder. Then, using the estimated bit information from the main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.

  11. A Novel Rate Control Scheme for Constant Bit Rate Video Streaming

    Directory of Open Access Journals (Sweden)

    Venkata Phani Kumar M

    2015-08-01

    Full Text Available In this paper, a novel rate control mechanism is proposed for constant bit rate video streaming. The initial quantization parameter used for encoding a video sequence is determined using the average spatio-temporal complexity of the sequence, its resolution and the target bit rate. Simple linear estimation models are then used to predict the number of bits that would be necessary to encode a frame for a given complexity and quantization parameter. The experimental results demonstrate that our proposed rate control mechanism significantly outperforms the existing rate control scheme in the Joint Model (JM) reference software in terms of Peak Signal to Noise Ratio (PSNR) and consistent perceptual visual quality while achieving the target bit rate. Furthermore, the proposed scheme is validated through implementation on a miniature test-bed.

  12. Stochastic p-Bits for Invertible Logic

    Directory of Open Access Journals (Sweden)

    Kerem Yunus Camsari

    2017-07-01

    Full Text Available Conventional semiconductor-based logic and nanomagnet-based memory devices are built out of stable, deterministic units such as standard metal-oxide semiconductor transistors, or nanomagnets with energy barriers in excess of ≈40–60 kT. In this paper, we show that unstable, stochastic units, which we call “p-bits,” can be interconnected to create robust correlations that implement precise Boolean functions with impressive accuracy, comparable to standard digital circuits. At the same time, they are invertible, a unique property that is absent in standard digital circuits. When operated in the direct mode, the input is clamped, and the network provides the correct output. In the inverted mode, the output is clamped, and the network fluctuates among all possible inputs that are consistent with that output. First, we present a detailed implementation of an invertible gate to bring out the key role of a single three-terminal transistorlike building block to enable the construction of correlated p-bit networks. The results for this specific, CMOS-assisted nanomagnet-based hardware implementation agree well with those from a universal model for p-bits, showing that p-bits need not be magnet based: any three-terminal tunable random bit generator should be suitable. We present a general algorithm for designing a Boltzmann machine (BM) with a symmetric connection matrix [J] (J_{ij}=J_{ji}) that implements a given truth table with p-bits. The [J] matrices are relatively sparse with a few unique weights for convenient hardware implementation. We then show how BM full adders can be interconnected in a partially directed manner (J_{ij}≠J_{ji}) to implement large logic operations such as 32-bit binary addition. Hundreds of stochastic p-bits get precisely correlated such that the correct answer out of 2^{33} (≈8×10^{9}) possibilities can be extracted by looking at the statistical mode or majority vote of a number of time samples. With perfect

  13. Stochastic p -Bits for Invertible Logic

    Science.gov (United States)

    Camsari, Kerem Yunus; Faria, Rafatul; Sutton, Brian M.; Datta, Supriyo

    2017-07-01

    Conventional semiconductor-based logic and nanomagnet-based memory devices are built out of stable, deterministic units such as standard metal-oxide semiconductor transistors, or nanomagnets with energy barriers in excess of ≈40-60 kT. In this paper, we show that unstable, stochastic units, which we call "p-bits," can be interconnected to create robust correlations that implement precise Boolean functions with impressive accuracy, comparable to standard digital circuits. At the same time, they are invertible, a unique property that is absent in standard digital circuits. When operated in the direct mode, the input is clamped, and the network provides the correct output. In the inverted mode, the output is clamped, and the network fluctuates among all possible inputs that are consistent with that output. First, we present a detailed implementation of an invertible gate to bring out the key role of a single three-terminal transistorlike building block to enable the construction of correlated p-bit networks. The results for this specific, CMOS-assisted nanomagnet-based hardware implementation agree well with those from a universal model for p-bits, showing that p-bits need not be magnet based: any three-terminal tunable random bit generator should be suitable. We present a general algorithm for designing a Boltzmann machine (BM) with a symmetric connection matrix [J] (J_{ij}=J_{ji}) that implements a given truth table with p-bits. The [J] matrices are relatively sparse with a few unique weights for convenient hardware implementation. We then show how BM full adders can be interconnected in a partially directed manner (J_{ij}≠J_{ji}) to implement large logic operations such as 32-bit binary addition. Hundreds of stochastic p-bits get precisely correlated such that the correct answer out of 2^{33} (≈8×10^{9}) possibilities can be extracted by looking at the statistical mode or majority vote of a number of time samples. With perfect directivity (J_{ji}=0) a small
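
    A frequently quoted behavioural model for a p-bit is m_i = sgn(tanh(beta*I_i) - r), with r drawn uniformly from (-1, 1) and input I_i = sum_j J_ij*m_j + h_i. The sketch below runs such a network with weights chosen by hand for this illustration (they are not taken from the paper) so that the four ground states encode an AND gate; clamping the inputs and reading the output corresponds to the "direct mode" described above.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hand-built Boltzmann-machine weights whose ground states are the four valid
        # (A, B, C = A AND B) configurations, with spins m in {-1, +1}.
        J = np.array([[0.0, -1.0, 2.0],
                      [-1.0, 0.0, 2.0],
                      [2.0, 2.0, 0.0]])
        h = np.array([1.0, 1.0, -2.0])

        def p_bit_sweep(m, beta, clamped):
            """One asynchronous sweep: m_i = sgn(tanh(beta*I_i) - r), r ~ U(-1, 1)."""
            for i in rng.permutation(len(m)):
                if int(i) in clamped:
                    continue
                I = J[i] @ m + h[i]
                m[i] = 1.0 if np.tanh(beta * I) > rng.uniform(-1.0, 1.0) else -1.0
            return m

        def sample_and_gate(a, b, beta=2.0, sweeps=200):
            """Clamp inputs A and B, let the output p-bit fluctuate, return its majority vote."""
            m = np.array([a, b, rng.choice([-1.0, 1.0])])
            votes = 0.0
            for _ in range(sweeps):
                m = p_bit_sweep(m, beta, clamped={0, 1})
                votes += m[2]
            return 1.0 if votes > 0 else -1.0

        for a in (-1.0, 1.0):
            for b in (-1.0, 1.0):
                print(f"A={a:+.0f} B={b:+.0f} -> C={sample_and_gate(a, b):+.0f}")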

  14. Using magnetic permeability bits to store information

    Science.gov (United States)

    Timmerwilke, John; Petrie, J. R.; Wieland, K. A.; Mencia, Raymond; Liou, Sy-Hwang; Cress, C. D.; Newburgh, G. A.; Edelstein, A. S.

    2015-10-01

    Steps are described in the development of a new magnetic memory technology, based on states with different magnetic permeability, with the capability to reliably store large amounts of information in a high-density form for decades. The advantages of using the permeability to store information include an insensitivity to accidental exposure to magnetic fields or temperature changes, both of which are known to corrupt memory approaches that rely on remanent magnetization. The high permeability media investigated consists of either films of Metglas 2826 MB (Fe40Ni38Mo4B18) or bilayers of permalloy (Ni78Fe22)/Cu. Regions of films of the high permeability media were converted thermally to low permeability regions by laser or ohmic heating. The permeability of the bits was read by detecting changes of an external 32 Oe probe field using a magnetic tunnel junction 10 μm away from the media. Metglas bits were written with 100 μs laser pulses and arrays of 300 nm diameter bits were read. The high and low permeability bits written using bilayers of permalloy/Cu are not affected by 10 Mrad(Si) of gamma radiation from a 60Co source. An economical route for writing and reading bits as small at 20 nm using a variation of heat assisted magnetic recording is discussed.

  15. The best bits in an iris code.

    Science.gov (United States)

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2009-06-01

    Iris biometric systems apply filters to iris images to extract information about iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all the bits in an iris code are equally useful. Our research is the first to present experiments documenting that some bits are more consistent than others. Different regions of the iris are compared to evaluate their relative consistency, and contrary to some previous research, we find that the middle bands of the iris are more consistent than the inner bands. The inconsistent-bit phenomenon is evident across genders and different filter types. Possible causes of inconsistencies, such as segmentation, alignment issues, and different filters are investigated. The inconsistencies are largely due to the coarse quantization of the phase response. Masking iris code bits corresponding to complex filter responses near the axes of the complex plane improves the separation between the match and nonmatch Hamming distance distributions.
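
    The matching step referred to above, a fractional Hamming distance in which occluded or unreliable bits are excluded through masks, can be written compactly; the paper's proposal amounts to additionally masking the inconsistent bit positions. A small sketch with NumPy boolean arrays standing in for iris codes (illustration only):

        import numpy as np

        def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
            """Fraction of disagreeing bits, counted only where both masks mark bits valid."""
            valid = mask_a & mask_b
            n_valid = np.count_nonzero(valid)
            if n_valid == 0:
                return 1.0                     # nothing comparable: report a non-match
            disagreements = np.count_nonzero((code_a ^ code_b) & valid)
            return disagreements / n_valid

        rng = np.random.default_rng(7)
        code1 = rng.integers(0, 2, 2048).astype(bool)
        code2 = code1.copy()
        code2[rng.choice(2048, 100, replace=False)] ^= True   # flip 100 bits
        mask = np.ones(2048, dtype=bool)
        mask[:256] = False                                     # e.g. bits flagged as unreliable
        print(fractional_hamming_distance(code1, code2, mask, mask))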

  16. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
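
    Given the weight distribution {A_i} of a code of length n, the probability of an undetected error on a binary symmetric channel with bit-error rate p is P_ud(p) = sum over i >= 1 of A_i * p^i * (1-p)^(n-i), since an error goes undetected exactly when the error pattern equals a nonzero codeword. A direct computation of this quantity, shown here for the small (7,4) Hamming code rather than the shortened codes of IEEE 802.3:

        def undetected_error_probability(weights, n, p):
            """P_ud(p) = sum_{i>=1} A_i * p**i * (1-p)**(n-i).

            `weights` maps a nonzero codeword weight i to the number A_i of
            codewords of that weight; n is the code length.
            """
            return sum(a * p ** i * (1 - p) ** (n - i) for i, a in weights.items())

        # Weight distribution of the (7,4) Hamming code: A_3 = A_4 = 7, A_7 = 1
        # (the all-zero codeword is excluded).
        hamming_7_4 = {3: 7, 4: 7, 7: 1}
        for p in (1e-5, 1e-3, 0.5):
            print(f"p = {p:g}: P_ud = {undetected_error_probability(hamming_7_4, 7, p):.3e}")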

  17. Bits extraction for palmprint template protection with Gabor magnitude and multi-bit quantization

    NARCIS (Netherlands)

    Mu, Meiru; Shao, X.; Ruan, Qiuqi; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2013-01-01

    In this paper, we propose a method of fixed-length binary string extraction (denoted by LogGM_DROBA) from low-resolution palmprint image for developing palmprint template protection technology. In order to extract reliable (stable and discriminative) bits, multi-bit equal-probability-interval

  18. Managing the Number of Tag Bits Transmitted in a Bit-Tracking RFID Collision Resolution Protocol

    Directory of Open Access Journals (Sweden)

    Hugo Landaluce

    2014-01-01

    Full Text Available Radio Frequency Identification (RFID) technology faces the problem of message collisions. The coexistence of tags sharing the communication channel degrades bandwidth, and increases the number of bits transmitted. The window methodology, which controls the number of bits transmitted by the tags, is applied to the collision tree (CT) protocol to solve the tag collision problem. The combination of this methodology with the bit-tracking technology, used in CT, improves the performance of the window and produces a new protocol which decreases the number of bits transmitted. The aim of this paper is to show how the CT bit-tracking protocol is influenced by the proposed window, and how the performance of the novel protocol improves under different conditions of the scenario. Therefore, we have performed a fair comparison of the CT protocol, which uses bit-tracking to identify the first collided bit, and the new proposed protocol with the window methodology. Simulation results show that the proposed window positively decreases the total number of bits that are transmitted by the tags, and outperforms the CT protocol latency in slow tag data rate scenarios.

  19. Managing the number of tag bits transmitted in a bit-tracking RFID collision resolution protocol.

    Science.gov (United States)

    Landaluce, Hugo; Perallos, Asier; Angulo, Ignacio

    2014-01-08

    Radio Frequency Identification (RFID) technology faces the problem of message collisions. The coexistence of tags sharing the communication channel degrades bandwidth, and increases the number of bits transmitted. The window methodology, which controls the number of bits transmitted by the tags, is applied to the collision tree (CT) protocol to solve the tag collision problem. The combination of this methodology with the bit-tracking technology, used in CT, improves the performance of the window and produces a new protocol which decreases the number of bits transmitted. The aim of this paper is to show how the CT bit-tracking protocol is influenced by the proposed window, and how the performance of the novel protocol improves under different conditions of the scenario. Therefore, we have performed a fair comparison of the CT protocol, which uses bit-tracking to identify the first collided bit, and the new proposed protocol with the window methodology. Simulation results show that the proposed window positively decreases the total number of bits that are transmitted by the tags, and outperforms the CT protocol latency in slow tag data rate scenarios.

  20. On the BER and capacity analysis of MIMO MRC systems with channel estimation error

    KAUST Repository

    Yang, Liang

    2011-10-01

    In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of a multiple-input multiple-output (MIMO) transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) systems over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. The numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.

  1. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes.

    Directory of Open Access Journals (Sweden)

    Amr M Elhelw

    Full Text Available High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate.
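
    The SLM principle this work builds on (independent of its SI-embedding contribution) is simple: multiply the data symbols by U candidate phase sequences, transmit the candidate whose time-domain waveform has the lowest PAPR, and signal which sequence was chosen. A bare-bones sketch follows; in a real system the U phase sequences are fixed and known at both ends, whereas here they are drawn at random purely for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        def papr_db(x):
            """Peak-to-average power ratio of a complex baseband waveform, in dB."""
            power = np.abs(x) ** 2
            return 10 * np.log10(power.max() / power.mean())

        def slm_select(symbols, num_candidates=8):
            """Return (waveform, SI index, PAPR in dB) of the lowest-PAPR candidate."""
            n = symbols.size
            best = None
            for u in range(num_candidates):
                # Candidate 0 uses the all-ones sequence, i.e. plain OFDM.
                phases = np.ones(n) if u == 0 else np.exp(2j * np.pi * rng.random(n))
                waveform = np.fft.ifft(symbols * phases)
                papr = papr_db(waveform)
                if best is None or papr < best[2]:
                    best = (waveform, u, papr)
            return best

        qpsk = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
        _, si_index, papr = slm_select(qpsk)
        print(f"selected SI index {si_index}, PAPR = {papr:.2f} dB")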

  2. Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes

    Science.gov (United States)

    Elhelw, Amr M.; Badran, Ehab F.

    2015-01-01

    High peak to average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate. PMID:26018504

  3. Ultra low bit-rate speech coding

    CERN Document Server

    Ramasubramanian, V

    2015-01-01

    "Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit-rates of 1 Kbits/sec and less, particularly at the lower ends of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit-rates to be viable and provide a comprehensive overview of various techniques and systems in literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.

  4. Color characters for white hot string bits

    Science.gov (United States)

    Curtright, Thomas L.; Raha, Sourav; Thorn, Charles B.

    2017-10-01

    The state space of a generic string bit model is spanned by N×N matrix creation operators acting on a vacuum state. Such creation operators transform in the adjoint representation of the color group U(N) [or SU(N) if the matrices are traceless]. We consider a system of b species of bosonic bits and f species of fermionic bits. The string, emerging in the N→∞ limit, identifies P^+ = mM√2, where M is the bit number operator, and P^- = H√2, where H is the system Hamiltonian. We study the thermal properties of this string bit system in the case H = 0, which can be considered the tensionless string limit: the only dynamics is restricting physical states to color singlets. Then the thermal partition function Tr e^{-βmM} can be identified, putting x = e^{-βm}, with a generating function χ_0^{bf}(x), for which the coefficient of x^n in its expansion about x = 0 is the number of color singlets with bit number M = n. This function is a purely group theoretic object, which is well studied in the literature. We show that at N = ∞ this system displays a Hagedorn divergence at x = 1/(b+f) with ultimate temperature T_H = m/ln(b+f). The corresponding function for finite N is perfectly finite for 0

  5. Introduction to bit slices and microprogramming

    International Nuclear Information System (INIS)

    Van Dam, A.

    1981-01-01

    Bit-slice logic blocks are fourth-generation LSI components which are natural extensions of traditional multiplexers, registers, decoders, counters, ALUs, etc. Their functionality is controlled by microprogramming, typically to implement CPUs and peripheral controllers where both speed and easy programmability are required for flexibility, ease of implementation and debugging, etc. Processors built from bit-slice logic give the designer an alternative for approaching the programmability of traditional fixed-instruction-set microprocessors with a speed closer to that of hardwired random logic. (orig.)

  6. Giga-bit optical data transmission module for Beam Instrumentation

    CERN Document Server

    Roedne, L T; Cenkeramaddi, L R; Jiao, L

    Particle accelerators require electronic instrumentation for diagnostics, assessment and monitoring during operation of the transferring and circulating beams. A sensor located near the beam provides an electrical signal related to the observable quantity of interest. The front-end electronics provides analog-to-digital conversion of the quantity being observed, and the generated data are to be transferred to the external digital back-end for data processing, display to the operators and logging. This research project investigates the feasibility of radiation-tolerant giga-bit data transmission over optic fibre for beam instrumentation applications, starting from the assessment of the state of the art technology, identification of challenges and proposal of a system level solution, which should be validated with a PCB design in an experimental setup. The targets are a radiation tolerance of 10 kGy (Si) Total Ionizing Dose (TID) over 10 years of operation and a Bit Error Rate (BER) of 10⁻⁶ or better. The findings and results of th...

  7. Robust Face Recognition using Voting by Bit-plane Images based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Dongmei Wei

    2015-08-01

    Full Text Available Plurality voting is widely employed as a combination strategy in pattern recognition. As a recently proposed technology, sparse representation based classification codes the query image as a sparse linear combination of the entire set of training images and classifies the query sample class by class, exploiting the class representation error. In this paper, an improved face recognition approach using sparse representation and plurality voting based on binary bit-plane images is proposed. After being equalized, gray images are decomposed into eight bit-plane images, and sparse representation based classification is applied separately to the five bit-plane images that carry more discriminative information. Finally, the true identity of the query image is determined by voting among the five identities obtained. Experimental results show that the proposed approach is preferable both in recognition accuracy and in recognition speed.
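
    The bit-plane decomposition feeding the per-plane classifiers can be obtained with a few shifts and masks; a minimal sketch is given below (the sparse-representation classification and voting stages are omitted, and which five planes to keep is the paper's choice, assumed here to be the more significant ones).

        import numpy as np

        def bit_planes(gray_image):
            """Decompose an 8-bit grayscale image into its eight binary bit-plane images.

            Plane 0 is the least significant bit, plane 7 the most significant.
            """
            img = gray_image.astype(np.uint8)
            return [((img >> b) & 1).astype(np.uint8) for b in range(8)]

        img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
        planes = bit_planes(img)
        informative = planes[3:]      # the five more significant planes (assumption)
        print([int(p.sum()) for p in informative])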

  8. Worst-case residual clipping noise power model for bit loading in LACO-OFDM

    KAUST Repository

    Zhang, Zhenyu

    2018-03-19

    Layered ACO-OFDM enjoys better spectral efficiency than ACO-OFDM, but its performance is challenged by residual clipping noise (RCN). In this paper, the power of RCN of LACO-OFDM is analyzed and modeled. As RCN is data-dependent, the worst-case situation is considered. A worst-case indicator is defined for relating the power of RCN and the power of noise at the receiver, wherein a linear relation is shown to be a practical approximation. An LACO-OFDM bit-loading experiment is performed to examine the proposed RCN power model for data rates of 6 to 7 Gbps. The experiment's results show that accounting for RCN has two advantages. First, it leads to better bit loading and achieves up to 59% lower overall bit-error rate (BER) than when the RCN is ignored. Second, it balances the BER across layers, which is a desired property from a channel coding perspective.

  9. Linear, Constant-rounds Bit-decomposition

    DEFF Research Database (Denmark)

    Reistad, Tord; Toft, Tomas

    2010-01-01

    When performing secure multiparty computation, tasks may often be simple or difficult depending on the representation chosen. Hence, being able to switch representation efficiently may allow more efficient protocols. We present a new protocol for bit-decomposition: converting a ring element x ∈ ℤ M...

  10. Entropy of a bit-shift channel

    NARCIS (Netherlands)

    Baggen, Stan; Balakirsky, Vladimir; Denteneer, Dee; Egner, Sebastian; Hollmann, Henk; Tolhuizen, Ludo; Verbitskiy, Evgeny

    2006-01-01

    We consider a simple transformation (coding) of an iid source called a bit-shift channel. This simple transformation occurs naturally in magnetic or optical data storage. The resulting process is not Markov of any order. We discuss methods of computing the entropy of the transformed process, and

  11. Hey! A Black Widow Spider Bit Me!

    Science.gov (United States)

    ... as soon as you can because they can make you very sick. With an adult's help, wash the bite well with soap and water. Then apply an ice pack to the bite, and try to elevate the area and keep it still to help prevent the ... black widows, you'll want to make sure that's the kind of spider that bit ...

  12. Effect of video decoder errors on video interpretability

    Science.gov (United States)

    Young, Darrell L.

    2014-06-01

    Advances in video compression technology can result in greater sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability. In the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detection of artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.

  13. Coded performance of a fast frequency-hopped noncoherent BFSK ratio statistic receiver over a Rician fading channel with partial-band interference

    OpenAIRE

    Betancourt R., Miguel A.

    1992-01-01

    Approved for public release; distribution is unlimited A frequency-hopping binary frequency shift keying (BFSK) ratio-statistic receiver with multiple hops per data bit is an effective electronic counter-countermeasures (ECCM) system against partial-band jamming interference. Interference is modeled as Gaussian noise. Orthogonal binary signaling and independent fading diversity are considered over frequency-nonselective, slow fading Rayleigh, Rician, and Gaussian channels. A forward error co...

  14. Corrected multiple upsets and bit reversals for improved 1-s resolution measurements

    International Nuclear Information System (INIS)

    Brucker, G.J.; Stassinopoulos, E.G.; Stauffer, C.A.

    1994-01-01

    Previous work has studied the generation of single and multiple errors in control and irradiated static RAM samples (Harris 6504RH) which were exposed to heavy ions for relatively long intervals of time (minutes) and read out only after the beam was shut off. The present investigation involved storing 4k × 1 bit maps every second during 1-min ion exposures at low flux rates of 10³ ions/cm²·s in order to reduce the chance of two sequential ions upsetting adjacent bits. The data were analyzed for the presence of adjacent upset bit locations in the physical memory plane, which were previously defined to constitute multiple upsets. The improved time resolution of these measurements has provided more accurate estimates of multiple upsets. The results indicate that the percentage of multiples decreased from a high of 17% in the previous experiment to less than 1% with this new experimental technique. Consecutive double and triple upsets (reversals of bits) were detected. These were caused by sequential ions hitting the same bit, with one or two reversals of state occurring in a 1-min run. In addition to these results, a status review of these same parts covering 3.5 years of imprint damage recovery is also presented

  15. Two research contributions in 64-bit computing: Testing and Applications

    OpenAIRE

    Chang, Victor

    2005-01-01

    Following the release of the Windows 64-bit and Redhat Linux 64-bit operating systems (OS) in late April 2005, this is one of the first 64-bit OS research projects completed in a British university. The objective is to investigate (1) the increase/decrease in performance compared to 32-bit computing; (2) the techniques used to develop 64-bit applications; and (3) how 64-bit computing should be used in IT and research organizations to improve their work. This paper summarizes research discoveri...

  16. Bounds on Minimum Energy per Bit for Optical Wireless Relay Channels

    Directory of Open Access Journals (Sweden)

    A. D. Raza

    2014-09-01

    Full Text Available An optical wireless relay channel (OWRC) is the classical three-node network consisting of source, relay and destination nodes with optical wireless connectivity. The channel law is assumed Gaussian. This paper studies the bounds on the minimum energy per bit required for reliable communication over an OWRC. It is shown that the capacity of an OWRC is concave and the energy per bit is monotonically increasing in the square of the peak optical signal power, and consequently the minimum energy per bit is inversely proportional to the square root of the asymptotic capacity at low signal to noise ratio. This has been used to develop upper and lower bounds on the energy per bit as a function of peak signal power, mean to peak power ratio, and variance of the channel noise. The upper and lower bounds on minimum energy per bit derived in this paper correspond respectively to the decode-and-forward lower bound and the min-max cut upper bound on OWRC capacity.

  17. Influence of transmission bit rate on performance of optical fibre communication systems with direct modulation of laser diodes

    International Nuclear Information System (INIS)

    Ahmed, Moustafa F

    2009-01-01

    This paper reports on the influence of the transmission bit rate on the performance of optical fibre communication systems employing laser diodes subjected to high-speed direct modulation. The performance is evaluated in terms of the bit error rate (BER) and the power penalty associated with increasing the transmission bit rate while keeping the transmission distance fixed. The study is based on numerical analysis of the stochastic rate equations of the laser diode and takes into account noise mechanisms in the receiver. The correlation between BER and the Q-parameter of the received signal is presented. The relative contributions of the transmitter noise and the circuit and shot noises of the receiver to BER are quantified as functions of the transmission bit rate. The results show that the power penalty at BER = 10⁻⁹ required to maintain the transmission distance increases moderately with the increase in the bit rate near 1 Gbps and at high bias currents. In this regime, the shot noise is the main contributor to BER. At higher bit rates and lower bias currents, the power penalty increases remarkably, mainly because of laser noise induced by the pseudorandom bit-pattern effect.
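
    For reference, the relation between the Q-parameter of the received signal and the BER that is commonly used in such analyses (assuming Gaussian noise statistics; the abstract does not state the exact form used) is

```latex
\mathrm{BER} \;=\; \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{Q}{\sqrt{2}}\right),
\qquad Q \approx 6 \;\Rightarrow\; \mathrm{BER} \approx 10^{-9}.
```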

  18. Single Bit Radar Systems for Digital Integration

    OpenAIRE

    Bjørndal, Øystein

    2017-01-01

    Small, low-cost radar systems have exciting applications in monitoring and imaging for the industrial, healthcare and Internet of Things (IoT) sectors. We here explore, and show the feasibility of, several single-bit square-wave radar architectures that benefit from the continuous improvement in digital technologies for system-on-chip digital integration. By analysis, simulation and measurements we explore novel and harmonic-rich continuous wave (CW), stepped-frequency CW (SFCW) and freque...

  19. A Novel Least Significant Bit First Processing Parallel CRC Circuit

    OpenAIRE

    Xiujie Qu; Zhongkai Cao; Zhanjie Yang

    2013-01-01

    In HDLC serial communication protocol, CRC calculation can first process the most or least significant bit of data. Nowadays most CRC calculation is based on the most significant bit (MSB) first processing. An algorithm of the least significant bit (LSB) first processing parallel CRC is proposed in this paper. Based on the general expression of the least significant bit first processing serial CRC, using state equation method of linear system, we derive a recursive formula by the mathematical...

  20. Method to manufacture bit patterned magnetic recording media

    Science.gov (United States)

    Raeymaekers, Bart; Sinha, Dipen N

    2014-05-13

    A method to increase the storage density on magnetic recording media by physically separating the individual bits from each other with a non-magnetic medium (so-called bit patterned media). This allows the bits to be closely packed together without creating magnetic "cross-talk" between adjacent bits. In one embodiment, ferromagnetic particles are submerged in a resin solution, contained in a reservoir. The bottom of the reservoir is made of piezoelectric material.

  1. Development of an RSFQ 4-bit ALU

    International Nuclear Information System (INIS)

    Kim, J. Y.; Baek, S. H.; Kim, S. H.; Kang, K. R.; Jung, K. R.; Lim, H. Y.; Park, J. H.; Han, T. S.

    2005-01-01

    We have developed and tested an RSFQ 4-bit Arithmetic Logic Unit (ALU) based on half-adder cells and dc switches. The ALU is a core element of a computer processor that performs arithmetic and logic operations on the operands in computer instruction words. The designed ALU had a limited set of operations: OR, AND, XOR, and ADD. It had a pipeline structure. We simulated the circuit using Josephson circuit simulation tools in order to reduce timing problems, and confirmed the correct operation of the designed ALU. We used the simulation tools XIC™, WRspice™, and Julia. The fabricated 4-bit ALU circuit had a size of 3000 μm × 1500 μm, and the chip size was 5 mm × 5 mm. The test speeds were 1000 kHz and 5 GHz. For the high-speed test, we used an eye-diagram technique. Our 4-bit ALU operated correctly up to a 5 GHz clock frequency. The chip was tested at liquid-helium temperature.

  2. Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits

    Science.gov (United States)

    Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu

    2017-03-01

    Calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution (QKD). An effective polarization-basis tracking scheme will decrease the quantum bit error rate (QBER) and improve the efficiency of a polarization-encoding QKD system. In this paper, we propose a polarization-basis tracking scheme that uses only the sifted key bits revealed while the legitimate users perform error correction, rather than introducing additional reference light or interrupting the transmission of quantum signals. A polarization-encoding fiber BB84 QKD prototype was developed to examine the validity of this scheme. An average QBER of 2.32% and a standard deviation of 0.87% were obtained during 24 hours of continuous operation.

  3. Proposed first-generation WSQ bit allocation procedure

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-09-08

    The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. Previous work provides a specific formula for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands; an explicit specification for the constant of proportionality, q, that determines the absolute bin widths was not given. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image depending on the amount of coding gain obtained by the run-length and Huffman coding stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method's effectiveness.
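
    The report's exact bin-width formula is not reproduced in this abstract. As background only (not the WSQ specification), the classical log-variance bit-allocation rule that such subband procedures build on assigns to subband k

```latex
b_k \;=\; \bar{b} \;+\; \tfrac{1}{2}\log_2\!\frac{\sigma_k^2}{\bigl(\prod_{i=1}^{K}\sigma_i^2\bigr)^{1/K}},
```

    where \bar{b} is the average rate and \sigma_k^2 the SWT subband variance; in the WSQ setting the user-set parameter r plays the role of the rate target that fixes the overall proportionality constant q.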

  4. Bit selection using field drilling data and mathematical investigation

    Science.gov (United States)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

    A drilling process cannot be completed without a drill bit, so bit selection is considered an important task in the drilling optimization process. Selecting a bit is also an important issue in planning and designing a well, simply because the cost of the drilling bit is a large share of the total cost. To perform this task, a back-propagation ANN model is developed. The model is trained with drilling bit records from several offset wells. In this project, two models are developed using the ANN: one to find the predicted IADC bit code and one to find the predicted ROP. Stage 1 finds the IADC bit code using all the given field data; the output is the targeted IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. Thus, at the end, there are two models that give the predicted ROP values and the predicted IADC bit code values.
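
    A minimal sketch of the two-stage idea described above, using scikit-learn in place of whatever ANN implementation the authors used; the feature names and data are hypothetical placeholders, since the abstract does not list the field parameters.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

# Hypothetical offset-well records: depth, WOB, RPM, flow rate are placeholders.
X = np.random.rand(200, 4)
iadc_codes = np.random.randint(0, 8, size=200)   # encoded IADC bit codes
rop = np.random.rand(200) * 30                   # rate of penetration

# Stage 1: predict the IADC bit code from the field data.
code_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
code_model.fit(X, iadc_codes)

# Stage 2: predict ROP from the field data plus the predicted IADC code.
X_rop = np.column_stack([X, code_model.predict(X)])
rop_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
rop_model.fit(X_rop, rop)

# Stage 3: feed the predicted ROP back to obtain the predicted IADC code.
X_code2 = np.column_stack([X, rop_model.predict(X_rop)])
code_model2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
code_model2.fit(X_code2, iadc_codes)
```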

  5. Study of the laws governing wear of cutter bits

    Energy Technology Data Exchange (ETDEWEB)

    Potrovka, S.

    1979-01-01

    A study was made of the laws governing the change in bit drilling performance during operation as a function of cutter-bit wear. Experiments were conducted on the ZIF-1200A drilling stand with three-cutter V-140T bits with cemented fittings and hard-facing of the rear part of the outer cutter crowns. Experimental data are presented on the change in current bit drilling performance and the corresponding wear as functions of the total number of bit rotations while drilling gray granite. Dependences are also given for the current mechanical drilling velocity and the mechanical drilling velocity per rotation on the total number of bit rotations, as well as for the mechanical drilling velocity on the footage drilled per bit in gray granite. It was established that the efficient time for the bit to remain on the face, both for the minimum cost of 1 m of drilling and for the maximum per-trip velocity, depends on the parameters of the drilling regime, the strength of the rocks, the drilling depth, the standard cost rate of operating the equipment per minute, and the cost of the drill bit. Experimental data were obtained which make it possible to rapidly determine the efficient time for lifting the bit using simple computing resources.

  6. Autonomously stabilized entanglement between two superconducting quantum bits

    Science.gov (United States)

    Shankar, S.; Hatridge, M.; Leghtas, Z.; Sliwa, K. M.; Narla, A.; Vool, U.; Girvin, S. M.; Frunzio, L.; Mirrahimi, M.; Devoret, M. H.

    2013-12-01

    Quantum error correction codes are designed to protect an arbitrary state of a multi-qubit register from decoherence-induced errors, but their implementation is an outstanding challenge in the development of large-scale quantum computers. The first step is to stabilize a non-equilibrium state of a simple quantum system, such as a quantum bit (qubit) or a cavity mode, in the presence of decoherence. This has recently been accomplished using measurement-based feedback schemes. The next step is to prepare and stabilize a state of a composite system. Here we demonstrate the stabilization of an entangled Bell state of a quantum register of two superconducting qubits for an arbitrary time. Our result is achieved using an autonomous feedback scheme that combines continuous drives along with a specifically engineered coupling between the two-qubit register and a dissipative reservoir. Similar autonomous feedback techniques have been used for qubit reset, single-qubit state stabilization, and the creation and stabilization of states of multipartite quantum systems. Unlike conventional, measurement-based schemes, the autonomous approach uses engineered dissipation to counteract decoherence, obviating the need for a complicated external feedback loop to correct errors. Instead, the feedback loop is built into the Hamiltonian such that the steady state of the system in the presence of drives and dissipation is a Bell state, an essential building block for quantum information processing. Such autonomous schemes, which are broadly applicable to a variety of physical systems, as demonstrated by the accompanying paper on trapped ion qubits, will be an essential tool for the implementation of quantum error correction.

  7. Widely tunable wavelength conversion with extinction ratio enhancement using PCF-based NOLM

    DEFF Research Database (Denmark)

    Kwok, C.H.; Lee, S.H.; Chow, K.K.

    2005-01-01

    A widely tunable wavelength conversion scheme has been demonstrated using a 64-m-long dispersion-flattened high-nonlinearity photonic crystal fiber in a nonlinear optical loop mirror. A wavelength conversion range of over 60 nm with a 10-Gb/s return-to-zero signal was obtained with the output extinction ratio (ER) maintained above 13 dB. The proposed scheme can also improve the output ER and remove the bit-error-rate floor if a degraded signal is used....

  8. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS

    International Nuclear Information System (INIS)

    Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.

    1982-06-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors

  9. A 12-bit 1-MS/s 26-μW SAR ADC for Sensor Applications

    Science.gov (United States)

    Chung, Yung-Hui; Yen, Chia-Wei; Tsai, Cheng-Hsun

    2018-01-01

    This chapter presents an energy-efficient 12-bit 1-MS/s successive approximation register analog-to-digital converter (ADC) for sensor applications. A programmable dynamic comparator is proposed to suppress static current and maintain good linearity. A hybrid charge redistribution digital-to-analog converter is proposed to decrease the total capacitance, which reduces the power consumption of the input and reference buffers. The total input capacitance of the proposed ADC is only 700 fF, which greatly reduces the total power consumption of the analog front-end circuits. The 12-bit ADC is fabricated using 0.18-μm complementary metal-oxide-semiconductor technology, and it consumes only 26 μW from a 1 V supply at 1-MS/s. The measured signal-to-noise-and-distortion ratio (SNDR) and spurious-free dynamic range (SFDR) are 60.1 and 72.6 dB, respectively. The measured effective number of bits (ENOB) for a 100 kHz input frequency is 9.7 bits. At the Nyquist input frequency, the measured SNDR and SFDR are 59.7 and 71 dB, respectively. The ENOB is maintained at 9.6 bits and the figure-of-merit is 33.5 fJ/conversion-step.

  10. A novel bit-quad-based Euler number computing algorithm.

    Science.gov (United States)

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms.
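
    For context, the classical bit-quad (Gray) method that such algorithms refine counts 2×2 patterns with one foreground pixel (C1), three foreground pixels (C3), and the two diagonal patterns (CD). The following sketch implements that classical method, not the paper's two-pattern algorithm.

```python
import numpy as np

def euler_number_bit_quads(binary_img: np.ndarray, connectivity: int = 4) -> int:
    """Euler number of a binary image via classical bit-quad counting.
    Foreground pixels are 1. Gray's formulas:
      E4 = (C1 - C3 + 2*CD) / 4,   E8 = (C1 - C3 - 2*CD) / 4."""
    img = np.pad(binary_img.astype(np.uint8), 1)            # zero border
    # The four pixels of every 2x2 quad.
    a = img[:-1, :-1]; b = img[:-1, 1:]; c = img[1:, :-1]; d = img[1:, 1:]
    s = a + b + c + d
    c1 = np.count_nonzero(s == 1)
    c3 = np.count_nonzero(s == 3)
    cd = np.count_nonzero((s == 2) & (a == d))               # diagonal quads
    if connectivity == 4:
        return (c1 - c3 + 2 * cd) // 4
    return (c1 - c3 - 2 * cd) // 4

# A single foreground pixel has Euler number 1.
assert euler_number_bit_quads(np.array([[1]])) == 1
```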

  11. Medication Errors

    Science.gov (United States)


  12. Bit-string physics: A novel theory of everything

    Energy Technology Data Exchange (ETDEWEB)

    Noyes, H.P.

    1994-08-01

    We encode the quantum numbers of the standard model of quarks and leptons using constructed bit-strings of length 256. These label a growing universe of bit-strings of growing length that eventually construct a finite and discrete space-time with reasonable cosmological properties. Coupling constants and mass ratios, computed from closure under XOR and a statistical hypothesis, using only ħ, c and m_p to fix our units of mass, length and time in terms of standard (meter-kilogram-second) metrology, agree with the first four to seven significant figures of accepted experimental results. Finite and discrete conservation laws and commutation relations insure the essential characteristics of relativistic quantum mechanics, including particle-antiparticle pair creation. The correspondence limit in (free space) Maxwell electromagnetism and Einstein gravitation is consistent with the Feynman-Dyson-Tanimura "proof."

  13. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    Science.gov (United States)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  14. Entangled solitons and stochastic Q-bits

    International Nuclear Information System (INIS)

    Rybakov, Yu.P.; Kamalov, T.F.

    2007-01-01

    Stochastic realization of the wave function in quantum mechanics with the inclusion of a soliton representation of extended particles is discussed. Two-soliton configurations are used for constructing entangled states in a generalized quantum mechanics dealing with extended particles endowed with nontrivial spin S. With the entangled-soliton construction introduced in the nonlinear spinor field model, the Einstein-Podolsky-Rosen (EPR) correlation is calculated and shown to coincide with the quantum mechanical one for spin-1/2 particles. The concept of stochastic q-bits is used for quantum computing modelling.

  15. Investigation of PDC bit failure based on stick-slip vibration analysis of drilling string system plus drill bit

    Science.gov (United States)

    Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei

    2018-03-01

    The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. Therefore, the study of PDC bit failure based on stick-slip vibration analysis is crucial to prolonging the service life of PDC bits and improving ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drilling string system plus the PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and cutting behaviors of the PDC bit are introduced. The results reveal that the PDC bit is more prone to failure than other drilling tools owing to the severer stick-slip vibration. Moreover, reducing WOB (weight on bit) and increasing the driving torque can effectively mitigate the stick-slip vibration of the PDC bit. Therefore, PDC bit failure can be alleviated by optimizing the drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional-impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed using an engineering example. It can be concluded that torsional impact can mitigate stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.

  16. Object tracking based on bit-planes

    Science.gov (United States)

    Li, Na; Zhao, Xiangmo; Liu, Ying; Li, Daxiang; Wu, Shiqian; Zhao, Feng

    2016-01-01

    Visual object tracking is one of the most important components in computer vision. The main challenge for robust tracking is to handle illumination change, appearance modification, occlusion, motion blur, and pose variation. But in surveillance videos, factors such as low resolution, high levels of noise, and uneven illumination further increase the difficulty of tracking. To tackle this problem, an object tracking algorithm based on bit-planes is proposed. First, intensity and local binary pattern features represented by bit-planes are used to build two appearance models, respectively. Second, in the neighborhood of the estimated object location, a region that is most similar to the models is detected as the tracked object in the current frame. In the last step, the appearance models are updated with new tracking results in order to deal with environmental and object changes. Experimental results on several challenging video sequences demonstrate the superior performance of our tracker compared with six state-of-the-art tracking algorithms. Additionally, our tracker is more robust to low resolution, uneven illumination, and noisy video sequences.

  17. Physical Roots of It from Bit

    Science.gov (United States)

    Berezin, Alexander A.

    2003-04-01

    Why is there Something rather than Nothing? From Pythagoras ("everything is number") to Wheeler ("it from bit"), the theme of ultimate origin stresses the primordiality of the Ideal Platonic World (IPW) of mathematics. Even the popular "quantum tunnelling out of nothing" can specify "nothing" only as (essentially) the IPW. The IPW exists everywhere (but nowhere in particular) and logically precedes space, time, matter or any "physics" in any conceivable universe. This leads to the propositional conjecture (axiom?) that the (meta)physical "Platonic Pressure" of the infinitude of numbers acts as the engine for the self-generation of the physical universe directly out of mathematics: cosmogenesis is driven by the very fact of IPW inexhaustibility. While physics in other quantum branches of the inflating universe (Megaverse) can be (arbitrarily) different from ours, number theory (and the rest of the IPW) is not: it is unique, absolute, immutable and infinitely resourceful. Let the (infinite) totality of microstates ("its") of the entire Megaverse form a countable set. Since countable sets are hierarchically inexhaustible (Cantor's "fractal branching"), each single "it" still has an infinite tail of non-overlapping IPW-based "personal labels". Thus, each "bit" ("it") is infinitely and uniquely resourceful: a possible venue for eliminating the ergodicity basis of the eternal-return cosmological argument. Physics (in any subuniverse) may be limited only by inherent impossibilities residing in the IPW; e.g., the insolvability of the Continuum Problem may be the IPW foundation of quantum indeterminacy.

  18. Low bit rates image compression via adaptive block downsampling and super resolution

    Science.gov (United States)

    Chen, Honggang; He, Xiaohai; Ma, Minglang; Qing, Linbo; Teng, Qizhi

    2016-01-01

    A low bit rate image compression framework based on adaptive block downsampling and super resolution (SR) is presented. At the encoder side, the downsampling mode and quantization mode of each 16×16 macroblock are determined adaptively using a rate-distortion optimization method, and the downsampled macroblocks are then compressed by standard JPEG. At the decoder side, a sparse representation-based SR algorithm is applied to recover full-resolution macroblocks from the decoded blocks. The experimental results show that the proposed framework outperforms standard JPEG and state-of-the-art downsampling-based compression methods in terms of both subjective and objective comparisons. Specifically, the peak signal-to-noise ratio gain of the proposed framework over JPEG reaches 2 to 4 dB at low bit rates, and the critical bit rate relative to JPEG is raised to about 2.3 bits per pixel. Moreover, the proposed framework can be extended to other block-based compression schemes.

  19. Design and Implementation of Decimation Filter for 13-bit Sigma-Delta ADC Based on FPGA

    Directory of Open Access Journals (Sweden)

    Khalid Khaleel Mohammed

    2016-10-01

    Full Text Available A 13-bit Sigma-Delta ADC for a signal band of 40 kHz is designed in MATLAB Simulink and then implemented using the Xilinx System Generator tool. The first-order Sigma-Delta modulator is designed to work on a signal band of 40 kHz at an oversampling ratio (OSR) of 256 with a sampling frequency of 20.48 MHz. The proposed decimation filter design consists of a second-order Cascaded Integrator Comb (CIC) filter followed by two finite impulse response (FIR) filters. This architecture reduces the need for multipliers, which would otherwise require a very large area. The architecture implements a decimation ratio of 256 and allows a maximum resolution of 13 bits at the output of the filter. The decimation filter was designed and tested in the Xilinx System Generator tool, which shortens the design cycle by directly generating efficient VHDL code. The results obtained show that the overall Sigma-Delta ADC is able to achieve an ENOB (Effective Number of Bits) of 13.71 bits and an SNR of 84.3 dB
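
    The reported resolution is consistent with the usual conversion from SNR (or SNDR) to effective number of bits, assuming the standard full-scale sine-wave relation:

```latex
\mathrm{ENOB} \;=\; \frac{\mathrm{SNR} - 1.76\,\mathrm{dB}}{6.02\,\mathrm{dB/bit}}
\;=\; \frac{84.3 - 1.76}{6.02} \;\approx\; 13.7\ \text{bits}.
```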

  20. Research on unequal error protection with punctured turbo codes in jpeg image transmission system

    Directory of Open Access Journals (Sweden)

    Lakhdar Moulay A.

    2007-01-01

    Full Text Available An investigation of Unequal Error Protection (UEP) methods applied to JPEG image transmission using turbo codes is presented. The JPEG image is partitioned into two groups, i.e., DC components and AC components, according to their respective sensitivity to channel noise. The highly sensitive DC components are better protected with a lower coding rate, while the less sensitive AC components use a higher coding rate. When the s-random interleaver and the s-random odd-even interleaver are combined with odd-even puncturing, the local rate of the turbo code can easily be fixed. We propose to modify the design of the s-random interleaver to fix the number of parity bits. A new UEP scheme for the Soft Output Viterbi Algorithm (SOVA) is also proposed to improve the performance in terms of Bit Error Rate (BER) and Peak Signal to Noise Ratio (PSNR). Simulation results are given to demonstrate how the UEP schemes outperform the equal error protection (EEP) scheme in terms of BER and PSNR.

  1. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  2. A fixed/variable bit-rate data compression architecture

    Science.gov (United States)

    Zweigle, Gregary C.; Venbrux, Jack; Yeh, Pen-Shu

    1993-01-01

    A VLSI architecture for an adaptive data compression encoder capable of sustaining fixed or variable bit-rate output has been developed. There are three modes of operation: lossless with variable bit-rate, lossy with fixed bit-rate and lossy with variable bit-rate. For lossless encoding, the implementation is identical to the USES chip designed for Landsat 7. Obtaining a fixed bit-rate is achieved with a lossy DPCM algorithm using adaptive, nonuniform scalar quantization. In lossy mode, variable bit-rate coding uses the lossless sections of the encoder for post-DPCM entropy coding. The encoder shows excellent compression performance in comparison to other current data compression techniques. No external tables or memory are required for operation.

  3. Cross Institutional Cooperation on a Shared Bit Repository

    DEFF Research Database (Denmark)

    Zierau, Eld; Kejser, Ulla Bøgvad

    2013-01-01

    This paper explores how independent institutions, such as archives and libraries, can cooperate on managing a shared bit repository with bit preservation, in order to use their resources for preservation in a more cost-effective way. It uses the OAIS Reference Model to provide a framework for systematically analysing the institutions' technical and organisational requirements for a remote bit repository. Instead of viewing a bit repository simply as Archival Storage for the institutions' repositories, we argue for viewing it as consisting of a subset of functions from all entities defined by the OAIS Reference Model. The work is motivated by and used in a current Danish feasibility study for establishing a national bit repository. The study revealed that, depending on their missions and the collections they hold, the institutions have varying requirements, e.g. for bit safety, accessibility...

  4. Logic Operators on Delta-Sigma Bit-Streams

    Directory of Open Access Journals (Sweden)

    Axel Klein

    2018-01-01

    Full Text Available The fundamental logic operations NOT, OR, AND, and XOR processing bit-streams of ΔΣ-modulators are discussed herein. The resulting bit-streams are evaluated on the basis of their mean values and their standard deviations. Mathematical expressions are presented for their mean values; i.e., the logic function XOR results in the negated multiplication of two bipolar bit-streams, and the logic function AND results in the multiplication of two unipolar bit-streams. As the results are valid only for bit-streams with independent high-frequency components, the normed cross-product is utilized to evaluate the independence of the high-frequency components. In order to achieve high independence between input bit-streams representing the same value, the quantization noise is shaped differently. Multiple strategies are examined, and ΔΣ-modulators with different designs are chosen as the best-suited solution. The operations are evaluated on a testbench.
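
    A minimal numerical sketch of the XOR result quoted above (not the paper's testbench): two first-order ΔΣ modulators encode DC values x and y as ±1 bit-streams. Here a random quantizer dither stands in for the paper's differently designed modulators, simply to keep the high-frequency components of the two streams independent; the mean of the XOR output, mapped back to bipolar values, then approximates −x·y.

```python
import numpy as np

def ds_bitstream(x: float, n: int, rng: np.random.Generator) -> np.ndarray:
    """First-order delta-sigma modulator producing a bipolar (+1/-1) bit-stream
    whose mean tracks the DC input x in [-1, 1]. The quantizer dither only
    decorrelates the streams; the feedback loop keeps the mean at x."""
    s, out = 0.0, np.empty(n)
    for i in range(n):
        y = 1.0 if s + rng.uniform(-0.5, 0.5) >= 0.0 else -1.0
        out[i] = y
        s += x - y
    return out

rng = np.random.default_rng(0)
x, y, n = 0.5, -0.25, 200_000
a, b = ds_bitstream(x, n, rng), ds_bitstream(y, n, rng)

# XOR on the underlying {0,1} bits equals -a*b on the bipolar values.
xor_bipolar = -a * b
print(np.mean(xor_bipolar), -x * y)   # both close to 0.125
```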

  5. Development and testing of a Mudjet-augmented PDC bit.

    Energy Technology Data Exchange (ETDEWEB)

    Black, Alan (TerraTek, Inc.); Chahine, Georges (DynaFlow, Inc.); Raymond, David Wayne; Matthews, Oliver (Security DBS); Grossman, James W.; Bertagnolli, Ken (US Synthetic); Vail, Michael (US Synthetic)

    2006-01-01

    This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.

  6. Bit-commitment-based quantum coin flipping

    International Nuclear Information System (INIS)

    Nayak, Ashwin; Shor, Peter

    2003-01-01

    In this paper we focus on a special framework for quantum coin-flipping protocols, bit-commitment-based protocols, within which almost all known protocols fit. We show a lower bound of 1/16 for the bias in any such protocol. We also analyze a sequence of multiround protocols that tries to overcome the drawbacks of the previously proposed protocols in order to lower the bias. We show an intricate cheating strategy for this sequence, which leads to a bias of 1/4. This indicates that a bias of 1/4 might be optimal in such protocols, and also demonstrates that a more clever proof technique may be required to show this optimality

  7. Quantum bit commitment with cheat sensitive binding and approximate sealing

    Science.gov (United States)

    Li, Yan-Bing; Xu, Sheng-Wei; Huang, Wei; Wan, Zong-Jie

    2015-04-01

    This paper proposes a cheat-sensitive quantum bit commitment scheme based on single photons, in which Alice commits a bit to Bob. Here, Bob's probability of successfully cheating to obtain the committed bit before the opening phase becomes close to 1/2 (no better than a guess) as the number of single photons used is increased. And if Alice alters her committed bit after the commitment phase, her cheating will be detected with a probability that becomes close to 1 as the number of single photons used is increased. The scheme is easy to realize with present-day technology.

  8. Improved Bit Rate Control for Real-Time MPEG Watermarking

    Directory of Open Access Journals (Sweden)

    Pranata Sugiri

    2004-01-01

    Full Text Available The alteration of compressed video bitstream due to embedding of digital watermark tends to produce unpredictable video bit rate variations which may in turn lead to video playback buffer overflow/underflow or transmission bandwidth violation problems. This paper presents a novel bit rate control technique for real-time MPEG watermarking applications. In our experiments, spread spectrum watermarks are embedded in the quantized DCT domain without requantization and motion reestimation to achieve fast watermarking. The proposed bit rate control scheme evaluates the combined bit lengths of a set of multiple watermarked VLC codewords, and successively replaces watermarked VLC codewords having the largest increase in bit length with their corresponding unmarked VLC codewords until a target bit length is achieved. The proposed method offers flexibility and scalability, which are neglected by similar works reported in the literature. Experimental results show that the proposed bit rate control scheme is effective in meeting the bit rate targets and capable of improving the watermark detection robustness for different video contents compressed at different bit rates.
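
    A compact sketch of the replacement strategy described above, under the simplifying assumption that each watermarked/unmarked VLC codeword pair is represented only by its bit length (the real scheme operates on actual MPEG VLC codewords):

```python
def control_bit_rate(marked_lengths, unmarked_lengths, target_bits):
    """Greedy bit-rate control: start from the fully watermarked set and keep
    reverting the codeword with the largest bit-length increase back to its
    unmarked form until the combined length meets the target.
    Returns a list of flags (True = keep watermark) and the final length."""
    keep = [True] * len(marked_lengths)
    total = sum(marked_lengths)
    # Codewords ordered by how much the watermark inflated them, largest first.
    order = sorted(range(len(marked_lengths)),
                   key=lambda i: marked_lengths[i] - unmarked_lengths[i],
                   reverse=True)
    for i in order:
        if total <= target_bits:
            break
        keep[i] = False
        total -= marked_lengths[i] - unmarked_lengths[i]
    return keep, total

# Example: five codewords, watermarking added a few bits to some of them.
keep, total = control_bit_rate([12, 9, 15, 7, 11], [10, 9, 11, 7, 10], target_bits=50)
```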

  9. Bit rate and pulse width dependence of four-wave mixing of short optical pulses in semiconductor optical amplifiers

    DEFF Research Database (Denmark)

    Diez, S.; Mecozzi, A.; Mørk, Jesper

    1999-01-01

    We investigate the saturation properties of four-wave mixing of short optical pulses in a semiconductor optical amplifier. By varying the gain of the optical amplifier, we find a strong dependence of both the conversion efficiency and the signal-to-background ratio on pulse width and bit rate. In particular...

  10. Medical error

    African Journals Online (AJOL)

    QuickSilver

    It is only when mistakes are recognised that learning can occur... All our previous medical training has taught us to fear error, as error is associated with blame. This fear may lead to concealment and this in turn can lead to fraud'. How real this fear is! All of us, during our medical training, have had the maxim 'prevention is.

  11. Modeling for write synchronization in bit patterned media recording

    Science.gov (United States)

    Lin, Maria Yu; Chan, Kheong Sann; Chua, Melissa; Zhang, Songhua; Kui, Cai; Elidrissi, Moulay Rachid

    2012-04-01

    Bit patterned media recording (BPMR) is a contender for next generation technology after conventional granular magnetic recording (CGMR) can no longer sustain the continued areal density growth. BPMR has several technological hurdles that need to be overcome, among them the problem of write synchronization. With CGMR, grains are randomly distributed and occur almost all over the media. In contrast, BPMR has grains patterned into a regular lattice on the media with an approximate 50% duty cycle. Hence only about a quarter of the area is filled with magnetic material. During writing, the clock must be synchronized to the islands or the written-in error rate becomes unacceptably large and the system fails. Maintaining synchronization during writing is a challenge as the system is not able to read and write simultaneously. Hence reads must occur between writes frequently enough to re-synchronize the writing clock to the islands. In this work, we study the requirements on the lengths of the synchronization and data sectors in a BPMR system using an advanced model for BPMR, taking into consideration different spindle motor speed variations, which are the main cause of mis-synchronization.

  12. Observational Evidence for Two Cosmological Predictions Made by Bit-String Physics; TOPICAL

    International Nuclear Information System (INIS)

    Noyes, H. Pierre

    2001-01-01

    A decade ago bit-string physics predicted that the baryon/photon ratio at the time of nucleogenesis is η = 1/256⁴ and that the dark matter/baryonic matter ratio is Ω_DM/Ω_B = 12.7. Accepting that the normalized Hubble constant is constrained observationally to lie in the range 0.6 < h₀ < 0.8, this translates into a prediction that 0.325 > Ω_M > 0.183. This and a prediction by E.D. Jones, using a model-independent argument and ideas with which bit-string physics is not inconsistent, that the cosmological constant is Ω_Λ = 0.6 ± 0.1 are in reasonable agreement with recent cosmological observations, including the BOOMERANG data

  13. Homodyne detection for holographic memories with bit-by-bit storage

    Science.gov (United States)

    Maire, G.; Pauliat, G.; Roosen, G.

    2006-10-01

    Holographic memories with bit-by-bit storage are an attractive alternative to the conventional page-oriented holographic approach because of their simplified optical architecture. We propose and validate here a readout procedure suited to such memories, based on homodyne detection of the amplitude diffracted by the holograms. This increases the amount of useful signal detected and is therefore promising for increasing the data transfer rate of these memories.

  14. The Deliverability of the BIT Programme at Lahti UAS in Training BIT Experts

    OpenAIRE

    Nghiem, Duc Long

    2014-01-01

    Information Technology has become a vital and indispensable part of business in every industry. In fact, IT is the primary factor that differentiates many businesses from their competitors. Organizations usually rely on IT for several strategic business solutions such as communication, information management, customer relationship management, and marketing. In the near future, the labor market will see a rising demand for BIT experts who possess both business expertise and IT sk...

  15. Influence of Transmitting Pointing Errors on High Speed WDM-AMI-Is-OWC Transmission System

    Science.gov (United States)

    Shatnawi, Abdallah Ahmad; Bin Mohd Warip, Mohd Nazri; Safar, Anuar Mat

    2017-12-01

    Inter-satellite communication is one of the revolutionary techniques that can be used to transmit high-speed data between satellites. However, space turbulences such as transmitting pointing errors play a significant role when designing inter-satellite communication systems. These turbulences can cause shutdown of the inter-satellite link due to increased attenuation during data transmission through the link. The present work aims to develop an integrated data transmission system incorporating alternate mark inversion (AMI), wavelength division multiplexing (WDM), and a polarization interleaving (PI) scheme for transmitting data at 160 Gbps over an inter-satellite link of 1,000 km under the influence of space turbulences. The performance of the 160 Gbps transmission over distances up to 1,000 km is evaluated under the influence of space turbulences by means of signal to noise ratio (SNR), total received power, bit error rate and eye diagrams.

  16. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2011-06-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.

  17. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.

  18. Configurable Electronics with Low Noise and 14-bit Dynamic Range for Photodiode-based Photon Detectors

    CERN Document Server

    Müller, H; Yin, Z; Zhou, D; Cao, X; Li, Q; Liu, Y; Zou, F; Skaali, B; Awes, T C

    2006-01-01

    We describe the principles and measured performance characteristics of custom configurable 32-channel shaper/digitizer Front End Electronics (FEE) cards with 14-bit dynamic range for use with gain-adjustable photon detectors. The electronics has been designed for the PHOS calorimeter of ALICE with avalanche photodiode (APD) readout operated at −25 °C ambient temperature and a signal shaping time of 1 μs. The electronics has also been adopted by the EMCal detector of ALICE with the same APD readout, but operated at an ambient temperature of +20 °C and with a shaping time of 100 ns. The CR-RC² signal shapers on the FEE cards are implemented in discrete logic on a 10-layer board with two shaper sections for each input channel. The two shaper sections, with a gain ratio of 16:1, are digitized by 10-bit ADCs and provide an effective dynamic range of 14 bits. Gain adjustment for each individual APD is available through 32 bias voltage control registers of 10-bit range. The fixed gains and shaping times of the pole-z...
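
    The quoted 14-bit effective dynamic range follows directly from combining the two 10-bit ranges through the 16:1 gain ratio, since a factor of 16 corresponds to 4 extra bits:

```latex
N_\mathrm{eff} \;=\; 10\ \text{bits} + \log_2(16) \;=\; 10 + 4 \;=\; 14\ \text{bits}.
```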

  19. Multiple Memory Structure Bit Reversal Algorithm Based on Recursive Patterns of Bit Reversal Permutation

    Directory of Open Access Journals (Sweden)

    K. K. L. B. Adikaram

    2014-01-01

    Full Text Available With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Because the bit reversal process requires a considerable share of the processing time of the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and the second halves of the bit reversal permutation (BRP) and stated that it may seriously impact the cache performance of the computer if implemented. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or a single one-dimensional memory structure. We also found a new index mapping, even after the recursive splitting of the BRP into equal-sized slots. The four-array and four-vector versions of the BRA with the new index mapping showed 34% and 16% improvements in performance relative to similar versions of Elster's linear BRA, which uses a single one-dimensional memory structure.
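
    For reference, the baseline single-array bit-reversal permutation that such optimizations start from can be written as follows (a simple sketch, not the multiple-memory-structure algorithm of the paper):

```python
def bit_reversal_permutation(n_bits: int) -> list:
    """Return the bit-reversal permutation of the indices 0 .. 2**n_bits - 1,
    i.e. index i maps to the integer whose n_bits-wide binary representation
    is the reverse of i's."""
    n = 1 << n_bits
    perm = [0] * n
    for i in range(n):
        rev, x = 0, i
        for _ in range(n_bits):
            rev = (rev << 1) | (x & 1)
            x >>= 1
        perm[i] = rev
    return perm

# Example: for n_bits = 3 the permutation is [0, 4, 2, 6, 1, 5, 3, 7].
assert bit_reversal_permutation(3) == [0, 4, 2, 6, 1, 5, 3, 7]
```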

  20. Bit-Wise Arithmetic Coding For Compression Of Data

    Science.gov (United States)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.

  1. Support research for development of improved geothermal drill bits

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, R.R.; Barker, L.M.; Green, S.J.; Winzenried, R.W.

    1977-06-01

    Progress in the background research needed to develop drill bits for the geothermal environment is reported. Construction of a full-scale geothermal wellbore simulator and a geothermal seal testing machine was completed. Simulated tests were conducted on full-scale bits. Screening tests on elastomeric seals under geothermal conditions are reported. (JGB)

  2. Cross Institutional Cooperation on a Shared Bit Repository

    DEFF Research Database (Denmark)

    Zierau, Eld; Kejser, Ulla Bøgvad

    2010-01-01

    This paper explores how independent institutions, such as archives and libraries, can cooperate on managing a shared bit repository with bit preservation, in order to use their resources for preservation in a more cost-effective way. It uses the OAIS Reference Model to provide a framework...

  3. Circuit and interconnect design for high bit-rate applications

    NARCIS (Netherlands)

    Veenstra, H.

    2006-01-01

    This thesis presents circuit and interconnect design techniques and design flows that address the most difficult and ill-defined aspects of the design of ICs for high bit-rate applications. Bottlenecks in interconnect design, circuit design and on-chip signal distribution for high bit-rate

  4. APL portability in 16 bits microprocessors

    International Nuclear Information System (INIS)

    Cordova Costa, Felisa

    1981-01-01

    The present work deals with an automatic program translation method as a solution to the software portability problem. The source machine is a minicomputer of the SEMS MITRA range; the target machines are three 16-bit microprocessors: INTEL 8086, MOTOROLA 68000 and ZILOG Z-8000. The software to be translated is written in macro-assembly language (MAS) and consists of an operating system, an APL interpreter and some other software tools. The translation method uses a machine-independent intermediate language describing the program in the source language. This intermediate language, consisting of a set of macro-instructions, is then assembled using a link library; this library defines the macro-instructions which create the target microprocessor object code. The whole translation is carried out by the source machine, which produces, after linkage editing, a table memory map (IME). Thereafter the loadable object code is transferred to the target machine. For optimization or input-output purposes, some modules can be written in the target machine assembly language and processed by a specific assembler on the target machine, or on the source machine if the latter possesses a cross-assembler; the resulting binary codes are then merged with the binary codes produced during the automatic translation phase. The method proposed here may be extended to any 16-bit computer by a simple change of the macro-instruction library. This work allows the creation of an APL machine based on microprocessors, preserving the original software and so maintaining its initial reliability. This work has led to a closer examination of hardware problems connected with the various target machine configurations. Difficulties met during this work mainly arise from the different behaviour of the target machines, especially indicator and flag setting, addressing modes and interruption mechanisms. This highlights the need to design new microprocessors either partially user-microprogrammable, or with some functions

  5. CAMAC based 4-channel 12-bit digitizer

    International Nuclear Information System (INIS)

    Srivastava, Amit K; Sharma, Atish; Raval, Tushar; Reddy, D Chenna

    2010-01-01

    With the development of fusion research, a large number of diagnostics are being used to understand the complex behaviour of plasma. During a discharge, several diagnostics demand a high sampling rate and high bit resolution to acquire data on rapid changes in plasma parameters. For the requirements of such fast diagnostics, a 4-channel simultaneous-sampling, high-speed, 12-bit CAMAC digitizer has been designed and developed, which has several important features for application in CAMAC based nuclear instrumentation. The module has an independent ADC per channel for simultaneous sampling and digitization, and 512 Ksamples of RAM per channel for on-board storage. The digitizer has been designed for event-based acquisition, and the acquisition window gives post-trigger as well as pre-trigger (software selectable) data that is useful for analysis. It is a transient digitizer and can be operated either in pre/post-trigger mode or in burst mode. The record mode and the active memory size are selected through software commands to suit the current application. The module can be used to acquire data at a high sampling rate for short-duration discharges, e.g. 512 ms at 1 MSPS. The module can also be used for long-duration discharges at a low sampling rate, e.g. 512 seconds at 1 kSPS. This paper describes the design of the digitizer module, the development of the VHDL code for the hardware logic, the Graphical User Interface (GUI) and important features of the module from an application point of view. The digitizer has CPLD based hardware logic, which provides flexibility in configuring the module for different sampling rates and different pre/post-trigger samples through the GUI. The digitizer can be operated with either an internal (for testing/acquisition) or an external (synchronized acquisition) clock and trigger. The digitizer has differential inputs with a bipolar input range of ±5 V and is being used with a sampling rate of 1 MSamples Per Second (MSPS) per channel, but it also supports higher sampling rates up to 3 MSPS per channel. A

  6. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results for the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
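    The book's own simulations concern coded systems; as a minimal illustration of the kind of Monte Carlo BER-versus-SNR comparison involved, the sketch below estimates the bit error rate of uncoded, Gray-mapped 16-QAM over an AWGN channel and compares it with the standard approximation. The mapping, block size and SNR values are illustrative assumptions, not material from the book.

```python
# Hedged sketch: Monte Carlo BER of uncoded, Gray-mapped 16-QAM over AWGN,
# compared with the usual Q-function approximation.  Parameters are illustrative.
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ber_16qam_awgn(ebn0_db, n_bits=200_000, seed=0):
    rng = np.random.default_rng(seed)
    k = 4                                    # bits per 16-QAM symbol
    levels = np.array([-3, -1, 1, 3])        # 4-PAM levels on each I/Q rail
    gray = {0b00: -3, 0b01: -1, 0b11: 1, 0b10: 3}   # Gray mapping per rail
    inv_gray = {v: c for c, v in gray.items()}

    bits = rng.integers(0, 2, size=(n_bits // k, k))
    i_sym = np.array([gray[(b[0] << 1) | b[1]] for b in bits])
    q_sym = np.array([gray[(b[2] << 1) | b[3]] for b in bits])

    es = 2 * np.mean(levels ** 2)            # average symbol energy (I + Q rails)
    n0 = es / (k * 10 ** (ebn0_db / 10))
    noise_std = sqrt(n0 / 2)                 # noise variance per real dimension
    r_i = i_sym + rng.normal(0, noise_std, i_sym.shape)
    r_q = q_sym + rng.normal(0, noise_std, q_sym.shape)

    def demap(r):                            # nearest-level slicing, then Gray de-mapping
        hard = levels[np.argmin(np.abs(r[:, None] - levels[None, :]), axis=1)]
        codes = np.array([inv_gray[v] for v in hard])
        return np.stack([(codes >> 1) & 1, codes & 1], axis=1)

    errors = np.sum(demap(r_i) != bits[:, :2]) + np.sum(demap(r_q) != bits[:, 2:])
    return errors / bits.size

if __name__ == "__main__":
    for ebn0 in (4, 8, 12):
        sim = ber_16qam_awgn(ebn0)
        approx = 0.75 * qfunc(sqrt(0.8 * 10 ** (ebn0 / 10)))   # common 16-QAM BER approximation
        print(f"Eb/N0 = {ebn0:2d} dB   simulated BER = {sim:.3e}   approx = {approx:.3e}")
```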

  7. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  8. A Novel Least Significant Bit First Processing Parallel CRC Circuit

    Directory of Open Access Journals (Sweden)

    Xiujie Qu

    2013-01-01

    Full Text Available In the HDLC serial communication protocol, CRC calculation can process either the most or the least significant bit of the data first. Most CRC calculation today is based on most-significant-bit (MSB) first processing. An algorithm for least-significant-bit (LSB) first processing parallel CRC is proposed in this paper. Based on the general expression for LSB-first serial CRC and using the state-equation method for linear systems, we derive a recursive formula by mathematical deduction. The recursive formula is applicable to any number of bits processed in parallel and to any generator polynomial. Based on this formula, we present a parallel CRC calculation circuit and implement it with VHDL on an FPGA. The results verify the accuracy and effectiveness of the method.
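    The parallel recursion itself is not reproduced in the abstract; the sketch below only shows the LSB-first (reflected) bit-serial reference computation that such a parallel circuit must reproduce. The polynomial (CRC-16/ARC, reflected form 0xA001) and initial value are arbitrary example choices, not taken from the paper.

```python
# Hedged reference model: LSB-first (reflected) bit-serial CRC.
# Polynomial and init value are illustrative; the paper's parallel recursion
# must produce the same result as this serial loop.
def crc16_lsb_first(data: bytes, init: int = 0x0000, poly_reflected: int = 0xA001) -> int:
    crc = init
    for byte in data:
        crc ^= byte                    # least significant data bit enters first
        for _ in range(8):
            if crc & 1:                # bit leaving the register selects the XOR
                crc = (crc >> 1) ^ poly_reflected
            else:
                crc >>= 1
    return crc

if __name__ == "__main__":
    print(hex(crc16_lsb_first(b"123456789")))   # CRC-16/ARC check value: 0xbb3d
```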

  9. A Memristor as Multi-Bit Memory: Feasibility Analysis

    Directory of Open Access Journals (Sweden)

    O. Bass

    2015-06-01

    Full Text Available The use of emerging memristor materials for advanced electrical devices such as multi-valued logic is expected to outperform today's binary-logic digital technologies. We show here an example of such a non-binary device with the design of a multi-bit memory. While conventional memory cells can store only one bit, memristor-based multi-bit cells can store more information within a single device, thus increasing the information storage density. Such devices can potentially utilize the non-linear resistance of memristor materials for efficient information storage. We analyze the performance of such memory devices based on their expected variations in order to determine the viability of memristor-based multi-bit memory. A design for a read/write scheme and a simple model for this cell lay the grounds for full integration of a memristor multi-bit memory cell.

  10. IMAGE STEGANOGRAPHY DENGAN METODE LEAST SIGNIFICANT BIT (LSB

    Directory of Open Access Journals (Sweden)

    M. Miftakul Amin

    2014-02-01

    Full Text Available Security in delivering a secret message is an important factor in the spread of information in cyberspace. To protect a message so that it is delivered only to the party entitled to it, a message-concealment mechanism is needed. The purpose of this study was to hide a secret text message in digital images in true-colour 24-bit RGB format. The method used to insert the secret message is LSB (Least Significant Bit) substitution, replacing the last (8th) bit of each RGB colour component. The RGB image format was chosen because it offers a greater embedding capacity than a grayscale image: three message bits can be inserted into each pixel. Tests show that hiding messages in a digital image does not significantly reduce the quality of the image, and that the hidden message can be extracted again, so messages can be delivered to the recipient safely.
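    A minimal sketch of the LSB embedding idea described above, using numpy only. The image is treated as an H x W x 3 uint8 RGB array (in practice it would be loaded with a library such as Pillow), and the 32-bit length prefix used for framing is an illustrative choice, not part of the cited paper.

```python
# Hedged sketch of LSB steganography on a 24-bit RGB image (numpy arrays only).
import numpy as np

def embed(img: np.ndarray, message: bytes) -> np.ndarray:
    header = len(message).to_bytes(4, "big")                 # illustrative framing
    bits = np.unpackbits(np.frombuffer(header + message, dtype=np.uint8))
    flat = img.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits      # overwrite the 8th (last) bit
    return flat.reshape(img.shape)

def extract(img: np.ndarray) -> bytes:
    flat = img.reshape(-1)
    length = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    payload_bits = flat[32:32 + 8 * length] & 1
    return np.packbits(payload_bits).tobytes()

if __name__ == "__main__":
    cover = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    stego = embed(cover, b"secret text")
    assert extract(stego) == b"secret text"
    # Each colour component changes by at most 1, so image quality is barely affected.
    print("max per-channel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))
```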

  11. Installation of MCNP on 64-bit parallel computers

    International Nuclear Information System (INIS)

    Meginnis, A.B.; Hendricks, J.S.; McKinney, G.W.

    1995-01-01

    The Monte Carlo radiation transport code MCNP has been successfully ported to two 64-bit workstations, the SGI and DEC Alpha. We found the biggest problem for installation on these machines to be Fortran and C mismatches in argument passing. Correction of these mismatches enabled, for the first time, dynamic memory allocation on 64-bit workstations. Although the 64-bit hardware is faster because 8 bytes are processed at a time rather than 4 bytes, we found no speed advantage in true 64-bit coding versus implicit double precision when porting an existing code to the 64-bit workstation architecture. We did find that PVM multitasking is very successful and represents a significant performance enhancement for scientific workstations.

  12. Source-optimized irregular repeat accumulate codes with inherent unequal error protection capabilities and their application to scalable image transmission.

    Science.gov (United States)

    Lan, Ching-Fu; Xiong, Zixiang; Narayanan, Krishna R

    2006-07-01

    The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases.

  13. A little bit of legal history

    CERN Multimedia

    2010-01-01

    On Monday 18 October, a little bit of legal history will be made when the first international tripartite agreement between CERN and its two Host States is signed. This agreement, which has been under negotiation since 2004, clarifies the working conditions of people employed by companies contracted to CERN. It will facilitate the management of service contracts both for CERN and its contractors.   Ever since 1965, when CERN first crossed the border into France, the rule of territoriality has applied. This means that anyone working for a company contracted to CERN whose job involves crossing the border is subject to the employment legislation of both states. The new agreement simplifies matters by making only one legislation apply per contract, that of the country in which most of the work is carried out. This is good for CERN, it’s good for the companies, and it’s good for their employees. It is something that all three parties to the agreement have wanted for some time, and I...

  14. Modern X86 assembly language programming 32-bit, 64-bit, SSE, and AVX

    CERN Document Server

    Kusswurm, Daniel

    2014-01-01

    Modern X86 Assembly Language Programming shows the fundamentals of x86 assembly language programming. It focuses on the aspects of the x86 instruction set that are most relevant to application software development. The book's structure and sample code are designed to help the reader quickly understand x86 assembly language programming and the computational capabilities of the x86 platform. Major topics of the book include the following: 32-bit core architecture, data types, internal registers, memory addressing modes, and the basic instruction set; x87 core architecture, register stack, special

  15. Adaptive Error Resilience for Video Streaming

    Directory of Open Access Journals (Sweden)

    Lakshmi R. Siruvuri

    2009-01-01

    Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although it requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprised of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
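    The paper's adaptation rule is not given in the abstract; the sketch below is only one plausible way to pick the number of Reed-Solomon parity symbols from a client-reported loss rate. The RS(255, k) framing, margin factor and thresholds are assumptions for illustration.

```python
# Hedged sketch of feedback-driven parity adaptation (not the paper's exact rule):
# choose nsym so the RS code can correct roughly twice the observed average number
# of symbol errors per 255-symbol block (an RS code corrects nsym // 2 errors).
def parity_symbols_for(loss_rate: float, n: int = 255) -> int:
    expected_errors = loss_rate * n
    nsym = min(n - 1, max(2, int(4 * expected_errors + 0.5)))   # margin factor of 2
    return nsym + (nsym % 2)                                    # keep nsym even

if __name__ == "__main__":
    for reported in (0.0, 0.005, 0.02, 0.05):
        print(f"client-reported loss {reported:.3f}  ->  nsym = {parity_symbols_for(reported)}")
```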

  16. Integer Representations towards Efficient Counting in the Bit Probe Model

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet

    2011-01-01

    Abstract We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both in the worst case and in the average case. A counter is space-optimal if it represents any number in the range [0,...,2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters, where we only need to represent numbers in the range [0,...,L] for some integer L ... bits, we define the efficiency...
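    The space-optimal construction itself is not given in the abstract; the sketch below only illustrates the cost metric of the bit probe model, by counting the bits read and written when incrementing an ordinary n-bit binary counter, whose worst case (n reads, n writes) is what the cited representation improves on.

```python
# Illustrative sketch of the bit-probe cost metric (not the paper's construction):
# increment a plain n-bit binary counter and count bits read and written.
def increment(bits):
    """bits[0] is the least significant bit; returns (reads, writes)."""
    reads = writes = 0
    for i in range(len(bits)):
        reads += 1
        if bits[i] == 0:
            bits[i] = 1
            writes += 1
            return reads, writes
        bits[i] = 0                    # carry: 1 -> 0, continue to the next bit
        writes += 1
    return reads, writes               # overflow wraps around to zero

if __name__ == "__main__":
    n = 8
    counter = [0] * n
    worst = (0, 0)
    for _ in range(2 ** n):
        worst = max(worst, increment(counter))
    print("worst-case (reads, writes) for a plain binary counter:", worst)
    # The cited representation reads at most n - 1 bits and writes at most 3.
```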

  17. Triple-Error-Correcting Codec ASIC

    Science.gov (United States)

    Jones, Robert E.; Segallis, Greg P.; Boyd, Robert

    1994-01-01

    Coder/decoder constructed on a single integrated-circuit chip. Handles data in a variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce the cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.

  18. Micromagnetic studies for bit patterned media above 2 Tbit/in²

    International Nuclear Information System (INIS)

    Zhang Kaiming; Wei Dan

    2012-01-01

    Bit patterned media (BPM) recording is a candidate for extremely high density magnetic recording. A micromagnetic model is built up to analyze the phase diagram of the correct-write-in condition in BPM above 2 Tb/in² fabricated by lithography or ion irradiation methods. The target of the study is to acquire the relationship between the recording performance and the magnetic properties of the media. The medium includes the polycrystalline grains and grain boundaries. In BPM fabricated by lithography with an FCT structure, two phase diagrams of the correct-write-in condition are found for the anisotropy angular distribution Δθ, the ratio of tetragonal anisotropy K22 to uniaxial anisotropy K1, and the uniaxial anisotropy distribution ΔK1. In BPM fabricated by ion irradiation methods, two phase diagrams of the correct-write-in condition are analyzed for the ratios of saturation magnetization M's/Ms, anisotropy field H'k/Hk and exchange field H'ex/Hex between the ion-irradiated region and the bit islands. - Research highlights: → Two types of BPM, regular or with irradiated regions, are studied using micromagnetics. → Optimum parameters are found for regular BPM with non-magnetic inter-bit spacing. → Recording phase diagrams versus Ms, Hk and A* in irradiated regions are found.

  19. The Economics of BitCoin Price Formation

    OpenAIRE

    Ciaian, Pavel; Rajcaniova, Miroslava; Kancs, d'Artis

    2014-01-01

    This is the first article that studies BitCoin price formation by considering both the traditional determinants of currency price, e.g., market forces of supply and demand, and digital currencies specific factors, e.g., BitCoin attractiveness for investors and users. The conceptual framework is based on the Barro (1979) model, from which we derive testable hypotheses. Using daily data for five years (2009–2015) and applying time-series analytical mechanisms, we find that market forces and Bit...

  20. Fitness Probability Distribution of Bit-Flip Mutation.

    Science.gov (United States)

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
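    As a small illustration of the quantity discussed (for the Onemax case only, computed by direct convolution rather than via the paper's Krawtchouk-polynomial machinery), the sketch below gives the exact probability distribution of the Onemax fitness after uniform bit-flip mutation of a string with f ones out of n bits; the parameter values in the example are arbitrary.

```python
# Hedged sketch: exact Onemax fitness distribution after uniform bit-flip mutation,
# obtained by convolving two binomials (ones lost vs. zeros gained).
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def onemax_after_mutation(n, f, p):
    """Return {new_fitness: probability} for a string with f ones out of n bits,
    each bit flipped independently with probability p."""
    dist = {}
    for lost in range(f + 1):                # ones flipped to zeros
        for gained in range(n - f + 1):      # zeros flipped to ones
            prob = binom_pmf(lost, f, p) * binom_pmf(gained, n - f, p)
            new_f = f - lost + gained
            dist[new_f] = dist.get(new_f, 0.0) + prob
    return dist

if __name__ == "__main__":
    dist = onemax_after_mutation(n=10, f=7, p=0.1)
    assert abs(sum(dist.values()) - 1.0) < 1e-12
    expected = sum(k * q for k, q in dist.items())
    print("E[fitness after mutation] =", round(expected, 4))   # f*(1-p) + (n-f)*p = 6.6
```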

  1. Different Mass Processing Services in a Bit Repository

    DEFF Research Database (Denmark)

    Jurik, Bolette; Zierau, Eld

    2011-01-01

    This paper investigates how a general bit repository mass processing service using different programming models and platforms can be specified. Such a service is needed in large data archives, especially libraries, where different ways of doing mass processing are needed for different digital library tasks. Different hardware platforms as a basis for mass processing will usually already exist at libraries as part of a solution for long-term bit preservation. The investigation of a general mass processing service shows that different aspects of mass processing are too dependent...

  2. Bit Manipulation Accelerator for Communication Systems Digital Signal Processor

    Directory of Open Access Journals (Sweden)

    Jeong Sug H

    2005-01-01

    Full Text Available This paper proposes application-specific instructions and their bit manipulation unit (BMU), which efficiently support scrambling, convolutional encoding, puncturing, interleaving, and bit-stream multiplexing. The proposed DSP employs a BMU supporting parallel shift and XOR (exclusive-OR) operations and bit insertion/extraction operations on multiple data. The proposed architecture has been modeled in VHDL and synthesized using the SEC 0.18 μm standard cell library; the gate count of the BMU is only about 1700 gates. Performance comparisons show that the number of clock cycles can be reduced for scrambling, convolutional encoding, and interleaving compared with existing DSPs.
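    The BMU's parallel shift-and-XOR datapath targets operations such as scrambling; as a software reference for what that hardware computes, the sketch below implements a bit-serial additive scrambler. The polynomial x^7 + x^4 + 1 and the seed are purely example choices and are not stated in the paper.

```python
# Hedged reference for an additive (synchronous) scrambler of the kind a
# shift-and-XOR bit manipulation unit accelerates.  Polynomial and seed are
# illustrative assumptions.
def scramble(bits, seed=0x7F):
    state = seed & 0x7F                            # 7-bit LFSR state
    out = []
    for b in bits:
        fb = ((state >> 6) ^ (state >> 3)) & 1     # taps for x^7 + x^4 + 1
        out.append(b ^ fb)                         # keystream XORed onto the data
        state = ((state << 1) | fb) & 0x7F
    return out

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
    tx = scramble(data)
    rx = scramble(tx)                              # an additive scrambler is its own descrambler
    assert rx == data
    print(tx)
```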

  3. Refractive Errors

    Science.gov (United States)

    Refractive Errors in Children ... birth and can occur at any age. The prevalence of myopia is low in US children under the age of eight, but much higher ...

  4. 2015 Big Windy, Oregon 4-Band 8 Bit Imagery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data are LiDAR orthorectified aerial photographs (8-bit GeoTIFF format) within the Oregon Lidar Consortium Big Windy project area. The imagery coverage is...

  5. 2014 Metro, Oregon 4-Band 8 Bit Imagery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data are LiDAR orthorectified aerial photographs (8-bit GeoTIFF format) within the Oregon Lidar Consortium Portland project area. The imagery coverage is...

  6. Experimental bit commitment based on quantum communication and special relativity.

    Science.gov (United States)

    Lunghi, T; Kaniewski, J; Bussières, F; Houlmann, R; Tomamichel, M; Kent, A; Gisin, N; Wehner, S; Zbinden, H

    2013-11-01

    Bit commitment is a fundamental cryptographic primitive in which Bob wishes to commit a secret bit to Alice. Perfectly secure bit commitment between two mistrustful parties is impossible through asynchronous exchange of quantum information. Perfect security is however possible when Alice and Bob split into several agents exchanging classical and quantum information at times and locations suitably chosen to satisfy specific relativistic constraints. Here we report on an implementation of a bit commitment protocol using quantum communication and special relativity. Our protocol is based on [A. Kent, Phys. Rev. Lett. 109, 130501 (2012)] and has the advantage that it is practically feasible with arbitrarily large separations between the agents in order to maximize the commitment time. By positioning agents in Geneva and Singapore, we obtain a commitment time of 15 ms. A security analysis considering experimental imperfections and finite statistics is presented.

  7. Instrumented Bit for In-Situ Spectroscopy (IBISS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to build and critically test the Instrumented Bit for In-Situ Spectroscopy (IBISS), a novel system for in-situ, rapid analyses of planetary subsurface...

  8. 2012 Sandy River, Oregon Natural Color 8 Bit Imagery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data are LiDAR orthorectified aerial photographs (8-bit GeoTIFF format) within the Oregon Lidar Consortium Sandy River project area. The imagery coverage is...

  9. Pseudo-random bit generator based on Chebyshev map

    Science.gov (United States)

    Stoyanov, B. P.

    2013-10-01

    In this paper, we study a pseudo-random bit generator based on two Chebyshev polynomial maps. The novel derivative algorithm shows perfect statistical properties, as established by a number of statistical tests.
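    The abstract does not give the generator's exact construction, so the sketch below is only a generic illustration of the underlying idea: iterate two Chebyshev polynomial maps on [-1, 1] and derive one output bit per step by comparing their orbits. The polynomial degrees, seeds and bit-extraction rule are assumptions, not the paper's algorithm.

```python
# Hedged illustration (not the cited algorithm): a pseudo-random bit generator
# driven by two Chebyshev polynomial maps T_k(x) = cos(k * arccos(x)).
from math import cos, acos

def chebyshev(k, x):
    return cos(k * acos(x))

def prbg(n_bits, x0=0.123456, y0=0.654321, kx=4, ky=5):
    x, y, out = x0, y0, []
    for _ in range(n_bits):
        x = chebyshev(kx, x)
        y = chebyshev(ky, y)
        out.append(1 if x > y else 0)      # one bit per iteration of the coupled maps
    return out

if __name__ == "__main__":
    bits = prbg(64)
    print("".join(map(str, bits)), " ones:", sum(bits))
```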

  10. Fast physical random bit generation with chaotic semiconductor lasers

    Science.gov (United States)

    Uchida, Atsushi; Amano, Kazuya; Inoue, Masaki; Hirano, Kunihito; Naito, Sunao; Someya, Hiroyuki; Oowada, Isao; Kurashige, Takayuki; Shiki, Masaru; Yoshimori, Shigeru; Yoshimura, Kazuyuki; Davis, Peter

    2008-12-01

    Random number generators in digital information systems make use of physical entropy sources such as electronic and photonic noise to add unpredictability to deterministically generated pseudo-random sequences. However, there is a large gap between the generation rates achieved with existing physical sources and the high data rates of many computation and communication systems; this is a fundamental weakness of these systems. Here we show that good quality random bit sequences can be generated at very fast bit rates using physical chaos in semiconductor lasers. Streams of bits that pass standard statistical tests for randomness have been generated at rates of up to 1.7 Gbps by sampling the fluctuating optical output of two chaotic lasers. This rate is an order of magnitude faster than that of previously reported devices for physical random bit generators with verified randomness. This means that the performance of random number generators can be greatly improved by using chaotic laser devices as physical entropy sources.

  11. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  12. Content Progressive Coding of Limited Bits/pixel Images

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren

    1999-01-01

    A new lossless context-based method for content progressive coding of limited bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.

  13. 8-Bit Gray Scale Images of Fingerprint Image Groups

    Science.gov (United States)

    NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (Web, free access)   The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  14. Quantum states representing perfectly secure bits are always distillable

    International Nuclear Information System (INIS)

    Horodecki, Pawel; Augusiak, Remigiusz

    2006-01-01

    It is proven that recently introduced states with perfectly secure bits of cryptographic key (private states representing secure bit) [K. Horodecki et al., Phys. Rev. Lett. 94, 160502 (2005)] as well as its multipartite and higher dimension generalizations always represent distillable entanglement. The corresponding lower bounds on distillable entanglement are provided. We also present a simple alternative proof that for any bipartite quantum state entanglement cost is an upper bound on a distillable cryptographic key in a bipartite scenario

  15. VCSEL Scaling, Laser Integration on Silicon, and Bit Energy

    Science.gov (United States)

    2017-03-01

    VCSEL Scaling, Laser Integration on Silicon, and Bit Energy. D.G. Deppe, Ja. Leshin, and Je. Leshin (CREOL, College of Optics & Photonics). Figure 1 shows the electronic circuitry and the comparison key used to analyze photonic bit energies for transceivers in data centers. Keywords: VCSELs, nanoscale lasers, optical interconnects, silicon photonics.

  16. Comparison and status of 32 bit backplane bus architectures

    International Nuclear Information System (INIS)

    Muller, K.D.

    1985-01-01

    With the introduction of 32-bit microprocessors, several new 32-bit backplane bus architectures have been developed and are in the process of standardization. Among these are Future Bus (IEEE P896.1), VME-Bus (IEEE 1014), MULTIBUS II, Nu-Bus and Fastbus (IEEE 960). The paper describes and compares the main features of these bus architectures and mentions the status of national and international standardization efforts.

  17. 4-bit digital to analog converter using R-2R ladder and binary weighted resistors

    Science.gov (United States)

    Diosanto, J.; Batac, M. L.; Pereda, K. J.; Caldo, R.

    2017-06-01

    The design of a 4-bit digital-to-analog converter using two methods, binary-weighted resistors and an R-2R ladder, is presented in this paper. The main components used in constructing both circuits were resistors of different values, an operational amplifier (LM741) and single-pole double-throw switches. Both circuits were designed in MULTISIM software, to test the circuit in its ideal application, and laid out and fabricated on a printed circuit board using FRITZING software. Implementing both systems in an actual circuit makes it possible to determine and compare the advantages and disadvantages of each. It was found that the binary-weighted circuit is the more accurate DAC, having a lower percentage error of 0.267%, compared to the R-2R ladder circuit, which has a minimum percentage error of 4.16%.
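    As a numerical companion to the comparison above, the sketch below computes the ideal output of a 4-bit DAC for a given input code and the percentage error of a measured output. The reference voltage and the "measured" readings are made-up illustrative values, not data from the paper; both the binary-weighted and R-2R topologies target the same ideal levels.

```python
# Hedged numerical sketch: ideal 4-bit DAC transfer function and percentage error.
# Vref and the example readings are hypothetical.
VREF, N = 5.0, 4

def ideal_output(code: int) -> float:
    # Same as the weighted sum over set bits: Vref / 2**(bit position + 1)
    return VREF * code / 2 ** N

def percent_error(measured: float, code: int) -> float:
    ideal = ideal_output(code)
    return abs(measured - ideal) / ideal * 100 if ideal else float("nan")

if __name__ == "__main__":
    for code, measured in [(0b0101, 1.60), (0b1111, 4.62)]:    # hypothetical readings
        print(f"code {code:04b}: ideal {ideal_output(code):.4f} V, "
              f"error {percent_error(measured, code):.2f} %")
```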

  18. Scavenging ratios

    International Nuclear Information System (INIS)

    Krey, P.W.; Toonkel, L.E.

    1977-01-01

    Total 90 Sr fallout is adjusted for dry deposition, and scavenging ratios are calculated at Seattle, New York, and Fayetteville, Ark. Stable-lead scavenging ratios are also presented for New York. These ratios show large scatter, but average values are generally inversely proportional to precipitation. Stable-lead ratios decrease more rapidly with precipitation than do those of 90 Sr, a decrease reflecting a lesser availability of lead to the scavenging processes

  19. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions, which occur at high error rates, for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern. This is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumption, performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using a cyclic redundancy check (CRC) code. The definition of a burst error is introduced using three different models; among them, the mathematical model is used in this study. A probability density function of the burst-error length b is proposed. The performance of CRC-n codes is evaluated and analyzed using this density function through a computer simulation model of burst errors within CRC blocks. The simulation results show that the mean block burst error tends to approach the pattern of burst errors generated by random bit errors.
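    As a small companion to the analysis described, the sketch below injects bursts of length b into a data block and checks whether a CRC detects them, using CRC-32 from Python's zlib purely as an example code (the study itself concerns CRC-n performance for DS1 systems, and its burst model is not reproduced here).

```python
# Hedged simulation sketch: inject bursts of length b into a CRC-32-protected block
# and count undetected errors.  A degree-32 CRC detects every burst of length <= 32.
import random
import zlib

def flip_burst(data: bytearray, start_bit: int, b: int) -> None:
    pattern = random.getrandbits(b) | 1 | (1 << (b - 1))   # force errors at both ends
    for i in range(b):
        if (pattern >> i) & 1:
            data[(start_bit + i) // 8] ^= 1 << ((start_bit + i) % 8)

def trial(block_len=128, b=16) -> bool:
    block = bytearray(random.randbytes(block_len))
    crc = zlib.crc32(block)
    flip_burst(block, random.randrange(8 * block_len - b), b)
    return zlib.crc32(block) == crc        # True means the burst went undetected

if __name__ == "__main__":
    random.seed(1)
    for b in (8, 16, 32, 40):
        undetected = sum(trial(b=b) for _ in range(20_000))
        print(f"burst length {b:2d}: undetected {undetected} / 20000")
```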

  20. Golden Ratio

    Indian Academy of Sciences (India)

    Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many structures. This ratio comes from Fibonacci numbers. In this article, we explore this ...

  1. Partial Transition Sequence Algorithms for Reducing Peak to Average Power Ratio in the Next Generation Wireless Communications Systems

    Directory of Open Access Journals (Sweden)

    Mokhtaria Mesri

    2017-03-01

    Full Text Available The unprecedented scientific and technical advancements, along with the ever-growing needs of humanity, have resulted in a revolution in the field of communication. Hence, single-carrier waveforms are being replaced by multi-carrier systems such as Orthogonal Frequency Division Multiplexing (OFDM) and Generalized Frequency Division Multiplexing (GFDM), which are nowadays commonly implemented. In the OFDM system, orthogonally placed subcarriers are used to carry the data from the transmitter to the receiver end. The presence of a guard band in these systems helps in dealing with the problem of intersymbol interference (ISI), and noise is minimized by the larger number of subcarriers. However, the large Peak to Average Power Ratio (PAPR) of these signals has undesirable effects on the system: PAPR itself can cause interference and degradation of the Bit Error Rate (BER). Several techniques are used to reduce the high Peak to Average Power Ratio and the resulting Bit Error Rate problems, but each technique has its own disadvantages, such as complexity, in-band distortion and out-of-band radiation in OFDM and GFDM signals. In this paper, the emphasis is put on GFDM systems as well as on the methods that are meant to reduce the PAPR problem and improve efficiency.
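    As a concrete illustration of the PAPR figure discussed above (written for plain OFDM; GFDM pulse shaping is not modelled), the sketch below generates random QPSK subcarrier symbols, forms the time-domain signal with an IFFT, and evaluates the peak-to-average power ratio. Subcarrier count and oversampling factor are illustrative choices.

```python
# Hedged sketch: measure the PAPR of an OFDM symbol built from random QPSK subcarriers.
import numpy as np

def ofdm_papr_db(n_subcarriers=256, oversample=4, rng=None):
    rng = rng or np.random.default_rng(0)
    qpsk = (rng.choice([-1, 1], n_subcarriers) + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
    spectrum = np.zeros(n_subcarriers * oversample, dtype=complex)
    spectrum[:n_subcarriers // 2] = qpsk[:n_subcarriers // 2]      # zero-pad the middle of the
    spectrum[-(n_subcarriers // 2):] = qpsk[n_subcarriers // 2:]   # spectrum to oversample in time
    x = np.fft.ifft(spectrum)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

if __name__ == "__main__":
    papr = [ofdm_papr_db(rng=np.random.default_rng(seed)) for seed in range(1000)]
    print(f"mean PAPR {np.mean(papr):.2f} dB, 99th percentile {np.percentile(papr, 99):.2f} dB")
```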

  2. Binary Biometrics: An Analytic Framework to Estimate the Bit Error Probability under Gaussian Assumption

    NARCIS (Netherlands)

    Kelkboom, E.J.C.; Molina, G.; Kevenaar, T.A.M.; Veldhuis, Raymond N.J.; Jonker, Willem

    2008-01-01

    In recent years the protection of biometric data has gained increased interest from the scientific community. Methods such as the helper data system, fuzzy extractors, fuzzy vault and cancellable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic

  3. Transmission modulation system for mobile phone reduces battery consumption without increasing bit error rate

    NARCIS (Netherlands)

    Moretti, M.; Janssen, G.J.M.

    2000-01-01

    The transmission modulation system minimizes the wasted 'out of band' power. The digital data (1) to be transmitted is fed via a pulse response filter (2) to a mixer (4) where it modulates a carrier wave (4). The digital data is also fed via a delay circuit (5) and identical filter (6) to a second

  4. Bowen ratio-energy balance associated errors in vineyards under dripping irrigation Erros associados pela razão de Bowen ao balanço de energia em parreirais sob irrigação por gotejamento

    Directory of Open Access Journals (Sweden)

    José Monteiro Soares

    2007-08-01

    Full Text Available This study was conducted at the Bebedouro Experimental Station in Petrolina-PE, Brazil, to evaluate the errors associated with the application of the Bowen ratio-energy balance method in a 3-year-old vineyard (Vitis vinifera L.) grown in a trellis system and irrigated by dripping. The field measurements were taken during the fruiting cycle (July to November 2001), which was divided into eight phenological stages. A micrometeorological tower was mounted in a row of grape plants, on which sensors of net radiation, global solar radiation and wind speed were installed about 1.0 m above the canopy. Also on the tower, two psychrometers were installed at two levels (0.5 and 1.8 m) above the vineyard canopy. Two soil heat flux plates were buried 0.02 m beneath the soil surface. All these sensors were connected to a Campbell Scientific 21X data logger, programmed to take readings every 5 seconds and store averages every 15 minutes. A comparative analysis was made among four Bowen ratio accepting/rejecting rules, according to the methodology proposed by Spano et al. (2000): βr1 - values of β calculated by the Bowen (1926) equation; βr2 - values of β as proposed by the Verma et al. (1978) equation; βr3 - exclusion of β values as recommended by Unland et al. (1996); and βr4 - exclusion of β values calculated as proposed by Bowen (1926) that fall outside the interval (-0.7 ...

  5. A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems

    Directory of Open Access Journals (Sweden)

    Dong Shi-Wei

    2007-01-01

    Full Text Available A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistaged bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, then it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.

  6. A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems

    Science.gov (United States)

    Zhu, Li-Ping; Yao, Yan; Zhou, Shi-Dong; Dong, Shi-Wei

    2007-12-01

    A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistaged bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, then it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
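    For reference, the conventional greedy (incremental) bit-loading procedure that the proposed multistage algorithm is compared against can be sketched as follows; the per-tone SNRs, SNR gap and bit cap are illustrative values, not the ADSL test-loop parameters used in the paper.

```python
# Hedged sketch of conventional greedy bit loading for a multicarrier system:
# repeatedly give one more bit to the subchannel that needs the least extra power
# until the target total bit rate is met.  All numbers are illustrative.
import heapq

def power_for_bits(bits, snr_linear, gap=10 ** (9.8 / 10)):
    """Power needed to carry `bits` bits on a subchannel with the given SNR (gap approximation)."""
    return gap * (2 ** bits - 1) / snr_linear

def greedy_bit_loading(snrs_db, target_bits, max_bits_per_tone=15):
    snrs = [10 ** (s / 10) for s in snrs_db]
    bits = [0] * len(snrs)
    # heap of (incremental power to add the next bit, tone index)
    heap = [(power_for_bits(1, s) - power_for_bits(0, s), i) for i, s in enumerate(snrs)]
    heapq.heapify(heap)
    total_power = 0.0
    for _ in range(target_bits):
        d_p, i = heapq.heappop(heap)
        bits[i] += 1
        total_power += d_p
        if bits[i] < max_bits_per_tone:
            nxt = power_for_bits(bits[i] + 1, snrs[i]) - power_for_bits(bits[i], snrs[i])
            heapq.heappush(heap, (nxt, i))
    return bits, total_power

if __name__ == "__main__":
    tone_snrs_db = [30, 27, 24, 21, 18, 15, 12, 9]       # hypothetical per-tone SNRs
    alloc, p = greedy_bit_loading(tone_snrs_db, target_bits=40)
    print("bit allocation:", alloc, " total power:", round(p, 2))
```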

  7. Demonstration of error-free 25Gb/s duobinary transmission using a colourless reflective integrated modulator.

    Science.gov (United States)

    Lai, Caroline P; Naughton, Alan; Ossieur, Peter; Antony, Cleitus; Smith, David W; Borghesani, Anna; Moodie, David G; Maxwell, Graeme; Healey, Peter; Poustie, Alistair; Townsend, Paul D

    2013-01-14

    To realise novel, low-cost, photonic technologies that can support 100 Gb/s Ethernet in next-generation dense wavelength-division-multiplexed metro transport networks, we are developing arrayed photonic integrated circuits that leverage colourless reflective modulators. Here, we demonstrate a single-channel, hybrid reflective electroabsorption modulator-based device, showing error-free 25.3 Gb/s duobinary transmission with bit-error rates less than 1 × 10⁻¹² over 35 km of standard single-mode fibre. We further confirm the modulator's colourless operation over the ITU C-band, with a 1.2 dB variation in required optical signal-to-noise ratio over this wavelength range.

  8. Performance Analysis of Multi-Hop Heterodyne FSO Systems over Malaga Turbulent Channels with Pointing Error Using Mixture Gamma Distribution

    KAUST Repository

    Alheadary, Wael Ghazy

    2017-11-16

    This work investigates the end-to-end performance of a free-space optical amplify-and-forward relaying system using heterodyne detection over Malaga turbulence channels in the presence of pointing error. In order to overcome the analytical difficulties of the proposed composite channel model, we employ the mixture Gamma (MG) distribution. The proposed model gives a highly accurate and tractable approximation just by adjusting some parameters. More specifically, we derive a new closed-form expression for the average bit error rate employing rectangular quadrature amplitude modulation, in terms of the MG distribution and a generalized power series of the Meijer G-function. The closed form has been validated numerically and asymptotically at high signal-to-noise ratio.

  9. Performance analysis of relay-assisted all-optical FSO networks over strong atmospheric turbulence channels with pointing errors

    KAUST Repository

    Yang, Liang

    2014-12-01

    In this study, we consider a relay-assisted free-space optical communication scheme over strong atmospheric turbulence channels with misalignment-induced pointing errors. The links from the source to the destination are assumed to be all-optical links. Assuming a variable gain relay with amplify-and-forward protocol, the electrical signal at the source is forwarded to the destination with the help of this relay through all-optical links. More specifically, we first present a cumulative density function (CDF) analysis for the end-to-end signal-to-noise ratio. Based on this CDF, the outage probability, bit-error rate, and average capacity of our proposed system are derived. Results show that the system diversity order is related to the minimum value of the channel parameters.

  10. Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver

    Science.gov (United States)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2017-12-01

    An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free space optical (FSO) channel, considering the effect of pointing error between the transmitter and the receiver. The analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) with pointing error. The results are evaluated numerically in terms of signal-to-noise plus multi-access interference (MAI) ratio, BER and power penalty due to pointing error. It is noticed that the OCDMA FSO system is highly affected by pointing error, with significant power penalty at BERs of 10⁻⁶ and 10⁻⁹. For example, the penalty at a BER of 10⁻⁹ is found to be 9 dB, corresponding to a normalized pointing error of 1.4 for 16 users with a processing gain of 256, and is reduced to 6.9 dB when the processing gain is increased to 1,024.

  11. A 12-bit 500KSPS cyclic ADC for CMOS image sensor

    Science.gov (United States)

    Li, Zhaohan; Wang, Gengyun; Peng, Leli; Ma, Cheng; Chang, Yuchun

    2015-03-01

    At present, the single-slope analog-to-digital converter (ADC) is widely used in the readout circuits of CMOS image sensors (CIS), but its main drawback is the high demand it places on the system clock frequency: the more pixels and the higher the ADC resolution the image sensor needs, the higher the required system clock frequency. To overcome this problem in high dynamic range CIS systems, this paper presents a 12-bit 500-KS/s cyclic ADC in which the system clock frequency is 5 MHz. Therefore, in contrast to the system clock frequency of 2^N × fS required by a single-slope ADC, where fS and N are the sampling frequency and resolution, respectively, a higher ADC resolution does not require a higher system clock frequency. The circuit layout was realized in a 0.18 μm CMOS process and occupies an area of 8 μm × 374 μm. Post-layout simulation results show that the Signal-to-Noise-and-Distortion Ratio (SNDR) and the Effective Number of Bits (ENOB) reach 63.7 dB and 10.3 bits, respectively.
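    The reported ENOB follows from the simulated SNDR via the standard relation ENOB = (SNDR − 1.76 dB) / 6.02; a one-line check of the figures quoted above:

```python
# Quick check of the quoted figures using ENOB = (SNDR - 1.76 dB) / 6.02.
sndr_db = 63.7
enob = (sndr_db - 1.76) / 6.02
print(f"SNDR = {sndr_db} dB  ->  ENOB = {enob:.1f} bits")   # ~10.3 bits
```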

  12. SpecBit, DecayBit and PrecisionBit: GAMBIT modules for computing mass spectra, particle decay rates and precision observables

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin

    2018-01-01

    We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.

  13. Entanglement and Quantum Error Correction with Superconducting Qubits

    Science.gov (United States)

    Reed, Matthew

    2015-03-01

    Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost, however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These "transmon" qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it, before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid-state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.

  14. Injecting Errors for Testing Built-In Test Software

    Science.gov (United States)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and a data mask specific to the device. This operation yields values not expected by the BIT routine. The algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers
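    A minimal sketch of the first algorithm's AND-mask idea follows; the device register, mask value and pass/fail criterion are hypothetical, chosen only to show how the intercepted read yields a value the BIT routine should flag.

```python
# Hedged sketch of the first algorithm described above: simulate a fault by ANDing
# intercepted device data with a device-specific mask so the built-in test (BIT)
# routine sees an unexpected value.  Register names, masks and the pass criterion
# are hypothetical.
INJECT_ERRORS = True
DEVICE_ERROR_MASKS = {"status_reg": 0xF0FF}     # hypothetical mask clearing bits 8-11

def read_device(name: str) -> int:
    raw = {"status_reg": 0x1A5A}[name]          # stand-in for a real hardware read
    if INJECT_ERRORS and name in DEVICE_ERROR_MASKS:
        raw &= DEVICE_ERROR_MASKS[name]         # small, permanent instrumentation point
    return raw

def bit_check_status() -> bool:
    value = read_device("status_reg")
    return (value & 0x0F00) == 0x0A00           # hypothetical pass/fail criterion

if __name__ == "__main__":
    print("BIT status check passed:", bit_check_status())   # False while injecting
```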

  15. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the medication error problem, types of medication errors, common causes of medication errors, monitoring of medication errors, consequences of medication errors, prevention of medication errors and the management of medication errors are explained clearly, with tables that are easy to understand.

  16. Relating drilling parameters at the bit-rock interface: theoretical and field studies

    Energy Technology Data Exchange (ETDEWEB)

    Sinkala, T. (Luleaa University of Technology, Luleaa (Sweden). Division of Mining Equipment Engineering)

    1991-01-01

    An explicit expression relating drilling parameters at the bit-rock contact is derived. The expression estimates the minimum torque, related to bit-rock contact only, required to maintain constant bit rotation. Theoretical results from the developed bit-rock contact relation agree very satisfactorily with those obtained from field tests and other previous experience. 15 refs., 8 figs., 2 tabs.

  17. Test plan for core sampling drill bit temperature monitor

    International Nuclear Information System (INIS)

    Francis, P.M.

    1994-01-01

    At WHC, one of the functions of the Tank Waste Remediation System division is sampling waste tanks to characterize their contents. The push-mode core sampling truck is currently used to take samples of liquid and sludge. Sampling of tanks containing hard salt cake is to be performed with the rotary-mode core sampling system, consisting of the core sample truck, a mobile exhauster unit, and ancillary subsystems. When drilling through the salt cake material, friction and heat can be generated in the drill bit. Based upon tank safety reviews, it has been determined that the drill bit temperature must not exceed 180 C, due to the potential reactivity of tank contents at this temperature. Consequently, a drill bit temperature limit of 150 C was established for operation of the core sample truck to provide an adequate margin of safety; unpredictable factors, such as localized heating, make such a large buffer necessary. The most desirable safeguard against exceeding this threshold is bit temperature monitoring. This document describes the recommended plan for testing the prototype of a drill bit temperature monitor developed for core sampling by Sandia National Labs. The device will be tested at their facilities. This test plan documents the tests that Westinghouse Hanford Company considers necessary for effective testing of the system

  18. BitCube: A Bottom-Up Cubing Engineering

    Science.gov (United States)

    Ferro, Alfredo; Giugno, Rosalba; Puglisi, Piera Laura; Pulvirenti, Alfredo

    Enhancing on-line analytical processing through efficient cube computation plays a key role in data warehouse management. Hashing, grouping and mining techniques are commonly used to improve cube pre-computation. BitCube, a fast cubing method which uses bitmaps as inverted indexes for grouping, is presented. It horizontally partitions data according to the values of one dimension and, for each resulting fragment, performs grouping following bottom-up criteria. BitCube also allows partial materialization based on iceberg conditions to treat large datasets for which full cube pre-computation is too expensive. The space requirement of the bitmaps is optimized by applying an adaptation of the WAH compression technique. Experimental analysis, on both synthetic and real datasets, shows that BitCube outperforms previous algorithms for full cube computation and gives comparable results for iceberg cubing.

  19. Security bound of cheat sensitive quantum bit commitment.

    Science.gov (United States)

    He, Guang Ping

    2015-03-23

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  20. Security bound of cheat sensitive quantum bit commitment

    Science.gov (United States)

    He, Guang Ping

    2015-03-01

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  1. Low Bit Rate Motion Video Coder/Decoder For Teleconferencing

    Science.gov (United States)

    Koga, T.; Niwa, K.; Iijima, Y.; Iinuma, K.

    1987-07-01

    This paper describes motion video compression transmission for teleconferencing at a subprimary rate, i.e., at 384 kbits/s, including audio signal through the integrated services digital network (ISDN) HO channel. A subprimary rate video coder/decoder (codec), NETEC-XV, is available commercially that can operate at any bit rate (in multiples of 64 kbits/s) from 384 to 2048 kbits/s. In this paper, new algorithms are described that have been very useful in lowering the bit rate to 384 kbits/s. These algorithms are (1) separation of moving and still parts, followed by encoding of the two parts using different sets of parameters, and (2) scene change detection and its application to encoding parameter control. According to a brief subjective evaluation, the codec provides good picture quality even at a transmission bit rate of 384 kbits/s.

  2. The Cryptographic Security of the Sum of Bits

    Science.gov (United States)

    1984-06-01

    Shamir, "On the Cryptographic Security of Single RSA bits," 15th STOC, lg83, pp.421-430. 3. L. Blum , M. Blum , and M. Shub , "A Simple Secure Pseudo...our paper we consider the sum of bits predicate and show that it is secure in the same sense. Blum , Blum , and Shub3 have designed a pseudo-random...Combining these two facts we obtain the following: Fact 3: E(x)=E(x2-1 mod N) if and only if E(r)+ 1 = E(r+ k). If x and y are non-negative integers, let C

  3. Foldable Instrumented Bits for Ultrasonic/Sonic Penetrators

    Science.gov (United States)

    Bar-Cohen, Yoseph; Badescu, Mircea; Iskenderian, Theodore; Sherrit, Stewart; Bao, Xiaoqi; Linderman, Randel

    2010-01-01

    Long tool bits are undergoing development that can be stowed compactly until used as rock- or ground-penetrating probes actuated by ultrasonic/sonic mechanisms. These bits are designed to be folded or rolled into compact form for transport to exploration sites, where they are to be connected to their ultrasonic/ sonic actuation mechanisms and unfolded or unrolled to their full lengths for penetrating ground or rock to relatively large depths. These bits can be designed to acquire rock or soil samples and/or to be equipped with sensors for measuring properties of rock or soil in situ. These bits can also be designed to be withdrawn from the ground, restowed, and transported for reuse at different exploration sites. Apparatuses based on the concept of a probe actuated by an ultrasonic/sonic mechanism have been described in numerous prior NASA Tech Briefs articles, the most recent and relevant being "Ultrasonic/ Sonic Impacting Penetrators" (NPO-41666) NASA Tech Briefs, Vol. 32, No. 4 (April 2008), page 58. All of those apparatuses are variations on the basic theme of the earliest ones, denoted ultrasonic/sonic drill corers (USDCs). To recapitulate: An apparatus of this type includes a lightweight, low-power, piezoelectrically driven actuator in which ultrasonic and sonic vibrations are generated and coupled to a tool bit. The combination of ultrasonic and sonic vibrations gives rise to a hammering action (and a resulting chiseling action at the tip of the tool bit) that is more effective for drilling than is the microhammering action of ultrasonic vibrations alone. The hammering and chiseling actions are so effective that the size of the axial force needed to make the tool bit advance into soil, rock, or another material of interest is much smaller than in ordinary twist drilling, ordinary hammering, or ordinary steady pushing. Examples of properties that could be measured by use of an instrumented tool bit include electrical conductivity, permittivity, magnetic

  4. Transistor device for multi-bit non-volatile storage

    International Nuclear Information System (INIS)

    Tan, S.G.; Jalil, M.B.A.; Kumar, Vimal; Liew, Thomas; Teo, K.L.; Chong, T.C.

    2006-01-01

    We propose a transistor model that incorporates multiple storage elements within a single transistor device. This device is thus smaller in size compared to the magnetoresistive random access memory (MRAM) with the same number of storage bits. The device model can function in both the current as well as voltage detection mode. Simulations were carried out at higher temperature, taking into consideration the spread of electron density above the Fermi level. We found that linear detection of conductance variation with the stored binary value can be achieved for a 3-bit storage device up to a temperature of 350 K

  5. How to deal with malleability of BitCoin transactions

    OpenAIRE

    Andrychowicz, Marcin; Dziembowski, Stefan; Malinowski, Daniel; Mazurek, Łukasz

    2013-01-01

    BitCoin transactions are malleable in a sense that given a transaction an adversary can easily construct an equivalent transaction which has a different hash. This can pose a serious problem in some BitCoin distributed contracts in which changing a transaction's hash may result in the protocol disruption and a financial loss. The problem mostly concerns protocols, which use a "refund" transaction to withdraw a deposit in a case of the protocol interruption. In this short note, we show a gener...

  6. How to Convert a Flavor of Quantum Bit Commitment

    DEFF Research Database (Denmark)

    Crepeau, Claude; Legare, Frédéric; Salvail, Louis

    2001-01-01

    In this paper we show how to convert a statistically binding but computationally concealing quantum bit commitment scheme into a computationally binding but statistically concealing QBC scheme. For a security parameter n, the construction of the statistically concealing scheme requires O(n²) executions of the statistically binding scheme. As a consequence, statistically concealing but computationally binding quantum bit commitments can be based upon any family of quantum one-way functions. Such a construction is not known to exist in the classical world....

  7. Development of a jet-assisted polycrystalline diamond drill bit

    Energy Technology Data Exchange (ETDEWEB)

    Pixton, D.S.; Hall, D.R.; Summers, D.A.; Gertsch, R.E.

    1997-12-31

    A preliminary investigation has been conducted to evaluate the technical feasibility and potential economic benefits of a new type of drill bit. This bit transmits both rotary and percussive drilling forces to the rock face, and augments this cutting action with high-pressure mud jets. Both the percussive drilling forces and the mud jets are generated down-hole by a mud-actuated hammer. Initial laboratory studies show that increases in the rate of penetration on the order of a factor of two over unaugmented rotary and/or percussive drilling rates are possible with jet assistance.

  8. Hanford coring bit temperature monitor development testing results report

    International Nuclear Information System (INIS)

    Rey, D.

    1995-05-01

    Instrumentation which directly monitors the temperature of a coring bit used to retrieve core samples of high level nuclear waste stored in tanks at Hanford was developed at Sandia National Laboratories. Monitoring the temperature of the coring bit is desired to enhance the safety of the coring operations. A unique application of mature technologies was used to accomplish the measurement. This report documents the results of development testing performed at Sandia to assure the instrumentation will withstand the severe environments present in the waste tanks

  9. Cloning the entanglement of a pair of quantum bits

    International Nuclear Information System (INIS)

    Lamoureux, Louis-Philippe; Navez, Patrick; Cerf, Nicolas J.; Fiurasek, Jaromir

    2004-01-01

    It is shown that any quantum operation that perfectly clones the entanglement of all maximally entangled qubit pairs cannot preserve separability. This 'entanglement no-cloning' principle naturally suggests that some approximate cloning of entanglement is nevertheless allowed by quantum mechanics. We investigate a separability-preserving optimal cloning machine that duplicates all maximally entangled states of two qubits, resulting in 0.285 bits of entanglement per clone, while a local cloning machine only yields 0.060 bits of entanglement per clone

  10. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores energy efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also, the lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732
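
    As a generic illustration of the kind of BER-versus-Eb/N0 comparison described above, the following Python sketch estimates the bit error rate of uncoded BPSK over AWGN and over Nakagami-m fading by Monte-Carlo simulation; it does not implement the paper's LDPC-coded C-MIMO scheme, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk(ebn0_db, n_bits=200_000, nakagami_m=None):
    """Monte-Carlo bit error rate of uncoded BPSK; AWGN only when nakagami_m is None,
    otherwise flat Nakagami-m fading with unit average power."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                      # bit 0 -> +1, bit 1 -> -1
    noise = rng.normal(scale=np.sqrt(1.0 / (2.0 * ebn0)), size=n_bits)
    if nakagami_m is None:
        gain = 1.0
    else:
        # Nakagami-m amplitude = sqrt of a Gamma(m, 1/m) distributed power (Omega = 1)
        gain = np.sqrt(rng.gamma(shape=nakagami_m, scale=1.0 / nakagami_m, size=n_bits))
    received = gain * symbols + noise
    return np.mean((received < 0) != (bits == 1))

for ebn0_db in (0, 5, 10):
    print(ebn0_db, ber_bpsk(ebn0_db), ber_bpsk(ebn0_db, nakagami_m=1.0))  # m = 1 is Rayleigh
```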

  11. Joint adaptive modulation and diversity combining with feedback error compensation

    KAUST Repository

    Choi, Seyeong

    2009-11-01

    This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.

  12. Fixed-point error analysis of Winograd Fourier transform algorithms

    Science.gov (United States)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  13. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from $k(h\nu) = -\ln[T(h\nu)]/(\rho L)$, where $T(h\nu)$ is the transmission for photon energy $h\nu$, $\rho$ is the sample density, and $L$ is the path length through the sample. The density and path length are measured together by Rutherford backscatter. $\Delta k = \frac{\partial k}{\partial T}\,\Delta T + \frac{\partial k}{\partial(\rho L)}\,\Delta(\rho L)$. We can re-write this in terms of fractional error as $\Delta k/k = \Delta\ln(T)/\ln(T) + \Delta(\rho L)/(\rho L)$. Transmission itself is calculated from $T = (U-E)/(V-E) = B/B_0$, where $B$ is the transmitted backlighter (BL) signal and $B_0$ is the unattenuated backlighter signal. Then $\Delta T/T = \Delta\ln(T) = \Delta B/B + \Delta B_0/B_0$, and consequently $\Delta k/k = \left(\Delta B/B + \Delta B_0/B_0\right)/\ln(T) + \Delta(\rho L)/(\rho L)$. Transmission is measured in the range of 0.2
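
    A minimal numerical sketch of this error budget, assuming the fractional-error expression as reconstructed above; all input values are made-up placeholders rather than measured data.

```python
import numpy as np

def opacity_with_error(B, dB, B0, dB0, rhoL, drhoL):
    """Opacity k = -ln(T)/(rho*L) with T = B/B0, and its fractional error budget
    Delta_k/k = (Delta_B/B + Delta_B0/B0)/|ln T| + Delta(rho*L)/(rho*L)."""
    T = B / B0
    k = -np.log(T) / rhoL
    dk_over_k = (dB / B + dB0 / B0) / abs(np.log(T)) + drhoL / rhoL
    return k, dk_over_k

# Placeholder numbers, chosen only to exercise the formula.
k, rel_err = opacity_with_error(B=0.3, dB=0.01, B0=1.0, dB0=0.02, rhoL=2.0e-3, drhoL=1.0e-4)
print(f"k = {k:.1f}, fractional error = {100.0 * rel_err:.1f}%")
```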

  14. Improved differential pulse code modulation-block truncation coding method adopting two-level mean squared error near-optimal quantizers

    Science.gov (United States)

    Choi, Kang-Sun; Ko, Sung-Jea

    2011-04-01

    The conventional hybrid method of block truncation coding (BTC) and differential pulse code modulation (DPCM), namely the DPCM-BTC method, offers better rate-distortion performance than the standard BTC. However, the quantization error in the hybrid method is easily increased for large block sizes due to the use of two representative levels in BTC. In this paper, we first derive a bivariate quadratic function representing the mean squared error (MSE) between the original block and the block reconstructed in the DPCM framework. The near-optimal representatives obtained by quantizing the minimum of the derived function can prevent the rapid increase of the quantization error. Experimental results show that the proposed method improves peak signal-to-noise ratio performance by up to 2 dB at 1.5 bit/pixel (bpp) and by 1.2 dB even at a low bit rate of 1.1 bpp as compared with the DPCM-BTC method without optimization. Even with the additional computation for the quantizer optimization, the computational complexity of the proposed method is still much lower than those of transform-based compression techniques.
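
    For context, the sketch below implements classic two-level block truncation coding, which preserves the block mean and variance; it is the plain BTC baseline rather than the MSE-near-optimal quantizer or the DPCM framework proposed in the paper.

```python
import numpy as np

def btc_block(block):
    """Classic two-level BTC of one image block: a bitmap plus two levels chosen
    so that the block mean and variance are preserved."""
    x = block.astype(float).ravel()
    n, mean, std = x.size, x.mean(), x.std()
    bitmap = x >= mean
    q = int(bitmap.sum())
    if q in (0, n):                      # flat block: one level suffices
        return bitmap.reshape(block.shape), mean, mean
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return bitmap.reshape(block.shape), low, high

def btc_reconstruct(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.array([[12, 200, 30, 40], [25, 210, 220, 35],
                  [15, 230, 28, 45], [20, 205, 33, 50]])
bitmap, low, high = btc_block(block)
print(np.round(btc_reconstruct(bitmap, low, high)))
```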

  15. Efficient error estimation in quantum key distribution

    Science.gov (United States)

    Li, Mo; Treeviriyanupab, Patcharapong; Zhang, Chun-Mei; Yin, Zhen-Qiang; Chen, Wei; Han, Zheng-Fu

    2015-01-01

    In a quantum key distribution (QKD) system, the error rate needs to be estimated for determining the joint probability distribution between legitimate parties, and for improving the performance of key reconciliation. We propose an efficient error estimation scheme for QKD, which is called the parity comparison method (PCM). In the proposed method, the parities of groups of sifted key bits are analysed to estimate the quantum bit error rate instead of using the traditional key sampling. From the simulation results, the proposed method evidently improves the accuracy and decreases the revealed information in most realistic application situations. Project supported by the National Basic Research Program of China (Grant Nos. 2011CBA00200 and 2011CB921200) and the National Natural Science Foundation of China (Grant Nos. 61101137, 61201239, and 61205118).
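
    The Python sketch below illustrates the general idea of estimating the quantum bit error rate from block parities instead of revealed sample bits; it assumes i.i.d. errors, is not the exact PCM protocol, and the block length and key size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_qber_from_parities(alice, bob, block_len=8):
    """Estimate the bit error rate between two sifted keys by comparing block parities.
    For i.i.d. errors, P(parity mismatch) = (1 - (1 - 2e)^L) / 2, which is inverted for e."""
    n_blocks = alice.size // block_len
    a = alice[: n_blocks * block_len].reshape(n_blocks, block_len)
    b = bob[: n_blocks * block_len].reshape(n_blocks, block_len)
    mismatch = np.mean(a.sum(axis=1) % 2 != b.sum(axis=1) % 2)
    mismatch = min(mismatch, 0.4999)              # guard against pathological keys
    return 0.5 * (1.0 - (1.0 - 2.0 * mismatch) ** (1.0 / block_len))

true_qber = 0.03
alice = rng.integers(0, 2, 100_000)
errors = rng.random(alice.size) < true_qber       # flip each bit with probability true_qber
bob = alice ^ errors
print(estimate_qber_from_parities(alice, bob))    # should be close to 0.03
```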

  16. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    Science.gov (United States)

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge. PMID:24678274

  17. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    Directory of Open Access Journals (Sweden)

    Sara Teodoro

    2014-01-01

    Full Text Available Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge.

  18. Hybrid Data Hiding Scheme Using Right-Most Digit Replacement and Adaptive Least Significant Bit for Digital Images

    Directory of Open Access Journals (Sweden)

    Mehdi Hussain

    2016-05-01

    Full Text Available Image steganographic methods consider three main key issues: high embedding capacity, good visual symmetry/quality, and security. In this paper, a hybrid data hiding method combining right-most digit replacement (RMDR) with an adaptive least significant bit (ALSB) is proposed to provide not only high embedding capacity but also maintain good visual symmetry. The cover-image is divided into lower texture (symmetry patterns) and higher texture (asymmetry patterns) areas, and these textures determine the selection of the RMDR and ALSB methods, respectively, according to pixel symmetry. This paper has three major contributions. First, the proposed hybrid method enhances the embedding capacity due to efficient ALSB utilization in the higher texture areas of cover images. Second, the proposed hybrid method maintains high visual quality because RMDR has the closest selection process to generate symmetry between stego and cover pixels. Finally, the proposed hybrid method is secure against statistical regular or singular (RS) steganalysis and pixel difference histogram steganalysis because RMDR is capable of evading the risk of RS detection attacks due to the replacement of pixel digits instead of bits. Extensive experimental tests (over 1500+ cover images) are conducted with recent least significant bit (LSB)-based hybrid methods, and it is demonstrated that the proposed hybrid method has a high embedding capacity (800,019 bits) while maintaining good visual symmetry (39.00 dB peak signal-to-noise ratio (PSNR)).
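
    For orientation, the sketch below shows plain least-significant-bit replacement in a grayscale cover image; this is the textbook baseline that the hybrid RMDR/ALSB method improves upon, not the proposed scheme itself.

```python
import numpy as np

def lsb_embed(cover, payload_bits):
    """Replace the LSB of the first len(payload_bits) pixels with the payload."""
    flat = cover.ravel().copy()
    if payload_bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the payload back from the least significant bit plane."""
    return stego.ravel()[:n_bits] & 1

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = rng.integers(0, 2, size=1000, dtype=np.uint8)
stego = lsb_embed(cover, bits)
assert np.array_equal(lsb_extract(stego, bits.size), bits)
```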

  19. BETTER FINGERPRINT IMAGE COMPRESSION AT LOWER BIT-RATES: AN APPROACH USING MULTIWAVELETS WITH OPTIMISED PREFILTER COEFFICIENTS

    Directory of Open Access Journals (Sweden)

    N R Rema

    2017-08-01

    Full Text Available In this paper, a multiwavelet-based fingerprint compression technique using the set partitioning in hierarchical trees (SPIHT) algorithm with optimised prefilter coefficients is proposed. While wavelet-based progressive compression techniques give a blurred image at lower bit rates due to lack of high-frequency information, multiwavelets can be used efficiently to represent high-frequency information. The SA4 (Symmetric Antisymmetric) multiwavelet, when combined with SPIHT, reduces the number of nodes during initialization to 1/4th compared to SPIHT with wavelets. This reduction in nodes leads to an improvement in PSNR at lower bit rates. The PSNR can be further improved by optimizing the prefilter coefficients. In this work a genetic algorithm (GA) is used for optimizing the prefilter coefficients. Using the proposed technique, there is a considerable improvement in PSNR at lower bit rates, compared to existing techniques in the literature. An overall average improvement of 4.23 dB and 2.52 dB for bit rates between 0.01 and 1 has been achieved for the images in the databases FVC 2000 DB1 and FVC 2002 DB3, respectively. The quality of the reconstructed image is better even at higher compression ratios like 80:1 and 100:1. The level of decomposition required for a multiwavelet is lower compared to a wavelet.

  20. Architecture of 32 bit CISC (Complex Instruction Set Computer) microprocessors

    International Nuclear Information System (INIS)

    Jove, T.M.; Ayguade, E.; Valero, M.

    1988-01-01

    In this paper we describe the main architectural features of the best-known 32-bit CISC microprocessors: the i80386, the MC68000 family, the NS32000 series and the Z80000. We focus on high-level language support, operating system design facilities, memory management, techniques to speed up overall performance, and program debugging facilities. (Author)

  1. 2015 Florida Panhandle RCD30 4-Band 8 Bit Imagery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These files contain imagery data collected with an RCD30 camera as 8-bit RGBN TIFF images. Imagery was required 1000m seaward of the land/water interface or to laser...

  2. 2015 Southwest Florida RCD30 4-Band 8 Bit Imagery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These files contain imagery data collected with an RCD30 camera as 8-bit RGBN TIFF images. Imagery was required 1000m seaward of the land/water interface or to laser...

  3. Some observations on the Bit-Search Generator

    OpenAIRE

    Mitchell, Chris J.

    2005-01-01

    In this short note an alternative definition of the Bit-Search Generator (BSG) is provided. This leads to a discussion of both the security of the BSG and ways in which it might be modified to either improve its rate or increase its security.

  4. Different Mass Processing Services in a Bit Repository

    DEFF Research Database (Denmark)

    Jurik, Bolette; Zierau, Eld

    2011-01-01

    This paper investigates how a general bit repository mass processing service using different programming models and platforms can be specified. Such a service is needed in large data archives, especially libraries, where different ways of doing mass processing are needed for different digital...

  5. Steganography forensics method for detecting least significant bit replacement attack

    Science.gov (United States)

    Wang, Xiaofeng; Wei, Chengcheng; Han, Xiao

    2015-01-01

    We present an image forensics method to detect the least significant bit replacement steganography attack. The proposed method provides fine-grained forensics features by using a hierarchical structure that combines pixel correlation and bit-plane correlation. This is achieved via bit-plane decomposition and difference matrices between the least significant bit-plane and each one of the others. The generated forensics features provide the susceptibility (changeability) that will be drastically altered when the cover image is embedded with data to form a stego image. We developed a statistical model based on the forensics features and used a least squares support vector machine as a classifier to distinguish stego images from cover images. Experimental results show that the proposed method provides the following advantages. (1) The detection rate is noticeably higher than that of some existing methods. (2) It has the expected stability. (3) It is robust to content-preserving manipulations, such as JPEG compression, adding noise, filtering, etc. (4) The proposed method provides satisfactory generalization capability.
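
    A small Python sketch of the bit-plane decomposition and difference matrices that such forensics features build on; it is illustrative only and does not reproduce the paper's full feature set or classifier.

```python
import numpy as np

def bit_planes(image):
    """Decompose an 8-bit grayscale image into its 8 bit planes (index 0 = LSB plane)."""
    image = np.asarray(image, dtype=np.uint8)
    return [((image >> k) & 1).astype(np.uint8) for k in range(8)]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
planes = bit_planes(img)
# Difference matrices between the LSB plane and each of the other planes,
# in the spirit of the hierarchical features described above.
diffs = [np.abs(planes[0].astype(np.int16) - p.astype(np.int16)) for p in planes[1:]]
print(planes[0], diffs[0], sep="\n")
```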

  6. An efficient parallel pseudorandom bit generator based on an ...

    Indian Academy of Sciences (India)

    complicated floating-point and multiplication computations are transformed into simple bit-shift operations ... Based on the asymmetric CML combined with some simple digital algebraic operations, a new PRBG was ... are identical, they are perturbed by the transformation x0(i) = x0(1) + 10000 × i, i = 2, 3, ..., N.

  7. "Material interactions": from atoms & bits to entangled practices

    DEFF Research Database (Denmark)

    Vallgårda, Anna

    and intellectually stimulating panel moderated by Prof. Mikael Wiberg consisting of a number of scholars with a well-developed view on digital materialities to fuel a discussion on material interactions - from atoms & bits to entangled practices. These scholars include: Prof. Hiroshi Ishii, Prof. Paul Dourish...

  8. Quantum measurement and entanglement of spin quantum bits in diamond

    NARCIS (Netherlands)

    Pfaff, W.

    2013-01-01

    This thesis presents a set of experiments that explore the possible realisation of a macroscopic quantum network based on solid-state quantum bits. Such a quantum network would allow for studying quantum mechanics on large scales (meters, or even kilometers), and can open new possibilities for

  9. Designing the optimal bit: balancing energetic cost, speed and reliability.

    Science.gov (United States)

    Deshpande, Abhishek; Gopalkrishnan, Manoj; Ouldridge, Thomas E; Jones, Nick S

    2017-08-01

    We consider the challenge of operating a reliable bit that can be rapidly erased. We find that both erasing and reliability times are non-monotonic in the underlying friction, leading to a trade-off between erasing speed and bit reliability. Fast erasure is possible at the expense of low reliability at moderate friction, and high reliability comes at the expense of slow erasure in the underdamped and overdamped limits. Within a given class of bit parameters and control strategies, we define 'optimal' designs of bits that meet the desired reliability and erasing time requirements with the lowest operational work cost. We find that optimal designs always saturate the bound on the erasing time requirement, but can exceed the required reliability time if critically damped. The non-trivial geometry of the reliability and erasing time scales allows us to exclude large regions of parameter space as suboptimal. We find that optimal designs are either critically damped or close to critical damping under the erasing procedure.

  10. Radiation hardened COTS-based 32-bit microprocessor

    International Nuclear Information System (INIS)

    Haddad, N.; Brown, R.; Cronauer, T.; Phan, H.

    1999-01-01

    A high performance radiation hardened 32-bit RISC microprocessor based upon a commercial single-chip CPU has been developed. This paper presents the features of the radiation-hardened microprocessor, the methods used to radiation harden this device, and the results of radiation testing, and shows that the RAD6000 is well-suited for the vast majority of space applications. (authors)

  11. Analysis of bit-rock interaction during stick-slip vibrations using PDC cutting force model

    Energy Technology Data Exchange (ETDEWEB)

    Patil, P.A.; Teodoriu, C. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE

    2013-08-01

    Drillstring vibration is one of the factors limiting drilling performance and also causes premature failure of drillstring components. The polycrystalline diamond compact (PDC) bit enhances overall drilling performance, giving the best rates of penetration with less cost per foot, but PDC bits are more susceptible to the stick-slip phenomenon, which results in large fluctuations of bit rotational speed. Based on a torsional drillstring model developed in Matlab/Simulink for analyzing the parametric influence of drilling parameters and drillstring properties on stick-slip vibrations, the relations between weight on bit, torque on bit, bit speed, rate of penetration and friction coefficient have been analyzed. While drilling with PDC bits, the bit-rock interaction is characterized by cutting forces and frictional forces. The torque on bit and the weight on bit each have a cutting component and a frictional component when resolved in the horizontal and vertical directions. The paper considers a bit undergoing stick-slip vibrations while analyzing the bit-rock interaction of the PDC bit. A Matlab/Simulink bit-rock interaction model has been developed which gives the average cutting torque, T_c, and friction torque, T_f, on the cutters, as well as the corresponding average weight transferred by the cutting face, W_c, and by the wear flat face, W_f, of the cutters due to friction.

  12. Golden Ratio

    Indian Academy of Sciences (India)

    of mathematical biology. Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in pro- ... his practice of oral and maxillofacial surgery, and he developed a mask using the concept of the golden ratio. The mask is called the Marquardt beauty mask (Figure 1) [1].

  13. Measurement of definite integral product of two or more signals using two-bit ADCs

    Directory of Open Access Journals (Sweden)

    Ličina Boris

    2012-01-01

    Full Text Available This paper presents the theory of operation of the two-bit stochastic converter. This converter could be used for the precise measurement of the effective (root-mean-square) value of voltage, current, electric power or energy and thus could be applicable to grid measurements. The key contribution of this paper is the theoretical derivation of error limits when measuring signals using the stochastic method. The standard deviation of the measured value over a specified measuring period is defined as the error. When finding expressions for the measured quantity and its error, time is treated as an independent uniform random variable; therefore, probability theory and the statistical theory of samples can be applied. This condition is necessary because the presented problem is highly nonlinear and stochastic and thus cannot be solved by the linear theory of discrete signals and systems, or by the theory of random processes. The presented solution is generalized in order to include the measurements of the definite integral of the product of a finite number of signals.
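
    As a software illustration of the underlying stochastic-measurement idea (time treated as a uniform random variable), the Python sketch below estimates the average of the product of two signals by random time sampling; it ignores the two-bit quantisation performed by the actual converter, and the signal parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_mean_product(signals, t_end, n_samples=200_000):
    """Estimate (1/T) * integral over [0, T] of the product of several signals
    by evaluating them at uniformly random times; returns the estimate and its
    standard error."""
    t = rng.uniform(0.0, t_end, n_samples)
    prod = np.ones_like(t)
    for s in signals:
        prod *= s(t)
    return prod.mean(), prod.std(ddof=1) / np.sqrt(n_samples)

v = lambda t: 325.0 * np.sin(2 * np.pi * 50 * t)          # illustrative 50 Hz voltage
i = lambda t: 10.0 * np.sin(2 * np.pi * 50 * t - 0.5)     # illustrative lagging current
power, err = stochastic_mean_product([v, i], t_end=0.2)   # ten mains periods
print(power, err)
```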

  14. Modeling of alpha-particle-induced soft error rate in DRAM

    International Nuclear Information System (INIS)

    Shin, H.

    1999-01-01

    Alpha-particle-induced soft error in 256M DRAM was numerically investigated. A unified model for alpha-particle-induced charge collection and a soft-error-rate simulator (SERS) were developed. The author investigated the soft error rate of 256M DRAM and identified the bit-bar mode as one of the dominant modes for soft error. In addition, for the first time, it was found that trench-oxide depth has a significant influence on the soft error rate, and that it should be determined by the tradeoff between soft error rate and cell-to-cell isolation characteristics

  15. Realization of three-qubit quantum error correction with superconducting circuits.

    Science.gov (United States)

    Reed, M D; DiCarlo, L; Nigg, S E; Sun, L; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2012-02-01

    Quantum computers could be used to solve certain problems exponentially faster than classical computers, but are challenging to build because of their increased susceptibility to errors. However, it is possible to detect and correct errors without destroying coherence, by using quantum error correcting codes. The simplest of these are three-quantum-bit (three-qubit) codes, which map a one-qubit state to an entangled three-qubit state; they can correct any single phase-flip or bit-flip error on one of the three qubits, depending on the code used. Here we demonstrate such phase- and bit-flip error correcting codes in a superconducting circuit. We encode a quantum state, induce errors on the qubits and decode the error syndrome--a quantum state indicating which error has occurred--by reversing the encoding process. This syndrome is then used as the input to a three-qubit gate that corrects the primary qubit if it was flipped. As the code can recover from a single error on any qubit, the fidelity of this process should decrease only quadratically with error probability. We implement the correcting three-qubit gate (known as a conditional-conditional NOT, or Toffoli, gate) in 63 nanoseconds, using an interaction with the third excited state of a single qubit. We find 85 ± 1 per cent fidelity to the expected classical action of this gate, and 78 ± 1 per cent fidelity to the ideal quantum process matrix. Using this gate, we perform a single pass of both quantum bit- and phase-flip error correction and demonstrate the predicted first-order insensitivity to errors. Concatenation of these two codes in a nine-qubit device would correct arbitrary single-qubit errors. In combination with recent advances in superconducting qubit coherence times, this could lead to scalable quantum technology.

  16. Head and bit patterned media optimization at areal densities of 2.5 Tbit/in2 and beyond

    International Nuclear Information System (INIS)

    Bashir, M.A.; Schrefl, T.; Dean, J.; Goncharov, A.; Hrkac, G.; Allwood, D.A.; Suess, D.

    2012-01-01

    Global optimization of the writing head is performed using micromagnetics and surrogate optimization. The shape of the pole tip is optimized for bit patterned, exchange spring recording media. The media characteristics define the effective write field and the threshold values for the head field that acts at islands in the adjacent track. Once the required head field characteristics are defined, the pole tip geometry is optimized in order to achieve a high gradient of the effective write field while keeping the write field at the adjacent track below a given value. We computed the write error rate and the adjacent track erasure for different maximum anisotropy in the multilayer, graded media. The results show a linear trade-off between the error rate and the number of passes before erasure. For optimal head-media combinations we found a bit error rate of 10⁻⁶ with 10⁸ pass lines before erasure at 2.5 Tbit/in². - Research Highlights: → Global optimization of the writing head is performed using micromagnetics and surrogate optimization. → A method is provided to optimize the pole tip shape while maintaining the head field that acts in the adjacent tracks. → Patterned media structures providing an areal density of 2.5 Tbit/in² are discussed as a case study. → Media reliability is studied, while taking into account the magnetostatic field interactions from neighbouring islands and adjacent track erasure under the influence of the head field.

  17. Multihop Relaying over IM/DD FSO Systems with Pointing Errors

    KAUST Repository

    Zedini, Emna

    2015-10-19

    In this paper, the end-to-end performance of a multihop free-space optical system with amplify-and-forward channel-state-information-assisted or fixed-gain relays using intensity modulation with direct detection technique over Gamma-Gamma turbulence fading with pointing error impairments is studied. More specifically, novel closed-form results for the probability density function and the cumulative distribution function of the end-to-end signal-to-noise ratio (SNR) are derived in terms of the Fox's H function. Based on these formulas, closed-form bounds for the outage probability, the average bit-error rate (BER) of on-off keying modulation scheme, the moments, and the ergodic capacity are presented. Furthermore, using the moments-based approach, tight asymptotic approximations at high and low average SNR regimes are derived for the ergodic capacity in terms of simple elementary functions. The obtained results indicate that the overall system performance degrades with an increase of the number of hops. The effects of the atmospheric turbulence conditions and the pointing error are also quantified. All the analytical results are verified via computer-based Monte Carlo simulations.

  18. Error Control Techniques for Satellite and Space Communications

    Science.gov (United States)

    Costello, Daniel J., Jr.

    1996-01-01

    In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate 1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10⁻⁵ at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate 1/2 code with binary phase-shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the Turbo coding scheme comes within 0.7 dB of capacity at a BER of 10⁻⁵.

  19. Effects of error feedback on a nonlinear bistable system with stochastic resonance

    International Nuclear Information System (INIS)

    Li Jian-Long; Zhou Hui

    2012-01-01

    In this paper, we discuss the effects of error feedback on the output of a nonlinear bistable system with stochastic resonance. The bit error rate is employed to quantify the performance of the system. The theoretical analysis and the numerical simulation are presented. By investigating the performances of the nonlinear systems with different strengths of error feedback, we argue that the presented system may provide guidance for practical nonlinear signal processing

  20. Multi-bit wavelength coding phase-shift-keying optical steganography based on amplified spontaneous emission noise

    Science.gov (United States)

    Wang, Cheng; Wang, Hongxiang; Ji, Yuefeng

    2018-01-01

    In this paper, a multi-bit wavelength coding phase-shift-keying (PSK) optical steganography method is proposed based on amplified spontaneous emission noise and a wavelength selection switch. In this scheme, the assignment codes and the delay length differences provide a large two-dimensional key space. A 2-bit wavelength coding PSK system is simulated to show the efficiency of the proposed method. The simulation results demonstrate that the stealth signal, after being encoded and modulated, is well hidden in both the time and spectral domains, under the public channel and the noise existing in the system. Moreover, even if the principle of this scheme and the existence of the stealth channel are known to an eavesdropper, the probability of recovering the stealth data is less than 0.02 if the key is unknown. Thus it can protect the security of the stealth channel more effectively. Furthermore, the stealth channel results in a 0.48 dB power penalty to the public channel at a 1 × 10⁻⁹ bit error rate, and the public channel has no influence on the reception of the stealth channel.

  1. High-bit rate ultra-compact light routing with mode-selective on-chip nanoantennas.

    Science.gov (United States)

    Guo, Rui; Decker, Manuel; Setzpfandt, Frank; Gai, Xin; Choi, Duk-Yong; Kiselev, Roman; Chipouline, Arkadi; Staude, Isabelle; Pertsch, Thomas; Neshev, Dragomir N; Kivshar, Yuri S

    2017-07-01

    Optical nanoantennas provide a promising pathway toward advanced manipulation of light waves, such as directional scattering, polarization conversion, and fluorescence enhancement. Although these functionalities were mainly studied for nanoantennas in free space or on homogeneous substrates, their integration with optical waveguides offers an important "wired" connection to other functional optical components. Taking advantage of the nanoantenna's versatility and unrivaled compactness, their imprinting onto optical waveguides would enable a marked enhancement of design freedom and integration density for optical on-chip devices. Several examples of this concept have been demonstrated recently. However, the important question of whether nanoantennas can fulfill functionalities for high-bit rate signal transmission without degradation, which is the core purpose of many integrated optical applications, has not yet been experimentally investigated. We introduce and investigate directional, polarization-selective, and mode-selective on-chip nanoantennas integrated with a silicon rib waveguide. We demonstrate that these nanoantennas can separate optical signals with different polarizations by coupling the different polarizations of light vertically to different waveguide modes propagating into opposite directions. As the central result of this work, we show the suitability of this concept for the control of optical signals with ASK (amplitude-shift keying) NRZ (nonreturn to zero) modulation [10 Gigabit/s (Gb/s)] without significant bit error rate impairments. Our results demonstrate that waveguide-integrated nanoantennas have the potential to be used as ultra-compact polarization-demultiplexing on-chip devices for high-bit rate telecommunication applications.

  2. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  3. Optical Switching and Bit Rates of 40 Gbit/s and above

    DEFF Research Database (Denmark)

    Ackaert, A.; Demester, P.; O'Mahony, M.

    2003-01-01

    Optical switching in WDM networks introduces additional aspects to the choice of single channel bit rates compared to WDM transmission systems. The mutual impact of optical switching and bit rates of 40 Gbps and above is discussed....

  4. Small digital recording head has parallel bit channels, minimizes cross talk

    Science.gov (United States)

    Eller, E. E.; Laue, E. G.

    1964-01-01

    A small digital recording head consists of closely spaced parallel wires, imbedded in a ferrite block to concentrate the magnetic flux. Parallel-recorded information bits are converted into serial bits on moving magnetic tape and cross talk is suppressed.

  5. Efficient biased random bit generation for parallel processing

    Energy Technology Data Exchange (ETDEWEB)

    Slone, Dale M. [Univ. of California, Davis, CA (United States)

    1994-09-28

    A lattice gas automaton was implemented on a massively parallel machine (the BBN TC2000) and a vector supercomputer (the CRAY C90). The automaton models Burgers' equation $\rho_t + \rho\rho_x = \nu\rho_{xx}$ in one dimension. The lattice gas evolves by advecting and colliding pseudo-particles on a 1-dimensional, periodic grid. The specific rules for colliding particles are stochastic in nature and require the generation of many billions of random numbers to create the random bits necessary for the lattice gas. The goal of the thesis was to speed up the process of generating the random bits and thereby lessen the computational bottleneck of the automaton.
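
    A trivial, vectorised way to produce the biased random bits that such an automaton consumes is sketched below; the thesis is concerned with doing this efficiently on specific parallel hardware, which this illustration does not address.

```python
import numpy as np

rng = np.random.default_rng(42)

def biased_bits(p_one, n_bits):
    """Independent random bits with P(bit = 1) = p_one, generated by comparing
    uniform random numbers against the desired bias."""
    return (rng.random(n_bits) < p_one).astype(np.uint8)

bits = biased_bits(0.3, 1_000_000)
print(bits.mean())   # close to 0.3
```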

  6. Modular trigger processing The GCT muon and quiet bit system

    CERN Document Server

    Stettler, Matthew; Hansen, Magnus; Iles, Gregory; Jones, John; PH-EP

    2007-01-01

    The CMS Global Calorimeter Trigger system's HCAL Muon and Quiet bit reformatting function is being implemented with a novel processing architecture. This architecture utilizes MicroTCA, a modern modular communications standard based on high-speed serial links, to implement a processing matrix. This matrix is configurable in both logical functionality and data flow, allowing far greater flexibility than current trigger processing systems. In addition, the modular nature of this architecture allows flexibility in scale unmatched by traditional approaches. The Muon and Quiet bit system consists of two major components, a custom MicroTCA backplane and a processing module. These components are based on Xilinx Virtex5 and Mindspeed crosspoint switch devices, bringing together state-of-the-art FPGA-based processing and telecom switching technologies.

  7. Wear Detection of Drill Bit by Image-based Technique

    Science.gov (United States)

    Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul

    2018-03-01

    Image processing for computer vision plays an essential role in the manufacturing industries for tool condition monitoring. This study proposes a dependable direct measurement method to measure tool wear using image-based analysis. Segmentation and thresholding techniques were used to filter and convert the colour image into binary data. The edge detection method was then applied to characterize the edge of the drill bit. Using the cross-correlation method, the edges of the original and worn drill bits were correlated with each other. The cross-correlation graphs were able to detect the worn edge even when the difference between the graphs was small. Future development will focus on quantifying the worn profile as well as enhancing the sensitivity of the technique.
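
    The following Python sketch conveys the flavour of such an image-based comparison: binarise each image, extract a one-dimensional edge profile of the bit silhouette, and compare two profiles with a normalised correlation score. The thresholds, synthetic images and scoring are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def edge_profile(gray, threshold=128):
    """Binarise a grayscale image and return, per column, the row of the first
    foreground pixel as a 1-D profile of the tool edge (-1 if the column is empty)."""
    binary = np.asarray(gray) >= threshold
    profile = binary.argmax(axis=0)
    profile[~binary.any(axis=0)] = -1
    return profile

def profile_correlation(a, b):
    """Normalised (zero-lag) cross-correlation between two edge profiles;
    values well below 1 suggest a change, e.g. wear, between the edges."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic silhouettes: a clean sloped edge and a jagged ("worn") version of it.
cols = np.arange(80)
rows = np.arange(100)[:, None]
new_bit = (rows >= 20 + cols // 4).astype(np.uint8) * 255
worn_bit = (rows >= 20 + cols // 4 + cols % 5).astype(np.uint8) * 255
print(profile_correlation(edge_profile(new_bit), edge_profile(worn_bit)))
```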

  8. Comodulation masking release in bit-rate reduction systems

    DEFF Research Database (Denmark)

    Vestergaard, Martin David; Rasmussen, Karsten Bo; Poulsen, Torben

    1999-01-01

    It has been suggested that the level dependence of the upper masking slope be utilized in perceptual models in bit-rate reduction systems. However, comodulation masking release (CMR) phenomena lead to a reduction of the masking effect when a masker and a probe signal are amplitude modulated...... with the same frequency. In bit-rate reduction systems the masker would be the audio signal and the probe signal would represent the quantization noise. Masking curves have been determined for sinusoids and 1-Bark-wide noise maskers in order to investigate the risk of CMR, when quantizing depths are fixed...... in accordance with psycho-acoustical principles. Masker frequencies of 500 Hz, 1 kHz, and 2 kHz have been investigated, and the masking of pure tone probes has been determined in the first four 1/3 octaves above the masker. Modulation frequencies between 6 and 20 Hz were used with a modulation depth of 0...

  9. Comodulation masking release in bit-rate reduction systems

    DEFF Research Database (Denmark)

    Vestergaard, Martin D.; Rasmussen, Karsten Bo; Poulsen, Torben

    1999-01-01

    It has been suggested that the level dependence of the upper masking slope be utilised in perceptual models in bit-rate reduction systems. However, comodulation masking release (CMR) phenomena lead to a reduction of the masking effect when a masker and a probe signal are amplitude modulated with the same frequency. In bit-rate reduction systems the masker would be the audio signal and the probe signal would represent the quantization noise. Masking curves have been determined for sinusoids and 1-Bark-wide noise maskers in order to investigate the risk of CMR, when quantizing depths are fixed in accordance with psycho-acoustical principles. Masker frequencies of 500 Hz, 1 kHz and 2 kHz have been investigated, and the masking of pure tone probes has been determined in the first four 1/3 octaves above the masker. Modulation frequencies between 6 Hz and 20 Hz were used with a modulation depth of 0.75. CMR of up...

  10. The design method of diamond bit in hardest formation

    International Nuclear Information System (INIS)

    Tian Long

    2010-01-01

    For diamond bits used in formations of the hardest drillability class, traditional design theory calls for a fine, soft matrix and a low diamond concentration. This paper describes a basic design approach that is completely different. Practice has shown the good performance of this bit: when drilling extremely hard formations the penetration rate is high and the bit life has increased considerably, demonstrating good adaptability to such formations. (authors)

  11. Feasibility Study of 8-Bit Microcontroller Applications for Ethernet

    OpenAIRE

    Lech Gulbinovič

    2011-01-01

    A feasibility study of 8-bit microcontroller applications for Ethernet is presented. The designed device is based on an ATmega32 microcontroller and the 10 Mbps Ethernet controller ENC28J60. The device is simulated as a mass queuing theoretical model with a ticket booking counter. Practical measurements are carried out and the device characteristics are determined and compared with the theoretical ones. Program code and device packet processing speed optimization are discussed. Microcontroller packet processin...

  12. SOLAR TRACKER CERDAS DAN MURAH BERBASIS MIKROKONTROLER 8 BIT ATMega8535

    OpenAIRE

    I Wayan Sutaya; Ketut Udy Ariawan

    2016-01-01

    a prototype of a smart solar tracker product based on an 8-bit AVR microcontroller. This solar tracker incorporates an IIR (Infinite Impulse Response) digital filter in its program. Programming this filter requires 32-bit multiplication, whereas the processor available on the microcontroller used is an 8-bit one. This multiplication can only be performed on an 8-bit microcontroller by using assembly language, which is a hardware-level language. The smart solar tracker that uses an 8-...

  13. A short introduction to bit-string physics

    International Nuclear Information System (INIS)

    Noyes, H.P.

    1997-06-01

    This paper starts with a personal memoir of how some significant ideas arose and events took place during the period from 1972, when the author first encountered Ted Bastin, to 1979, when the author proposed the foundation of ANPA. He then discusses program universe, the fine structure paper and its rejection, and the quantitative results up to ANPA 17, and takes a new look at the handy-dandy formula. Following the historical material is a first pass at establishing new foundations for bit-string physics. An abstract model for a laboratory notebook and a historical record are developed, culminating in the bit-string representation. The author sets up a tic-toc laboratory with two synchronized clocks and shows how this can be used to analyze arbitrary incoming data. This allows him to discuss (briefly) finite and discrete Lorentz transformations, commutation relations, and scattering theory. Earlier work on conservation laws in 3- and 4-events and the free space Dirac and Maxwell equations is cited. The paper concludes with a discussion of the quantum gravity problem from his point of view and speculations about how a bit-string theory of strong, electromagnetic, weak and gravitational unification could take shape

  14. Biometric Quantization through Detection Rate Optimized Bit Allocation

    Directory of Open Access Journals (Sweden)

    C. Chen

    2009-01-01

    Full Text Available Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has been focusing on the design of optimal quantization and coding for each single feature component, yet the binary string—concatenation of all coded feature components—is not optimal. In this paper, we present a detection rate optimized bit allocation (DROBA) principle, which assigns more bits to discriminative features and fewer bits to nondiscriminative features. We further propose a dynamic programming (DP) approach and a greedy search (GS) approach to achieve DROBA. Experiments of DROBA on the FVC2000 fingerprint database and the FRGC face database show good performances. As a universal method, DROBA is applicable to arbitrary biometric modalities, such as fingerprint texture, iris, signature, and face. DROBA will bring significant benefits not only to the template protection systems but also to the systems with fast matching requirements or constrained storage capability.

  15. Biometric Quantization through Detection Rate Optimized Bit Allocation

    Science.gov (United States)

    Chen, C.; Veldhuis, R. N. J.; Kevenaar, T. A. M.; Akkermans, A. H. M.

    2009-12-01

    Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has been focusing on the design of optimal quantization and coding for each single feature component, yet the binary string—concatenation of all coded feature components—is not optimal. In this paper, we present a detection rate optimized bit allocation (DROBA) principle, which assigns more bits to discriminative features and fewer bits to nondiscriminative features. We further propose a dynamic programming (DP) approach and a greedy search (GS) approach to achieve DROBA. Experiments of DROBA on the FVC2000 fingerprint database and the FRGC face database show good performances. As a universal method, DROBA is applicable to arbitrary biometric modalities, such as fingerprint texture, iris, signature, and face. DROBA will bring significant benefits not only to the template protection systems but also to the systems with fast matching requirements or constrained storage capability.
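
    The sketch below shows a generic greedy bit-allocation loop in the spirit of DROBA's greedy search: each additional bit goes to the feature component with the largest marginal gain. The gain tables are made-up numbers standing in for per-component detection rates, not the paper's statistics.

```python
def greedy_bit_allocation(gain_tables, total_bits):
    """Greedily spread a bit budget over feature components; gain_tables[i][b] is an
    (assumed) detection-rate proxy for component i when coded with b bits."""
    n = len(gain_tables)
    alloc = [0] * n
    for _ in range(total_bits):
        best, best_gain = None, float("-inf")
        for i in range(n):
            if alloc[i] + 1 < len(gain_tables[i]):
                gain = gain_tables[i][alloc[i] + 1] - gain_tables[i][alloc[i]]
                if gain > best_gain:
                    best, best_gain = i, gain
        if best is None:          # every component already at its maximum depth
            break
        alloc[best] += 1
    return alloc

# Three features with diminishing returns; feature 0 is the most discriminative.
tables = [[0.0, 0.60, 0.80, 0.88], [0.0, 0.40, 0.55, 0.60], [0.0, 0.10, 0.15, 0.17]]
print(greedy_bit_allocation(tables, total_bits=5))   # -> [2, 2, 1]
```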

  16. A single-channel 10-bit 160 MS/s SAR ADC in 65 nm CMOS

    Science.gov (United States)

    Yuxiao, Lu; Lu, Sun; Zhe, Li; Jianjun, Zhou

    2014-04-01

    This paper demonstrates a single-channel 10-bit 160 MS/s successive-approximation-register (SAR) analog-to-digital converter (ADC) in a 65 nm CMOS process with a 1.2 V supply voltage. To achieve high speed, a new window-opening logic based on the asynchronous SAR algorithm is proposed to minimize the logic delay, and a partial set-and-down DAC with binary redundancy bits is presented to reduce the dynamic comparator offset and accelerate the DAC settling. Besides, a new bootstrapped switch with a pre-charge phase is adopted in the track-and-hold circuit to increase speed and reduce area. The presented ADC achieves a 52.9 dB signal-to-noise-and-distortion ratio and a 65 dB spurious-free dynamic range, measured with a 30 MHz input signal at a 160 MHz clock. The power consumption is 9.5 mW and a core die area of 250 × 200 μm² is occupied.
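
    As background, the successive-approximation register itself performs a binary search; the short behavioural Python sketch below converts one sample with an ideal DAC and comparator, ignoring the asynchronous timing, redundancy and bootstrapped sampling techniques that the reported chip relies on.

```python
def sar_adc_ideal(vin, vref=1.2, n_bits=10):
    """Ideal SAR conversion: test one bit per cycle, keeping it whenever the
    trial DAC level does not exceed the held input voltage."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        if vin >= trial * vref / (1 << n_bits):   # comparator decision
            code = trial
    return code

for v in (0.0, 0.3, 0.6, 1.19):
    print(v, sar_adc_ideal(v))
```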

  17. Simulasi Perubahan Energi Per Bit Dan Derau Terhadap Jumlah Kanal Dan Cakupan WCDMA

    Directory of Open Access Journals (Sweden)

    Alfin Hikmaturokhman

    2010-11-01

    Full Text Available The Eb/No parameter is the measure of the signal-to-noise ratio for a digital communication system. It is measured at the input to the receiver and is used as the basic measure of how strong the signal is; in other words, Eb/No indicates the fluctuation of the received signal strength at the receiver. Eb/No is affected by several factors, such as the speed of the mobile station, the propagation environment and the bit rate. Variations of the Eb/No value will affect the number of offered channels and the coverage in WCDMA. The impact of the variation of the Eb/No value can be recognized in the results of the calculations. The purpose of this research is to build simulation models, using Delphi, to view and analyze the influence of Eb/No on the total number of channels and on WCDMA coverage. The simulation results showed that the larger the Eb/No and the bit rate used, the smaller the number of channels offered and the lower the BS sensitivity, which means that the offered traffic load will also be small; this leads to better system quality, and the MS transmit power becomes lower in order to maintain the Eb/No value and avoid dropped calls.

  18. A High Resolution Switched Capacitor 1bit Sigma-Delta Modulator for Low-Voltage/Low-Power Applications

    DEFF Research Database (Denmark)

    Furst, Claus Efdmann

    1996-01-01

    A high resolution 1-bit Sigma-Delta modulator for low power/low voltage applications is presented. The modulator operates at a supply of 1-1.5 V; the current drain is 0.1 mA. The maximum resolution is 87 dB, equivalent to 14 bits of resolution. This is achieved with a signal band of 5 kHz, an over-sampling ratio (OSR) of 128 and a sampling frequency of 1.28 MHz. The very low power consumption is achieved by using a new type of efficient class AB amplifiers in a fully differential configuration. The modulator is implemented in a 0.7 micron n-well CMOS technology. Optimisation details concerning modulator...
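
    To illustrate the basic principle only, the Python sketch below models a first-order, one-bit sigma-delta loop followed by a crude averaging decimator; the modulator reported above is a higher-order, fully differential switched-capacitor design, so the numbers here are purely illustrative.

```python
import numpy as np

def first_order_sigma_delta(x):
    """One-bit, first-order sigma-delta modulator: an integrator followed by a 1-bit
    quantiser whose decision is fed back and subtracted from the input."""
    v, bits = 0.0, np.empty(x.size)
    for n, sample in enumerate(x):
        y = 1.0 if v >= 0.0 else -1.0     # 1-bit quantiser decision
        bits[n] = y
        v += sample - y                   # integrate the quantisation error
    return bits

fs, f_in, osr = 1.28e6, 1.0e3, 128
t = np.arange(4096) / fs
bits = first_order_sigma_delta(0.5 * np.sin(2 * np.pi * f_in * t))
# Crude decimation: average blocks of OSR one-bit samples to recover the low-rate signal.
decimated = bits[: bits.size // osr * osr].reshape(-1, osr).mean(axis=1)
print(decimated[:8])
```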

  19. Surgical drill system and surgical drill bit to be used therein

    NARCIS (Netherlands)

    Margallo Balbas, E.; Wieringa, P.A.; French, P.J.; Lee, R.A.; Breedveld, P.

    2007-01-01

    Surgical drill system comprising a mechanical drill bit and means for imaging the vicinity of the drill bit tip, said means comprising: at least one optical fiber having a distal end and a proximal end, said distal end being located adjacent said drill bit tip, an optical processing unit, said

  20. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by taking the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in the proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of the identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors

  1. PERFORMANCE OF OPPORTUNISTIC SPECTRUM ACCESS WITH SENSING ERROR IN COGNITIVE RADIO AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    N. ARMI

    2012-04-01

    Full Text Available Sensing in opportunistic spectrum access (OSA) is responsible for detecting the availability of a channel by performing a binary hypothesis test between busy and idle states. If the channel is busy, the secondary user (SU) cannot access it and refrains from data transmission. The SU is allowed to access the channel when the primary user (PU) does not use it (idle state). However, the channel is sensed over an imperfect communication link: fading, noise and any obstacles present can cause sensing errors in PU signal detection. A false alarm detects an idle state as a busy channel, while miss-identification detects a busy state as an idle channel. False detection makes the SU refrain from transmission and reduces the number of bits transmitted; miss-identification, on the other hand, causes the SU to collide with the PU transmission. This paper studies the performance of OSA based on the greedy approach with sensing errors, under a restriction on the maximum collision probability allowed (collision threshold) by the PU network. The throughput of the SU and the spectrum capacity metric are used to evaluate OSA performance, and comparisons are made with the case without sensing errors as a function of the number of slots, based on the greedy approach. The relations between throughput and signal-to-noise ratio (SNR) for different collision probabilities, as well as false detection for different SNRs, are presented. The obtained results show that CR users can gain the reward from the previous slot both with and without sensing errors, as indicated by the throughput improvement as the slot number increases. However, sensing an imperfect channel with sensing errors can degrade the throughput performance. The throughput of the SU and the spectrum capacity also improve with an increasing maximum collision probability allowed by the PU network; however, due to frequent collisions with the PU, they decrease beyond a certain value of the collision threshold. Computer simulation is used to evaluate and validate this work.

  2. Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo

    2009-01-01

    We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER...

  3. Effects of Error Messages on a Student's Ability to Understand and Fix Programming Errors

    Science.gov (United States)

    Beejady Murthy Kadekar, Harsha Kadekar

    Assemblers and compilers provide feedback to a programmer in the form of error messages. These error messages become input to the debugging model of the programmer. For the programmer to fix an error, they should first locate the error in the program, understand what is causing that error, and finally resolve that error. Error messages play an important role in all three stages of fixing errors. This thesis studies the effects of error messages in the context of teaching programming. Given an error message, this work investigates how it affects a student's way of 1) understanding the error, and 2) fixing the error. As part of the study, three error message types were developed (Default, Link, and Example) to better understand the effects of error messages. The Default type provides an assembler-centric single-line error message, the Link type provides a program-centric detailed error description with a hyperlink for more information, and the Example type provides a program-centric detailed error description with a relevant example. All these error message types were developed for assembly language programming. A think-aloud programming exercise was conducted as part of the study to capture the student programmer's knowledge model. Different codes were developed to analyze the data collected as part of the think-aloud exercise. After transcribing, coding, and analyzing the data, it was found that the Link type of error message helped to fix the error in less time and with fewer steps. Among the three types, the Link type of error message also resulted in a significantly higher ratio of correct to incorrect steps taken by the programmer to fix the error.

  4. A 6-bit 4 GS/s pseudo-thermometer segmented CMOS DAC

    Science.gov (United States)

    Yijun, Song; Wenyuan, Li

    2014-06-01

    A 6-bit 4 GS/s, high-speed and power-efficient DAC for ultra-high-speed transceivers in 60 GHz band millimeter wave technology is presented. A novel pseudo-thermometer architecture is proposed to realize a good compromise between fast conversion speed and chip area. Symmetrical and compact floor planning and layout techniques, including tree-like routing, cross-quading and the common-centroid method, are adopted to guarantee that the chip is fully functional up to near-Nyquist frequency in a standard 0.18 μm CMOS process. Post-simulation results corroborate the feasibility of the designed DAC, which exhibits good static and dynamic linearity without calibration. DNL and INL errors can be controlled within ±0.28 LSB and ±0.26 LSB, respectively. The SFDR at a 4 GHz clock frequency for a 1.9 GHz near-Nyquist sinusoidal output signal is 40.83 dB, and the power dissipation is less than 37 mW.

  5. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong

    2011-11-25

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.

  6. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  7. The digital agenda of virtual currencies: Can BitCoin become a global currency?

    OpenAIRE

    CIAIAN PAVEL; RAJCANIOVA MIROSLAVA; KANCS D'ARTIS

    2015-01-01

    This paper identifies and analyzes BitCoin features which may facilitate BitCoin to become a global currency, as well as characteristics which may impede the use of BitCoin as a medium of exchange, a unit of account and a store of value, and compares BitCoin with standard currencies with respect to the main functions of money. Among all analyzed BitCoin features, the extreme price volatility stands out most clearly compared to standard currencies. In order to understand the reasons for such e...

  8. Annual Report: Support Research for Development of Improved Geothermal Drill Bits

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, R.R.; Winzenried, R.W.; Jones, A.H.; Green, S.J.

    1978-07-01

    The work reported herein is a continuation of the program initiated under DOE contract E(10-1)-1546, entitled "Program to Design and Experimentally Test an Improved Geothermal Bit"; the program is now DOE Contract EG-76-C-1546. The objective of the program has been to accelerate the commercial availability of a rolling cutter drill bit for geothermal applications. Data and experimental tests needed to develop a bit suited to the harsh thermal, abrasive, and chemical environment of the more problematic geothermal wells, including those drilled with air, have been obtained. Efforts were directed at the improvement of both the sealed (lubricated) and unsealed types of bits. The unsealed bit effort included determination of the rationale for materials selection, the selection of steels for the bit body, cutters, and bearings, the selection of tungsten carbide alloys for the friction bearing, and preliminary investigation of optimized tungsten carbide drilling inserts. Bits built with the new materials were tested under simulated wellbore conditions. The sealed bit effort provided for the evaluation of candidate high temperature seals and lubricants, utilizing two specially developed test apparatus which simulate the conditions found in a sealed bit operating in a geothermal wellbore. Phase I of the program was devoted largely to (1) the study of the geothermal environment and the failure mechanisms of existing geothermal drill bits, (2) the design and construction of separate facilities for testing both drill-bit seals and full-scale drill bits under simulated geothermal drilling conditions, and (3) fabrication of the MK-I research drill bits from high-temperature steels, and testing in the geothermal drill-bit test facility. The work accomplished in Phase I is reported in References 1 through 9. In Phase II, the first-generation experimental bits were tested in the geothermal drill-bit test facility. Test results indicated that hardness retention at temperature

  9. Universality and clustering in 1 + 1 dimensional superstring-bit models

    International Nuclear Information System (INIS)

    Bergman, O.; Thorn, C.B.

    1996-01-01

    We construct a 1+1 dimensional superstring-bit model for the D=3 Type IIB superstring. This low-dimensional model escapes the problems encountered in higher-dimensional models: (1) it possesses full Galilean supersymmetry; (2) for noninteracting polymers of bits, the exactly soluble linear superpotential describing bit interactions is in a large universality class of superpotentials which includes ones bounded at spatial infinity; (3) the latter are used to construct a superstring-bit model with the clustering properties needed to define an S-matrix for closed polymers of superstring-bits.

  10. Error Propagation in Equations for Geochemical Modeling of ...

    Indian Academy of Sciences (India)

    This paper presents error propagation equations for modeling of radiogenic isotopes during mixing of two components or end-members. These equations can be used to estimate errors on an isotopic ratio in the mixture of two components, as a function of the analytical errors or the total errors of geological field sampling ...

  11. A 10-bit 120-MS/s pipelined ADC with improved switch and layout scaling strategy

    International Nuclear Information System (INIS)

    Zhou Jia; Xu Lili; Li Fule; Wang Zhihua

    2015-01-01

    A 10-bit, 120 MS/s two-channel pipelined analog-to-digital converter (ADC) is presented. The ADC features an improved switch that uses the body effect to improve its conduction performance. A scaling-down strategy is proposed to make the layout design of the OTAs more efficient. Implemented in a 0.18 μm CMOS technology, the ADC prototype occupies an area of 2.05 × 1.83 mm². With a sampling rate of 120 MS/s and a 4.9 MHz input, the ADC achieves a spurious-free dynamic range of 74.32 dB and a signal-to-noise-and-distortion ratio of 55.34 dB, while consuming 220 mW/channel from a 3 V supply. (paper)

  12. Utilization of multiple read heads for TMR prediction and correction in bit-patterned media recording

    Directory of Open Access Journals (Sweden)

    W. Busyatras

    2017-05-01

    Full Text Available This paper proposes a utilization of multiple read heads to predict and correct track mis-registration (TMR) in bit-patterned media recording (BPMR) based on the readback signals. We propose to use the signal energy ratio between the upper and lower tracks from multiple read heads to estimate the TMR level. Then, a pair consisting of a two-dimensional (2D) target and its corresponding 2D equalizer associated with the estimated TMR is chosen to correct the TMR in the data detection process. Numerical results show that the proposed system can achieve a very high accuracy of TMR prediction, thus performing better than the conventional system, especially when TMR is severe.
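
    The energy-ratio idea can be sketched as follows; the calibration table and signal model here are hypothetical and only illustrate how a measured upper/lower energy ratio might be mapped to a TMR level and hence to a pre-designed 2D target/equalizer pair.

        # Illustrative sketch of TMR estimation from the readback energy ratio.
        # The calibration values below are invented for illustration only.
        import numpy as np

        CALIBRATION = {0: 1.00, 5: 1.15, 10: 1.35, 15: 1.60, 20: 1.95}  # TMR (%) -> expected ratio

        def estimate_tmr(upper_readback: np.ndarray, lower_readback: np.ndarray) -> int:
            """Return the calibrated TMR level whose energy ratio best matches the data."""
            ratio = np.sum(upper_readback ** 2) / np.sum(lower_readback ** 2)
            return min(CALIBRATION, key=lambda tmr: abs(CALIBRATION[tmr] - ratio))

        # The estimated level would then select the matching 2D target/equalizer pair.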

  13. Heat Generation During Bone Drilling: A Comparison Between Industrial and Orthopaedic Drill Bits.

    Science.gov (United States)

    Hein, Christopher; Inceoglu, Serkan; Juma, David; Zuckerman, Lee

    2017-02-01

    Cortical bone drilling in preparation for screw placement is common in multiple surgical fields. The heat generated while drilling may reach thresholds high enough to cause osteonecrosis, which can compromise implant stability. Orthopaedic drill bits are several orders of magnitude more expensive than their similarly sized, publicly available industrial counterparts. We hypothesized that an industrial bit would generate less heat during drilling, and that the bits would not generate more heat after multiple cortical passes. We compared four 4.0 mm orthopaedic drill bits and one 3.97 mm industrial drill bit. Three of each type of bit were drilled into porcine femoral cortices 20 times, and the temperature of the bone was measured with thermocouple transducers. The heat generated during the first 5 drill cycles for each bit was compared to the last 5 cycles. These data were analyzed with analysis of covariance. The industrial drill bit generated the smallest mean increase in temperature (2.8 ± 0.29°C). The industrial bit generated less heat during drilling than its orthopaedic counterparts, and the bits maintained their performance after 20 drill cycles. Consideration should be given by manufacturers to design differences that may contribute to a more efficient cutting bit. Further investigation into the reuse of these drill bits may be warranted, as our data suggest their efficiency is maintained after multiple uses.

  14. A preliminary study on the containment building integrity following BIT removal for nuclear power plant

    International Nuclear Information System (INIS)

    Jo, Jong Young; Song, Dong Soo; Byun, Choong Sub

    2008-01-01

    The Boron Injection Tank (BIT) is a component of the Safety Injection System whose sole function is to provide concentrated boric acid to the reactor coolant in order to mitigate the consequences of postulated main steamline break accidents. Although the BIT plays an important role in mitigating the accident, the high boron concentration of 20,000 ppm causes valve leakage and clogging by precipitation, and continuous heat tracing has to be provided. To support removal of the BIT, a benchmarking analysis is performed between the COPATTA code used in the final safety analysis report and the CONTEMPT code used in this study; CONTEMPT agrees well with COPATTA. The sensitivity study for containment integrity is performed for three cases of a full double-ended rupture at 102% power with diesel generator failure: a 3.4 m³ BIT at 2400 ppm, a 3.4 m³ BIT at 0 ppm, and no BIT volume. The results show that deactivation of the BIT is plausible.

  15. A 110mW, 0.04mm2, 11GS/s 9-bit interleaved DAC in 28nm FDSOI with >50dB SFDR across Nyquist

    NARCIS (Netherlands)

    Olieman, E.; Annema, Anne J.; Nauta, Bram

    2014-01-01

    A 9-bit 11 GS/s current-steering (CS) digital-to-analog converter (DAC) is designed in 28 nm FDSOI. The DAC uses two-times interleaving to suppress the effects of the main error mechanisms of CS DACs, while its clock timing can be tuned via the back-gate bias voltage of the multiplexer transistors.

  16. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
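
    As one concrete instance of protecting only the small encrypted portion with a short block code, here is a standard Hamming(7,4) encoder/decoder in Python; this is a textbook construction used for illustration, not the authors' implementation, and the packing of ciphertext bits into 4-bit blocks is an assumption.

        # Minimal Hamming(7,4) sketch: corrects any single bit error per 7-bit codeword.
        import numpy as np

        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])     # generator matrix (4 data bits | 3 parity bits)
        H = np.array([[1,1,0,1,1,0,0],
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])     # parity-check matrix

        def encode(nibble):                 # nibble: array of 4 data bits
            return np.dot(nibble, G) % 2

        def decode(word):                   # word: array of 7 received bits
            syndrome = np.dot(H, word) % 2
            if syndrome.any():              # non-zero syndrome -> flip the indicated bit
                idx = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
                word = word.copy()
                word[idx] ^= 1
            return word[:4]                 # the first 4 bits carry the data

        data = np.array([1, 0, 1, 1])
        received = encode(data)
        received[5] ^= 1                    # inject a single bit error
        assert np.array_equal(decode(received), data)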

  17. Power analysis data set for 4-Bit MOCLA adder.

    Science.gov (United States)

    Nehru, K

    2018-02-01

    In order to reduce the silicon area of the chip and optimize the power of arithmetic circuits, this paper proposes a low power carry look-ahead BCD (Binary Coded Decimal) adder which uses a four bit MOCLA (Multiplexer and Or gate based Carry Look Ahead Adder) that forms the basic building block. This proposed MOCLA style uses a 2 input MUX, OR gate and GDI (Gate Diffusion Input) based full adder and PG units and it is used for achieving low power in BCD adder circuits.

  18. Distribution of digital games via BitTorrent

    DEFF Research Database (Denmark)

    Drachen, Anders; Bauer, Kevin; Veitch, Robert W. D.

    2011-01-01

    The practice of illegally copying and distributing digital games is at the heart of one of the most heated and divisive debates in the international games environment. Despite the substantial interest in game piracy, there is very little objective information available about its magnitude or its distribution across game titles and game genres. This paper presents the first large-scale, open-method analysis of the distribution of digital game titles, which was conducted by monitoring the BitTorrent peer-to-peer (P2P) file-sharing protocol. The sample includes 173 games and a collection period of three...

  19. "Material interactions": from atoms & bits to entangled practices

    DEFF Research Database (Denmark)

    Vallgårda, Anna

    This panel addresses some of the core aspects of the theme "It's the experience", for the CHI2012 conference by focusing on the materials that constitute the foundation for interaction with computers. We take a series of questions as a joint point of departure to consider the nature and character...... and intellectually stimulating panel moderated by Prof. Mikael Wiberg consisting of a number of scholars with a well-developed view on digital materialities to fuel a discussion on material interactions - from atoms & bits to entangled practices. These scholars include: Prof. Hiroshi Ishii, Prof. Paul Dourish...

  20. Extending Landauer's bound from bit erasure to arbitrary computation

    Science.gov (United States)

    Wolpert, David

    The minimal thermodynamic work required to erase a bit, known as Landauer's bound, has been extensively investigated both theoretically and experimentally. However, when viewed as a computation that maps inputs to outputs, bit erasure has a very special property: the output does not depend on the input. Existing analyses of the thermodynamics of bit erasure implicitly exploit this property, and thus cannot be directly extended to analyze the computation of arbitrary input-output maps. Here we show how to extend these earlier analyses of bit erasure to analyze the thermodynamics of arbitrary computations. Doing this establishes a formal connection between the thermodynamics of computers and much of theoretical computer science. We use this extension to analyze the thermodynamics of the canonical "general purpose computer" considered in computer science theory: a universal Turing machine (UTM). We consider a UTM which maps input programs to output strings, where inputs are drawn from an ensemble of random binary sequences, and prove: i) The minimal work needed by a UTM to run some particular input program X and produce output Y is the Kolmogorov complexity of Y minus the log of the "algorithmic probability" of Y. This minimal amount of thermodynamic work has a finite upper bound, which is independent of the output Y, depending only on the details of the UTM. ii) The expected work needed by a UTM to compute some given output Y is infinite. As a corollary, the overall expected work to run a UTM is infinite. iii) The expected work needed by an arbitrary Turing machine T (not necessarily universal) to compute some given output Y can either be infinite or finite, depending on Y and the details of T. To derive these results we must combine ideas from nonequilibrium statistical physics with fundamental results from computer science, such as Levin's coding theorem and other theorems about universal computation. I would like to acknowledge the Santa Fe Institute, Grant No
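
    Stated symbolically, result (i) reads as below; the abstract gives the bound only up to the thermodynamic conversion factor, so the k_B T ln 2 prefactor and the base-2 logarithm are assumptions about units.

        % Minimal work for a UTM U to run program X and produce output Y (result i);
        % K_U is the Kolmogorov complexity relative to U, P_U the algorithmic
        % probability, and \ell(x) the length of program x.
        W_{\min}(X \to Y) \;=\; k_B T \ln 2 \, \big[\, K_U(Y) - \log_2 P_U(Y) \,\big],
        \qquad
        P_U(Y) \;=\; \sum_{x \,:\, U(x) = Y} 2^{-\ell(x)}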

  1. Power analysis data set for 4-Bit MOCLA adder

    Directory of Open Access Journals (Sweden)

    K. Nehru

    2018-02-01

    Full Text Available In order to reduce the silicon area of the chip and optimize the power of arithmetic circuits, this paper proposes a low power carry look-ahead BCD (Binary Coded Decimal) adder which uses a four-bit MOCLA (Multiplexer and OR gate based Carry Look-Ahead Adder) as the basic building block. The proposed MOCLA style uses a 2-input MUX, an OR gate, and GDI (Gate Diffusion Input) based full adder and PG units, and it is used for achieving low power in BCD adder circuits.
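
    The carry look-ahead principle underlying the MOCLA block can be sketched in a few lines of Python; this is generic textbook CLA logic, not the MOCLA/GDI gate-level circuit proposed in the paper.

        # Generic 4-bit carry look-ahead adder: carries are computed from per-bit
        # generate/propagate terms instead of rippling through full adders.
        def cla_add4(a: int, b: int, cin: int = 0):
            a_bits = [(a >> i) & 1 for i in range(4)]
            b_bits = [(b >> i) & 1 for i in range(4)]
            g = [x & y for x, y in zip(a_bits, b_bits)]      # generate:  Gi = Ai AND Bi
            p = [x ^ y for x, y in zip(a_bits, b_bits)]      # propagate: Pi = Ai XOR Bi
            c = [cin]
            for i in range(4):                               # Ci+1 = Gi OR (Pi AND Ci)
                c.append(g[i] | (p[i] & c[i]))
            s = [p[i] ^ c[i] for i in range(4)]              # sum bits
            return sum(bit << i for i, bit in enumerate(s)), c[4]

        assert cla_add4(9, 7) == (0, 1)     # 9 + 7 = 16: 4-bit sum 0, carry-out 1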

  2. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  3. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process

  4. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  5. Device-independent bit commitment based on the CHSH inequality

    International Nuclear Information System (INIS)

    Aharon, N; Massar, S; Pironio, S; Silman, J

    2016-01-01

    Bit commitment and coin flipping occupy a unique place in the device-independent landscape, as the only device-independent protocols thus far suggested for these tasks are reliant on tripartite GHZ correlations. Indeed, we know of no other bipartite tasks, which admit a device-independent formulation, but which are not known to be implementable using only bipartite nonlocality. Another interesting feature of these protocols is that the pseudo-telepathic nature of GHZ correlations—in contrast to the generally statistical character of nonlocal correlations, such as those arising in the violation of the CHSH inequality—is essential to their formulation and analysis. In this work, we present a device-independent bit commitment protocol based on CHSH testing, which achieves the same security as the optimal GHZ-based protocol, albeit at the price of fixing the time at which Alice reveals her commitment. The protocol is analyzed in the most general settings, where the devices are used repeatedly and may have long-term quantum memory. We also recast the protocol in a post-quantum setting where both honest and dishonest parties are restricted only by the impossibility of signaling, and find that overall the supra-quantum structure allows for greater security. (paper)
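
    For reference, the CHSH test that the protocol relies on bounds the correlator as follows (a standard statement, not specific to this paper):

        % CHSH inequality: local hidden-variable bound 2, quantum (Tsirelson) bound 2*sqrt(2)
        S \;=\; \big| E(a,b) + E(a,b') + E(a',b) - E(a',b') \big| \;\le\; 2
        \quad\text{(classical)},
        \qquad
        S \;\le\; 2\sqrt{2} \quad\text{(quantum)}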

  6. Guaranteed energy-efficient bit reset in finite time.

    Science.gov (United States)

    Browne, Cormac; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko

    2014-09-05

    Landauer's principle states that it costs at least k_B T ln 2 of work to reset one bit in the presence of a heat bath at temperature T. The bound of k_B T ln 2 is achieved in the unphysical infinite-time limit. Here we ask what is possible if one is restricted to finite-time protocols. We prove analytically that it is possible to reset a bit with a work cost close to k_B T ln 2 in a finite time. We construct an explicit protocol that achieves this, which involves thermalizing and changing the system's Hamiltonian so as to avoid quantum coherences. Using concepts and techniques pertaining to single-shot statistical mechanics, we furthermore prove that the heat dissipated is exponentially close to the minimal amount possible not just on average, but guaranteed with high confidence in every run. Moreover, we exploit the protocol to design a quantum heat engine that works near the Carnot efficiency in finite time.
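
    The bound referred to above is the standard Landauer relation:

        % Landauer's bound for erasing one bit in contact with a bath at temperature T;
        % equality is reached only in the quasistatic (infinite-time) limit.
        \langle W \rangle \;\ge\; k_B T \ln 2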

  7. Reexamination of quantum bit commitment: The possible and the impossible

    International Nuclear Information System (INIS)

    D'Ariano, Giacomo Mauro; Kretschmann, Dennis; Schlingemann, Dirk; Werner, Reinhard F.

    2007-01-01

    Bit commitment protocols whose security is based on the laws of quantum mechanics alone are generally held to be impossible. We give a strengthened and explicit proof of this result. We extend its scope to a much larger variety of protocols, which may have an arbitrary number of rounds, in which both classical and quantum information is exchanged, and which may include aborts and resets. Moreover, we do not consider the receiver to be bound to a fixed 'honest' strategy, so that 'anonymous state protocols', which were recently suggested as a possible way to beat the known no-go results, are also covered. We show that any concealing protocol allows the sender to find a cheating strategy, which is universal in the sense that it works against any strategy of the receiver. Moreover, if the concealing property holds only approximately, the cheat goes undetected with a high probability, which we explicitly estimate. The proof uses an explicit formalization of general two-party protocols, which is applicable to more general situations, and an estimate about the continuity of the Stinespring dilation of a general quantum channel. The result also provides a natural characterization of protocols that fall outside the standard setting of unlimited available technology and thus may allow secure bit commitment. We present such a protocol whose security, perhaps surprisingly, relies on decoherence in the receiver's laboratory

  8. Learning may need only a few bits of synaptic precision

    Science.gov (United States)

    Baldassi, Carlo; Gerace, Federica; Lucibello, Carlo; Saglietti, Luca; Zecchina, Riccardo

    2016-05-01

    Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states. The choice of discrete synapses is motivated by biological reasoning and experiments, and possibly by hardware implementation considerations as well. In this paper we extend a previous large deviations analysis which unveiled the existence of peculiar dense regions in the space of synaptic states which accounts for the possibility of learning efficiently in networks with binary synapses. We extend the analysis to synapses with multiple states and generally more plausible biological features. The results clearly indicate that the overall qualitative picture is unchanged with respect to the binary case, and very robust to variation of the details of the model. We also provide quantitative results which suggest that the advantages of increasing the synaptic precision (i.e., the number of internal synaptic states) rapidly vanish after the first few bits, and therefore that, for practical applications, only a few bits may be needed for near-optimal performance, consistent with recent biological findings. Finally, we demonstrate how the theoretical analysis can be exploited to design efficient algorithmic search strategies.

  9. Reexamination of quantum bit commitment: The possible and the impossible

    Science.gov (United States)

    D'Ariano, Giacomo Mauro; Kretschmann, Dennis; Schlingemann, Dirk; Werner, Reinhard F.

    2007-09-01

    Bit commitment protocols whose security is based on the laws of quantum mechanics alone are generally held to be impossible. We give a strengthened and explicit proof of this result. We extend its scope to a much larger variety of protocols, which may have an arbitrary number of rounds, in which both classical and quantum information is exchanged, and which may include aborts and resets. Moreover, we do not consider the receiver to be bound to a fixed “honest” strategy, so that “anonymous state protocols,” which were recently suggested as a possible way to beat the known no-go results, are also covered. We show that any concealing protocol allows the sender to find a cheating strategy, which is universal in the sense that it works against any strategy of the receiver. Moreover, if the concealing property holds only approximately, the cheat goes undetected with a high probability, which we explicitly estimate. The proof uses an explicit formalization of general two-party protocols, which is applicable to more general situations, and an estimate about the continuity of the Stinespring dilation of a general quantum channel. The result also provides a natural characterization of protocols that fall outside the standard setting of unlimited available technology and thus may allow secure bit commitment. We present such a protocol whose security, perhaps surprisingly, relies on decoherence in the receiver’s laboratory.

  10. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  11. Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance.

    Science.gov (United States)

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2011-12-01

    The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.
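
    A minimal sketch of the fusion idea is given below; the particular fragile-bit distance definition (a Jaccard-style dissimilarity of the fragile-bit masks) and the fusion weight are illustrative assumptions, not the authors' exact formulas.

        # Sketch of fusing Hamming distance with a fragile-bit distance for iris codes.
        import numpy as np

        def hamming_distance(code_a, code_b, valid_mask):
            usable = valid_mask.astype(bool)
            return np.mean(code_a[usable] != code_b[usable])

        def fragile_bit_distance(fragile_a, fragile_b):
            # Low when the two codes flag fragile bits in the same locations.
            overlap = np.sum(fragile_a & fragile_b)
            union = np.sum(fragile_a | fragile_b)
            return 1.0 - overlap / union if union else 0.0

        def fused_score(code_a, code_b, fragile_a, fragile_b, valid_mask, alpha=0.7):
            hd = hamming_distance(code_a, code_b, valid_mask)
            fbd = fragile_bit_distance(fragile_a, fragile_b)
            return alpha * hd + (1 - alpha) * fbd    # lower score -> more likely the same eye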

  12. Energy-Efficient Hybrid Spintronic-Straintronic Nonvolatile Reconfigurable Equality Bit Comparator

    Science.gov (United States)

    Biswas, Ayan K.; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo

    We propose and analyze a “spintronic/straintronic” reconfigurable equality bit comparator implemented with a nanowire spin valve whose two contacts are two-phase multiferroic nanomagnets and possess bistable magnetization. A reference bit is “written” into a stable magnetization state of one contact and an input bit in that of the other with electrically generated strain. The spin-valve’s resistance is lowered (raised) if the bits match (do not match). Multiple comparators can be interfaced in parallel with a magneto-tunneling junction to determine if an N-bit input stream matches an N-bit reference stream bit by bit. The system is robust against thermal noise at room temperature and a 16-bit comparator can operate at ~743 MHz while dissipating ~28 fJ per cycle. This implementation is more energy-efficient than CMOS-based implementations and the reference bits can be stored in the comparator itself without the need for refresh cycles or the need to fetch them from a remote memory for comparison. That improves reliability, speed and security.

  13. Dual-Hop FSO Transmission Systems over Gamma-Gamma Turbulence with Pointing Errors

    KAUST Repository

    Zedini, Emna

    2016-11-18

    In this paper, we analyze the end-to-end performance of dual-hop free-space optical (FSO) fixed gain relaying systems under heterodyne detection and intensity modulation with direct detection techniques in the presence of atmospheric turbulence as well as pointing errors. In particular, we derive the cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) in exact closed-form in terms of the bivariate Fox’s H function. Capitalizing on this CDF expression, novel closed-form expressions for the outage probability, the average bit-error rate (BER) for different modulation schemes, and the ergodic capacity of dual-hop FSO transmission systems are presented. Moreover, we present very tight asymptotic results for the outage probability and the average BER at high SNR regime in terms of simple elementary functions and we derive the diversity order of the considered system. By using dual-hop FSO relaying, we demonstrate a better system performance as compared to the single FSO link. Numerical and Monte-Carlo simulation results are provided to verify the accuracy of the newly proposed results, and a perfect agreement is observed.

  14. Multihop communications over CSI-assisted relay IM/DD FSO systems with pointing errors

    KAUST Repository

    Zedini, Emna

    2015-09-14

    In this paper, the end-to-end performance of a multihop free-space optical system with amplify-and-forward channel-state-information-assisted relays using the intensity modulation with direct detection technique over Gamma-Gamma turbulence fading with pointing error impairments is studied. More specifically, novel closed-form expressions for the moment generating function, the cumulative distribution function, and the probability density function of the end-to-end signal-to-noise ratio (SNR) are derived in terms of the Meijer G function. Based on these formulas, closed-form bounds for the outage probability, the average bit-error rate (BER) of a variety of modulation schemes, the moments, and the ergodic capacity are presented. Furthermore, by using the asymptotic expansion of the Meijer G function at high SNR, accurate asymptotic results are introduced for the outage probability, the average BER and the ergodic capacity in terms of simple elementary functions. For the capacity, novel asymptotic results at low and high SNR regimes are also derived through the moments. All the analytical results are verified via computer-based Monte Carlo simulations.

  15. Learning from Errors.

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-03

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the beneficial effects are particularly salient when individuals strongly believe that their error is correct: Errors committed with high confidence are corrected more readily than low-confidence errors. Corrective feedback, including analysis of the reasoning leading up to the mistake, is crucial. Aside from the direct benefit to learners, teachers gain valuable information from errors, and error tolerance encourages students' active, exploratory, generative engagement. If the goal is optimal performance in high-stakes situations, it may be worthwhile to allow and even encourage students to commit and correct errors while they are in low-stakes learning situations rather than to assiduously avoid errors at all costs.

  16. Implementation of RSA 2048-bit and AES 256-bit with Digital Signature for Secure Electronic Health Record Application

    Directory of Open Access Journals (Sweden)

    Mohamad Ali Sadikin

    2016-10-01

    Full Text Available This research addresses the implementation of encryption and digital signature techniques for electronic health records to prevent cybercrime such as theft, modification and unauthorised access. In this research, the RSA 2048-bit algorithm, AES 256-bit and SHA-256 are implemented in the Java programming language. The Secure Electronic Health Record Information (SEHR) application design is intended to combine the given services, namely confidentiality, integrity, authentication, and non-repudiation. Cryptography is used to ensure that the file records and electronic documents containing detailed information on the medical past, present and future forecasts are available only to the intended patients. The documents are encrypted using an encryption algorithm based on a NIST standard. In the application, there are two schemes, namely the protection and verification schemes. This research uses black-box testing and white-box testing to test the software input, output, and code without testing the process and design that occur in the system. We demonstrate the implementation of cryptography in SEHR. The implementation of encryption and digital signatures in this research can prevent record theft.
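
    A hedged sketch of the sign-then-encrypt flow described above is shown below in Python, using the pyca/cryptography package; the choice of AES-GCM as the AES-256 mode, PSS padding, and the exact message layout are illustrative assumptions rather than the paper's scheme.

        # Sign a health record with RSA-2048/SHA-256, then encrypt record+signature with AES-256-GCM.
        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()
        pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

        record = b"patient record ..."                                # placeholder plaintext
        signature = private_key.sign(record, pss, hashes.SHA256())   # 256-byte signature for a 2048-bit key

        aes_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, record + signature, None)

        # Verification side: decrypt, split off the signature, and verify it.
        plaintext = AESGCM(aes_key).decrypt(nonce, ciphertext, None)
        data, sig = plaintext[:-256], plaintext[-256:]
        public_key.verify(sig, data, pss, hashes.SHA256())            # raises InvalidSignature on tampering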

  17. Reproducibility of isotope ratio measurements

    International Nuclear Information System (INIS)

    Elmore, D.

    1981-01-01

    The use of an accelerator as part of a mass spectrometer has improved the sensitivity for measuring low levels of long-lived radionuclides by several orders of magnitude. However, the complexity of a large tandem accelerator and beam transport system has made it difficult to match the precision of low energy mass spectrometry. Although uncertainties for accelerator measured isotope ratios as low as 1% have been obtained under favorable conditions, most errors quoted in the literature for natural samples are in the 5 to 20% range. These errors are dominated by statistics and generally the reproducibility is unknown since the samples are only measured once

  18. Scaling vectors of attoJoule per bit modulators

    Science.gov (United States)

    Sorger, Volker J.; Amin, Rubab; Khurgin, Jacob B.; Ma, Zhizhen; Dalir, Hamed; Khan, Sikandar

    2018-01-01

    Electro-optic modulation performs the conversion between the electrical and optical domain with applications in data communication for optical interconnects, but also for novel optical computing algorithms such as providing nonlinearity at the output stage of optical perceptrons in neuromorphic analog optical computing. While resembling an optical transistor, the weak light-matter-interaction makes modulators 10⁵ times larger compared to their electronic counterparts. Since the clock frequency for photonics on-chip has a power-overhead sweet-spot around tens of GHz, ultrafast modulation may only be required in long-distance communication, not for short on-chip links. Hence, the search is open for power-efficient on-chip modulators beyond the solutions offered by foundries to date. Here, we show scaling vectors towards atto-Joule per bit efficient modulators on-chip as well as some experimental demonstrations of novel plasmonic modulators with sub-fJ/bit efficiencies. Our parametric study of placing different actively modulated materials into plasmonic versus photonic optical modes shows that 2D materials overcompensate their miniscule modal overlap by their unity-high index change. Furthermore, we reveal that the metal used in plasmonic-based modulators not only serves as an electrical contact, but also enables low electrical series resistances leading to near-ideal capacitors. We then discuss the first experimental demonstration of a photon-plasmon-hybrid graphene-based electro-absorption modulator on silicon. The device shows a sub-1 V steep switching enabled by near-ideal electrostatics delivering a high 0.05 dB V⁻¹ μm⁻¹ performance requiring only 110 aJ/bit. Improving on this demonstration, we discuss a plasmonic slot-based graphene modulator design, where the polarization of the plasmonic mode aligns with graphene’s in-plane dimension; where a push-pull dual-gating scheme enables 2 dB V⁻¹ μm⁻¹ efficient modulation allowing the device to be just 770 nm

  19. Influence of Bit Depth on Subjective Video Quality Assessment for High Resolutions

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2017-01-01

    Full Text Available This paper deals with the influence of bit depth on subjective video quality assessment. To achieve this goal, eight video sequences, each representing a different content prototype, were analysed. Subjective evaluation was performed using the ACR method. The analysed video sequences were encoded at 8-bit and 10-bit bit depth. The two most used compression standards, H.264 and H.265, were evaluated at 1, 3, 5, 10 and 15 Mbps bitrates in Full HD and UHD resolution. Finally, the perceived quality of both compression standards was compared using the subjective tests, with emphasis on bit depth. From the results we can state that the practical application of 10-bit bit depth is not appropriate for Full HD resolution in the bitrate range from 1 to 15 Mbps; for Ultra HD resolution, it is appropriate only for videos encoded with the H.265/HEVC compression standard.

  20. Low dose rate gamma ray induced loss and data error rate of multimode silica fibre links

    International Nuclear Information System (INIS)

    Breuze, G.; Fanet, H.; Serre, J.

    1993-01-01

    Fiber optics data transmission from numerous multiplexed sensors, is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady state gamma ray exposure is studied as a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber

  1. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  2. The Sharpe ratio of estimated efficient portfolios

    OpenAIRE

    Kourtis, Apostolos

    2016-01-01

    Investors often adopt mean-variance efficient portfolios for achieving superior risk-adjusted returns. However, such portfolios are sensitive to estimation errors, which affect portfolio performance. To understand the impact of estimation errors, I develop simple and intuitive formulas of the squared Sharpe ratio that investors should expect from estimated efficient portfolios. The new formulas show that the expected squared Sharpe ratio is a function of the length of the available data, the ...
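
    For reference, the quantity being estimated is the standard Sharpe ratio of a portfolio with weights w (textbook definition, not a formula from the paper):

        % Sharpe ratio: mu is the vector of expected asset returns, Sigma their
        % covariance matrix, and r_f the risk-free rate.
        \mathrm{SR}(w) \;=\; \frac{w^{\top}\mu - r_f}{\sqrt{w^{\top}\Sigma\, w}}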

  3. Patent search and review on roller-bit bearings seals and lubrication systems. [State-of-the-art

    Energy Technology Data Exchange (ETDEWEB)

    Maurer, W.C.

    1975-10-14

    Over 300 patents on bit design were reviewed, and the more important ones were abstracted. These patents were divided into three groups dealing with roller bit bearings, seals, and lubrication systems. Review of these patents helps identify the problems encountered by previous bit designers and establishes the current state-of-the-art of roller bit design. This report can be used as a reference for designing improved bits both for the petroleum and the geothermal industries.

  4. Development and Testing of a Jet Assisted Polycrystalline Diamond Drilling Bit. Phase II Development Efforts

    Energy Technology Data Exchange (ETDEWEB)

    David S. Pixton

    1999-09-20

    Phase II efforts to develop a jet-assisted rotary-percussion drill bit are discussed. Key developments under this contract include: (1) a design for a more robust polycrystalline diamond drag cutter; (2) a new drilling mechanism which improves penetration and life of cutters; and (3) a means of creating a high-pressure mud jet inside of a percussion drill bit. Field tests of the new drill bit and the new robust cutter are forthcoming.

  5. Error performance of digital subscriber lines in the presence of impulse noise

    Science.gov (United States)

    Kerpez, Kenneth J.; Gottlieb, Albert M.

    1995-05-01

    This paper describes the error performance of the ISDN basic access digital subscriber line (DSL), the high bit rate digital subscriber line (HDSL), and the asymmetric digital subscriber line (ADSL) in the presence of impulse noise. Results are found by using data from the 1986 NYNEX impulse noise survey in simulations. It is shown that a simple uncoded ADSL would have an order of magnitude more errored seconds than DSL and HDSL.

  6. Optimal finite-time erasure of a classical bit

    Science.gov (United States)

    Zulkowski, Patrick R.; DeWeese, Michael R.

    2014-05-01

    Information erasure inevitably leads to the generation of heat. Minimizing this dissipation will be crucial for developing small-scale information processing systems, but little is known about the optimal procedures required. We have obtained closed-form expressions for maximally efficient erasure cycles for deletion of a classical bit of information stored by the position of a particle diffusing in a double-well potential. We find that the extra heat generated beyond the Landauer bound is proportional to the square of the Hellinger distance between the initial and final states divided by the cycle duration, which quantifies how far out of equilibrium the system is driven. Finally, we demonstrate close agreement between the exact optimal cycle and the protocol found using a linear response framework.
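
    The scaling stated in the abstract can be written as the proportionality below (the constant is not given there), where p_init and p_final are the initial and final bit distributions and tau is the cycle duration:

        % Extra heat beyond the Landauer bound for finite-time erasure of a classical bit.
        \langle Q \rangle - k_B T \ln 2 \;\propto\; \frac{d_{\mathrm{H}}^{2}\!\left(p_{\mathrm{init}}, p_{\mathrm{final}}\right)}{\tau}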

  7. Bitlis Etnografya Müzesi'nde Bulunan Geleneksel Giysiler

    OpenAIRE

    Sökmen, Sultan

    2016-01-01

    In this study, the aim is to present the fabric, colour, ornamentation, and technical characteristics of the traditional garments exhibited in the Bitlis Ethnography Museum. For this purpose, written permission was obtained from the relevant authorities; the garments displayed in the museum showcases and those kept under protection in the storerooms were photographed, their technical characteristics were determined through the necessary measurements, and their raw material and ornamentation features were examined. The examples of traditional garments held in the museum, while not particularly rich in number...

  8. Bit-Serial Adder Based on Quantum Dots

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarress, Katayoon; Spotnitz, Mathew

    2003-01-01

    A proposed integrated circuit based on quantum-dot cellular automata (QCA) would function as a bit-serial adder. This circuit would serve as a prototype building block for demonstrating the feasibility of quantum-dots computing and for the further development of increasingly complex and increasingly capable quantum-dots computing circuits. QCA-based bit-serial adders would be especially useful in that they would enable the development of highly parallel and systolic processors for implementing fast Fourier, cosine, Hartley, and wavelet transforms. The proposed circuit would complement the QCA-based circuits described in "Implementing Permutation Matrices by Use of Quantum Dots" (NPO-20801), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 42 and "Compact Interconnection Networks Based on Quantum Dots" (NPO-20855), which appears elsewhere in this issue. Those articles described the limitations of very-large-scale-integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes. To enable a meaningful description of the proposed bit-serial adder, it is necessary to further recapitulate the description of a quantum-dot cellular automation from the first-mentioned prior article: A quantum-dot cellular automaton contains four quantum dots positioned at the corners of a square cell. The cell contains two extra mobile electrons that can tunnel (in the

  9. A Fast Dynamic 64-bit Comparator with Small Transistor Count

    Directory of Open Access Journals (Sweden)

    Chua-Chin Wang

    2002-01-01

    Full Text Available In this paper, we propose a 64-bit fast dynamic CMOS comparator with small transistor count. Major features of the proposed comparator are the rearrangement and re-ordering of transistors in the evaluation block of a dynamic cell, and the insertion of a weak n feedback inverter, which helps the pull-down operation to ground. The simulation results given by pre-layout tools, e.g. HSPICE, and post-layout tools, e.g. TimeMill, reveal that the delay is around 2.5 ns while the operating clock rate reaches 100 MHz. A physical chip is fabricated to verify the correctness of our design by using UMC (United Microelectronics Company 0.5 μm (2P2M technology.

  10. Contador asincrónico de cuatro bits

    Directory of Open Access Journals (Sweden)

    Iván Jaramillo J.

    1999-01-01

    Full Text Available This article describes the design and implementation of a four-bit asynchronous counter, carried out on the basis of academic work at the Universidad Nacional de Colombia. First, the structure of the counter is described, together with the characterization of the basic cells on which the project was built, and the results of the electrical simulation are presented. Then, the tests performed on the chip are described, and a comparison is made between these experimental data and those obtained in simulation.

  11. Measurement accuracy, bit-strings, Manthey's quaternions, and RRQM

    International Nuclear Information System (INIS)

    Noyes, H.P.

    1995-01-01

    The author continues the discussion started last year. By now three potentially divergent research programs have surfaced in ANPA: (1) the Bastin-Kilmister understanding of the combinatorial hierarchy (Clive's "Menshevik" position); (2) the author's bit-string "Theory of Everything" (which Clive has dubbed "Bolshevik"); (3) Manthey's cycle hierarchy based on co-occurrence and mutual exclusion that Clive helped him map onto quaternions (as a yet unnamed heresy?). Unless a common objective can be found, these three points of view will continue to diverge. The author suggests the reconstruction of relativistic quantum mechanics (RRQM) as a reasonable, and attainable, goal that might aid convergence rather than divergence

  12. Geneva University - Superconducting flux quantum bits: fabricated quantum objects

    CERN Multimedia

    2007-01-01

    Ecole de physique, Département de physique nucléaire et corpusculaire, 24, Quai Ernest-Ansermet, 1211 GENEVE 4, Tel: (022) 379 62 73, Fax: (022) 379 69 92. Monday 29 January 2007, Physics Section Colloquium, 5 p.m. - Stueckelberg Auditorium. Superconducting flux quantum bits: fabricated quantum objects. Prof. Hans Mooij / Kavli Institute of Nanoscience, Delft University of Technology. The quantum conjugate variables of a superconductor are the charge or number of Cooper pairs, and the phase of the order parameter. In circuits that contain small Josephson junctions, these quantum properties can be brought forward. In Delft we study so-called flux qubits, superconducting rings that contain three small Josephson junctions. When a magnetic flux of half a flux quantum is applied to the loop, there are two states with opposite circulating current. For suitable junction parameters, a quantum superposition of those macroscopic states is possible. Transitions can be driven with resonant microwaves. These quantum ...

  13. ALAT SOLAR TRACKER BERBASIS MIKROKONTROLER 8 BIT ATMega8535

    Directory of Open Access Journals (Sweden)

    I Wayan Sutaya

    2015-07-01

    Full Text Available This research produced a prototype solar tracker. The device moves a solar cell module so that the cell surface is exposed to the maximum amount of sunlight. At present, many solar cells in Indonesia are installed statically, without a solar tracker, so the solar energy is not received optimally. As a result, solar cells installed in several regions of Indonesia do not provide optimal benefits. The solar tracker produced in this research is expected to be a solution to this existing problem. The 8-bit ATMega8535 microcontroller used as the main brain of the solar tracker keeps the device low-cost, and the programming technique in assembly language makes it resistant to system failure. This solar tracker already operates well and is suitable for use with small solar cell modules.

  14. A low power 12-bit ADC for nuclear instrumentation

    International Nuclear Information System (INIS)

    Adachi, R.; Landis, D.; Madden, N.; Silver, E.; LeGros, M.

    1992-10-01

    A low power, successive approximation, analog-to-digital converter (ADC) for low rate, low cost, battery powered applications is described. The ADC is based on a commercial 50 mW successive approximation CMOS device (CS5102). An on-chip self-calibration circuit reduces the inherent differential nonlinearity to 7%. A further reduction of the differential nonlinearity to 0.5% is attained with a four-bit Gatti function. The Gatti function is distributed to minimize battery power consumption. All analog functions reside with the ADC while the noisy digital functions reside in the personal computer based histogramming memory. Fiber optic cables carry all digital information between the ADC and the personal computer based histogramming memory

  15. A one-bit approach for image registration

    Science.gov (United States)

    Nguyen, An Hung; Pickering, Mark; Lambert, Andrew

    2015-02-01

    Motion estimation or optic flow computation for automatic navigation and obstacle avoidance programs running on Unmanned Aerial Vehicles (UAVs) is a challenging task. These challenges come from the requirements of real-time processing speed and small light-weight image processing hardware with very limited resources (especially memory space) embedded on the UAVs. Solutions towards both simplifying computation and saving hardware resources have recently received much interest. This paper presents an approach for image registration using binary images which addresses these two requirements. This approach uses translational information between two corresponding patches of binary images to estimate global motion. These low bit-resolution images require a very small amount of memory space to store them and allow simple logic operations such as XOR and AND to be used instead of more complex computations such as subtractions and multiplications.
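
    A minimal sketch of the approach described above: two patches are reduced to 1-bit images and the global translation is taken as the shift that minimizes the XOR mismatch count. The thresholding rule, search window and test image are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def binarize(img):
    """Reduce a grayscale patch to a 1-bit image."""
    return (img > img.mean()).astype(np.uint8)

def estimate_shift(ref, cur, max_shift=8):
    """Return the (dy, dx) translation minimizing the XOR mismatch count."""
    h, w = ref.shape
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = ref[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = cur[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            cost = np.count_nonzero(np.bitwise_xor(a, b)) / a.size
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -2), axis=(0, 1))        # simulated camera motion
print(estimate_shift(binarize(shifted), binarize(frame)))   # recovers the simulated (3, -2) shift
```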

  16. Bit-depth scalable video coding with new inter-layer prediction

    Directory of Open Access Journals (Sweden)

    Chiang Jui-Chiu

    2011-01-01

    Full Text Available Abstract The rapid advances in the capture and display of high-dynamic range (HDR) image/video content make it imperative to develop efficient compression techniques to deal with the huge amounts of HDR data. Since HDR devices are not yet widespread, compatibility problems should be considered when rendering HDR content on conventional display devices. To this end, in this study, we propose three H.264/AVC-based bit-depth scalable video-coding schemes, called the LH scheme (low bit-depth to high bit-depth), the HL scheme (high bit-depth to low bit-depth), and the combined LH-HL scheme, respectively. The schemes efficiently exploit the high correlation between the high and the low bit-depth layers on the macroblock (MB) level. Experimental results demonstrate that the HL scheme outperforms the other two schemes in some scenarios. Moreover, it achieves up to 7 dB improvement over the simulcast approach when the high and low bit-depth representations are 12 bits and 8 bits, respectively.

  17. The Digital Agenda of Virtual Currencies. Can BitCoin Become a Global Currency?

    OpenAIRE

    KANCS D'ARTIS; CIAIAN PAVEL; MIROSLAVA RAJCANIOVA

    2015-01-01

    This paper identifies and analyzes BitCoin features which may facilitate Bitcoin to become a global currency, as well as characteristics which may impede the use of BitCoin as a medium of exchange, a unit of account and a store of value, and compares BitCoin with standard currencies with respect to the main functions of money. Among all analyzed BitCoin features, the extreme price volatility stands out most clearly compared to standard currencies. In order to understand the reasons for such e...

  18. Field drilling tests on improved geothermal unsealed roller-cone bits. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, R.R.; Jones, A.H.; Winzenried, R.W.; Maish, A.B.

    1980-05-01

    The development and field testing of a 222 mm (8-3/4 inch) unsealed, insert type, medium hard formation, high-temperature bit are described. Increased performance was gained by substituting improved materials in critical bit components. These materials were selected on bases of their high temperature properties, machinability and heat treatment response. Program objectives required that both machining and heat treating could be accomplished with existing rock bit production equipment. Six of the experimental bits were subjected to air drilling at 240/sup 0/C (460/sup 0/F) in Franciscan graywacke at the Geysers (California). Performances compared directly to conventional bits indicate that in-gage drilling time was increased by 70%. All bits at the Geysers are subjected to reaming out-of-gage hole prior to drilling. Under these conditions the experimental bits showed a 30% increase in usable hole drilled, compared with the conventional bits. The materials selected improved roller wear by 200%, friction per wear by 150%, and lug wear by 150%. These tests indicate a potential well cost savings of 4 to 8%. Savings of 12% are considered possible with drilling procedures optimized for the experimental bits.

  19. Designing embedded systems with 32-bit PIC microcontrollers and MikroC

    CERN Document Server

    Ibrahim, Dogan

    2013-01-01

    The new generation of 32-bit PIC microcontrollers can be used to solve the increasingly complex embedded system design challenges faced by engineers today. This book teaches the basics of 32-bit C programming, including an introduction to the PIC 32-bit C compiler. It includes a full description of the architecture of 32-bit PICs and their applications, along with coverage of the relevant development and debugging tools. Through a series of fully realized example projects, Dogan Ibrahim demonstrates how engineers can harness the power of this new technology to optimize their embedded design

  20. Inborn errors of metabolism

    Science.gov (United States)

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman-Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2016:chap 205. Rezvani I, Rezvani GA. An ...

  1. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    Science.gov (United States)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  2. Drug Errors in Anaesthesiology

    Directory of Open Access Journals (Sweden)

    Rajnish Kumar Jain

    2009-01-01

    Full Text Available Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden on health care systems, apart from the losses to patients. Common causes of these errors and their prevention are discussed.

  3. ATC operational error analysis.

    Science.gov (United States)

    1972-01-01

    The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...

  4. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An error control algorithm is employed both for the generation of round keys and for the diffusion of non-linearity among them. Two new functions, for bit inversion and its reversal, are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing nonlinear selective bit inversions on the round keys. Randomized selective bit inversions are performed on equal-length blocks of key bits by a round-constant feedback shift register, within the error-correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized in VHDL for a Spartan-3E FPGA and results are shown. A comparative analysis between 128-bit Advanced Encryption Standard round keys and the proposed round keys demonstrates the security strength of the proposed algorithm. This paper concludes that chip-based multi-layer key distribution using the proposed algorithm is an enhanced solution to the existing threats to cryptography algorithms.
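
    A toy sketch of the bit-inversion idea, assuming an LFSR-style round-constant generator that selects which key bits to invert; applying the same constant again reverses the operation. The key width, tap positions and number of inverted bits are illustrative choices, not the paper's actual design.

```python
def lfsr_positions(seed, n_positions, width=128, taps=(0, 1, 21, 31)):
    """Distinct pseudo-random bit positions from a 32-bit Fibonacci LFSR."""
    state, positions = seed & 0xFFFFFFFF, []
    while len(positions) < n_positions:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state >> 1) | (fb << 31)) & 0xFFFFFFFF
        pos = state % width
        if pos not in positions:
            positions.append(pos)
    return positions

def invert_selected_bits(key, seed, n_positions=16):
    """Selectively invert bits of a 128-bit integer key; applying it twice restores the key."""
    for pos in lfsr_positions(seed, n_positions):
        key ^= 1 << pos
    return key

round_key = 0x0123456789ABCDEF0FEDCBA987654321
masked = invert_selected_bits(round_key, seed=0xACE1)
assert invert_selected_bits(masked, seed=0xACE1) == round_key   # the reversal function recovers the key
print(hex(masked))
```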

  5. Two-Step Single Slope/SAR ADC with Error Correction for CMOS Image Sensor

    Directory of Open Access Journals (Sweden)

    Fang Tang

    2014-01-01

    Full Text Available Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single-slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single-slope ADC generates 3 data bits and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single-slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single-slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply and the chip area efficiency is 84 k μm²·cycles/sample.
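
    The digital recombination can be illustrated with a generic sub-ranging scheme in which the redundancy spans half a coarse LSB, so a bounded first-stage decision error drops out when the two codes are merged. This is a hedged sketch of the general technique, not the chip's actual correction logic; the bit widths follow the record, but the analog behaviour is idealized.

```python
import numpy as np

FS = 1.0                                         # assumed full-scale range
COARSE_BITS, FINE_BITS = 3, 8                    # data bits per stage (one extra redundant bit)
COARSE_LSB = FS / 2 ** COARSE_BITS
TOTAL_CODES = 2 ** (COARSE_BITS + FINE_BITS)     # 11-bit output
LSB = FS / TOTAL_CODES

def convert(vin, coarse_error=0.0):
    """Two-step conversion; `coarse_error` models a bounded first-stage decision error."""
    coarse = int(np.clip((vin + coarse_error) // COARSE_LSB, 0, 2 ** COARSE_BITS - 1))
    # The fine stage spans two coarse LSBs (8 data bits + 1 redundant bit),
    # centred half a coarse LSB below the coarse decision level.
    residue = vin - coarse * COARSE_LSB + COARSE_LSB / 2
    fine = int(np.clip(residue // LSB, 0, 2 ** (FINE_BITS + 1) - 1))
    # Digital recombination: shift-and-add, then remove the deliberate half-LSB offset.
    code = (coarse << FINE_BITS) + fine - (1 << (FINE_BITS - 1))
    return int(np.clip(code, 0, TOTAL_CODES - 1))

vin = 0.37 * FS
print(convert(vin), convert(vin, coarse_error=0.03 * FS))   # same output code despite the coarse-stage error
```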

  6. EnhancedBit: Unleashing the potential of the unchoking policy in the BitTorrent  protocol

    CERN Document Server

    Atlidakis, V; Delis, A

    2014-01-01

    In this paper, we propose a modification to the BitTorrent protocol related to its peer unchoking policy. In particular, we apply a novel optimistic unchoking approach that improves the quality of inter-connections amongst peers, i.e., increases the number of directly-connected and interested-in-cooperation peers without penalizing underutilized and/or idle peers. Our optimistic unchoking policy takes into consideration the number of clients currently interested in downloading from a peer that is to be unchoked. Our conjecture is that peers having few clients interested in downloading data from them should be favored with optimistic unchoke intervals. This enables the peers in question to receive data since they become unchoked faster and in turn, they will trigger the interest of additional clients. In contrast, peers with plenty of "interested" clients should enjoy a lower priority to be selected for planned optimistic unchoking, since these peers likely have enough data to forward; nevertheless, they receiv...

  7. Performance enhancement of MC-CDMA system through novel sensitive bit algorithm aided turbo multi user detection.

    Science.gov (United States)

    Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan

    2015-01-01

    Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. The MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter-symbol interference (ISI) and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of the MC-CDMA system. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the performance of MC-CDMA systems in terms of BER, as solutions to overcome the effects of MAI. In this paper a low-complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic maximum a-posteriori (Log-MAP) turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low complexity decoding, by mitigating the detrimental effects of MAI.

  8. Performance enhancement of MC-CDMA system through novel sensitive bit algorithm aided turbo multi user detection.

    Directory of Open Access Journals (Sweden)

    Rasadurai Kumaravel

    Full Text Available Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. The MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter-symbol interference (ISI) and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of the MC-CDMA system. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the performance of MC-CDMA systems in terms of BER, as solutions to overcome the effects of MAI. In this paper a low-complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic maximum a-posteriori (Log-MAP) turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low complexity decoding, by mitigating the detrimental effects of MAI.

  9. Beamforming under Quantization Errors in Wireless Binaural Hearing Aids

    Directory of Open Access Journals (Sweden)

    Srinivasan Sriram

    2008-01-01

    Full Text Available Improving the intelligibility of speech in different environments is one of the main objectives of hearing aid signal processing algorithms. Hearing aids typically employ beamforming techniques using multiple microphones for this task. In this paper, we discuss a binaural beamforming scheme that uses signals from the hearing aids worn on both the left and right ears. Specifically, we analyze the effect of a low bit rate wireless communication link between the left and right hearing aids on the performance of the beamformer. The scheme is comprised of a generalized sidelobe canceller (GSC) that has two inputs: observations from one ear, and quantized observations from the other ear, and whose output is an estimate of the desired signal. We analyze the performance of this scheme in the presence of a localized interferer as a function of the communication bit rate using the resultant mean-squared error as the signal distortion measure.

  10. Beamforming under Quantization Errors in Wireless Binaural Hearing Aids

    Directory of Open Access Journals (Sweden)

    Kees Janse

    2008-09-01

    Full Text Available Improving the intelligibility of speech in different environments is one of the main objectives of hearing aid signal processing algorithms. Hearing aids typically employ beamforming techniques using multiple microphones for this task. In this paper, we discuss a binaural beamforming scheme that uses signals from the hearing aids worn on both the left and right ears. Specifically, we analyze the effect of a low bit rate wireless communication link between the left and right hearing aids on the performance of the beamformer. The scheme is comprised of a generalized sidelobe canceller (GSC) that has two inputs: observations from one ear, and quantized observations from the other ear, and whose output is an estimate of the desired signal. We analyze the performance of this scheme in the presence of a localized interferer as a function of the communication bit rate using the resultant mean-squared error as the signal distortion measure.

  11. Metastasis of tumor cells is enhanced by downregulation of Bit1.

    Directory of Open Access Journals (Sweden)

    Priya Prakash Karmali

    Full Text Available Resistance to anoikis, which is defined as apoptosis induced by loss of integrin-mediated cell attachment to the extracellular matrix, is a determinant of tumor progression and metastasis. We have previously identified the mitochondrial Bit1 (Bcl-2 inhibitor of transcription) protein as a novel anoikis effector whose apoptotic function is independent from caspases and is uniquely controlled by integrins. In this report, we examined the possibility that Bit1 is suppressed during tumor progression and that Bit1 downregulation may play a role in tumor metastasis. Using a human breast tumor tissue array, we found that Bit1 expression is suppressed in a significant fraction of advanced stages of breast cancer. Targeted disruption of Bit1 via shRNA technology in lowly aggressive MCF7 cells conferred enhanced anoikis resistance, adhesive and migratory potential, which correlated with an increase in active extracellular signal-regulated kinase (Erk) levels and a decrease in Erk-directed phosphatase activity. These pro-metastasis phenotypes were also observed following downregulation of endogenous Bit1 in HeLa and B16F1 cancer cell lines. The enhanced migratory and adhesive potential of Bit1 knockdown cells is in part dependent on their high level of Erk activation since down-regulating Erk in these cells attenuated their enhanced motility and adhesive properties. The Bit1 knockdown pools also showed a statistically highly significant increase in experimental lung metastasis, with no differences in tumor growth relative to control clones in vivo using a BALB/c nude mouse model system. Importantly, the pulmonary metastases of Bit1 knockdown cells exhibited increased phospho-Erk staining. These findings indicate that downregulation of Bit1 conferred cancer cells with enhanced anoikis resistance, adhesive and migratory properties in vitro and specifically potentiated tumor metastasis in vivo. These results underscore the therapeutic importance of restoring Bit1

  12. Soft Error Vulnerability of Iterative Linear Algebra Methods

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; de Supinski, B

    2007-12-15

    Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
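
    The kind of silent corruption discussed above can be reproduced in a few lines: flip one bit in an operand of a Jacobi iteration and the residual the solver monitors still reports convergence, while the answer to the intended problem is off by orders of magnitude more than that residual suggests. The matrix size, flip position and bit index below are arbitrary assumptions made for the demonstration.

```python
import struct
import numpy as np

def flip_bit(value, bit):
    """Flip one bit in the IEEE-754 representation of a float64."""
    (u,) = struct.unpack("<Q", struct.pack("<d", value))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", u ^ (1 << bit)))
    return flipped

rng = np.random.default_rng(2)
n = 200
A = np.diag(np.full(n, 4.0)) + rng.normal(0.0, 0.01, (n, n))    # diagonally dominant system
b = rng.normal(size=n)
x_true = np.linalg.solve(A, b)

A_faulty = A.copy()
A_faulty[17, 93] = flip_bit(A_faulty[17, 93], 51)    # silent corruption of one matrix entry

def jacobi(M, rhs, iters=200):
    d = np.diag(M)
    x = np.zeros_like(rhs)
    for _ in range(iters):
        x = (rhs - (M @ x - d * x)) / d              # Jacobi update
    return x

x = jacobi(A_faulty, b)
residual = np.linalg.norm(b - A_faulty @ x) / np.linalg.norm(b)    # what the solver checks
true_error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)   # what the user actually gets
print(f"residual check: {residual:.1e}   true relative error: {true_error:.1e}")
```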

  13. DNA multi-bit non-volatile memory and bit-shifting operations using addressable electrode arrays and electric field-induced hybridization.

    Science.gov (United States)

    Song, Youngjun; Kim, Sejung; Heller, Michael J; Huang, Xiaohua

    2018-01-18

    DNA has been employed to either store digital information or to perform parallel molecular computing. Relatively unexplored is the ability to combine DNA-based memory and logical operations in a single platform. Here, we show a DNA tri-level cell non-volatile memory system capable of parallel random-access writing of memory and bit shifting operations. A microchip with an array of individually addressable electrodes was employed to enable random access of the memory cells using electric fields. Three segments on a DNA template molecule were used to encode three data bits. Rapid writing of data bits was enabled by electric field-induced hybridization of fluorescently labeled complementary probes and the data bits were read by fluorescence imaging. We demonstrated the rapid parallel writing and reading of 8 (2^3) combinations of 3-bit memory data and bit shifting operations by electric field-induced strand displacement. Our system may find potential applications in DNA-based memory and computations.

  14. Performance evaluations of hybrid modulation with different optical labels over PDQ in high bit-rate OLS network systems.

    Science.gov (United States)

    Xu, M; Li, Y; Kang, T Z; Zhang, T S; Ji, J H; Yang, S W

    2016-11-14

    Two orthogonal-modulation optical label switching (OLS) schemes, in which a polarization-multiplexed differential quadrature phase-shift keying (POLMUX-DQPSK, or PDQ) payload is modulated with identifiers carried by a duobinary (DB) label or a pulse-position modulation (PPM) label, are investigated for high bit-rate OLS networks. The BER performance of hybrid modulation with payload and label signals is discussed and evaluated in theory and simulation. Theoretical BER expressions for PDQ, PDQ-DB and PDQ-PPM are derived with an analysis method for hybrid-modulation encoding at different bit-rate ratios of payload and label. The theoretical derivation shows that the payload of a hybrid modulation has a certain receiver-sensitivity gain over a payload without a label. The size of the payload BER gain obtained from hybrid modulation depends on the type of label. The simulation results are consistent with the theoretical conclusions. The extinction-ratio (ER) conflict between the intensity- and phase-type components of the hybrid encoding can be compromised and optimized in the hybrid-modulation OLS system. The BER analysis method for hybrid-modulation encoding in OLS systems can be applied to other n-ary hybrid modulation or combined modulation systems.

  15. Aircraft system modeling error and control error

    Science.gov (United States)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  16. Study of nonlinear phenomena in a multi-bit bandpass sigma delta modulator

    Energy Technology Data Exchange (ETDEWEB)

    Iu, H.H.C. [School of Electrical, Electronic and Computer Engineering, The University of Western Australia, 35 Stirling Highway, Crawley (Australia)]. E-mail: herbert@ee.uwa.edu.au

    2006-11-15

    Bandpass sigma delta modulators (SDMs) have applications in areas such as digital radio demodulation. It is well known that bandpass SDMs with single bit quantizers can exhibit nonlinear and complex state space dynamics like elliptical fractal patterns. These fractal patterns are usually confined in trapezoidal regions. In this paper, we consider bandpass SDMs with multi-bit quantizers. Their nonlinear dynamics is studied.

  17. Bit and Power Loading Approach for Broadband Multi-Antenna OFDM System

    DEFF Research Database (Denmark)

    Rahman, Muhammad Imadur; Das, Suvra S.; Wang, Yuanye

    2007-01-01

    In this work, we have studied bit and power allocation strategies for multi-antenna assisted Orthogonal Frequency Division Multiplexing (OFDM) systems and investigated the impact of different rates of bit and power allocations on various multi-antenna diversity schemes. It is observed that, if we...... allocations across OFDM sub-channels are required together for efficient exploitation of wireless channel....

  18. Enhanced bit rate-distance product impulse radio ultra-wideband over fiber link

    DEFF Research Database (Denmark)

    Rodes Lopez, Roberto; Jensen, Jesper Bevensee; Caballero Jambrina, Antonio

    2010-01-01

    We report on a record distance and bit-rate wireless impulse radio (IR) ultra-wideband (UWB) link with combined transmission over a 20 km long fiber link. We are able to improve the compliance with the regulated frequency emission mask and achieve bit rate-distance products as high as 16 Gbit/s·m.

  19. Application of time-hopping UWB range-bit rate performance in the UWB sensor networks

    NARCIS (Netherlands)

    Nascimento, J.R.V. do; Nikookar, H.

    2008-01-01

    In this paper, the achievable range-bit rate performance is evaluated for Time-Hopping (TH) UWB networks complying with the FCC outdoor emission limits in the presence of Multiple Access Interference (MAI). Application of TH-UWB range-bit rate performance is presented for UWB sensor networks.

  20. Possibility, impossibility, and cheat sensitivity of quantum-bit string commitment

    NARCIS (Netherlands)

    Buhrman, H.; Christandl, M.; Hayden, P.; Lo, H.-K.; Wehner, S.

    2008-01-01

    Unconditionally secure nonrelativistic bit commitment is known to be impossible in both the classical and the quantum worlds. But when committing to a string of n bits at once, how far can we stretch the quantum limits? In this paper, we introduce a framework for quantum schemes where Alice commits

  1. 0.5 ns resolution, 8-bit time-to-digital converter with flash ADC

    International Nuclear Information System (INIS)

    Sobczynski, C.

    1987-01-01

    A two-channel time-to-digital converter based on an 8-bit flash ADC is presented. The full scale time range of 127.5 ns is digitized to 8 bits providing 500 ps time resolution. The conversion time per channel is less than 800 ns. The design is foreseen to be implemented into Fastbus. (orig.)

  2. On algorithmic equivalence of instruction sequences for computing bit string functions

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2015-01-01

    Every partial function from bit strings of a given length to bit strings of a possibly different given length can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. We

  3. On algorithmic equivalence of instruction sequences for computing bit string functions

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2014-01-01

    Every partial function from bit strings of a given length to bit strings of a possibly different given length can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. We

  4. FlavBit : a GAMBIT module for computing flavour observables and likelihoods

    NARCIS (Netherlands)

    Bernlochner, F.U.; Chrząszcz, M.; Dal, L.A.; Farmer, B.; Jackson, P.; Kvellestad, A.; Mahmoudi, F.; Putze, A.; Rogan, C.; Scott, P.; Serra, N.; Weniger, C.; White, M.

    2017-01-01

    Flavour physics observables are excellent probes of new physics up to very high energy scales. Here we present FlavBit, the dedicated flavour physics module of the global-fitting package GAMBIT. FlavBit includes custom implementations of various likelihood routines for a wide range of flavour

  5. Dispersion Tolerance of 40 Gbaud Multilevel Modulation Formats with up to 3 bits per Symbol

    DEFF Research Database (Denmark)

    Jensen, Jesper Bevensee; Tokle, Torger; Geng, Yan

    2006-01-01

    We present numerical and experimental investigations of dispersion tolerance for multilevel phase- and amplitude modulation with up to 3 bits per symbol at a symbol rate of 40 Gbaud.

  6. Development of a Tool Condition Monitoring System for Impregnated Diamond Bits in Rock Drilling Applications

    Science.gov (United States)

    Perez, Santiago; Karakus, Murat; Pellet, Frederic

    2017-05-01

    The great success and widespread use of impregnated diamond (ID) bits are due to their self-sharpening mechanism, which consists of a constant renewal of diamonds acting at the cutting face as the bit wears out. It is therefore important to keep this mechanism acting throughout the lifespan of the bit. Nonetheless, such a mechanism can be altered by the blunting of the bit that ultimately leads to a less than optimal drilling performance. For this reason, this paper aims at investigating the applicability of artificial intelligence-based techniques in order to monitor tool condition of ID bits, i.e. sharp or blunt, under laboratory conditions. Accordingly, topologically invariant tests are carried out with sharp and blunt bit conditions while recording acoustic emissions (AE) and measuring-while-drilling variables. The combined output of acoustic emission root-mean-square value (AErms), depth of cut (d), torque (tob) and weight-on-bit (wob) is then utilized to create two approaches in order to predict the wear state condition of the bits. One approach is based on the combination of the aforementioned variables and another on the specific energy of drilling. The two different approaches are assessed for classification performance with various pattern recognition algorithms, such as simple trees, support vector machines, k-nearest neighbour, boosted trees and artificial neural networks. In general, acceptable pattern recognition rates were obtained, although the subset composed of AErms and tob excels due to its high classification performance and fewer input variables.

  7. Towards the generation of random bits at terahertz rates based on a chaotic semiconductor laser

    International Nuclear Information System (INIS)

    Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael

    2010-01-01

    Random bit generators (RBGs) are important in many aspects of statistical physics and crucial in Monte-Carlo simulations, stochastic modeling and quantum cryptography. The quality of an RBG is measured by the unpredictability of the bit string it produces and the speed at which the truly random bits can be generated. Deterministic algorithms generate pseudo-random numbers at high data rates as they are only limited by electronic hardware speed, but their unpredictability is limited by the very nature of their deterministic origin. It is widely accepted that the core of any true RBG must be an intrinsically non-deterministic physical process, e.g. measuring thermal noise from a resistor. Owing to low signal levels, such systems are highly susceptible to bias, introduced by amplification, and to small nonrandom external perturbations resulting in a limited generation rate, typically less than 100 Mbit/s. We present a physical random bit generator, based on a chaotic semiconductor laser, having delayed optical feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.
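
    The post-processing step described here (a high-order derivative of the digitized intensity, followed by retention of a few least significant bits) is easy to prototype offline. In the sketch below a chaotic logistic map stands in for the laser intensity, and the ADC width, derivative order and number of retained bits are illustrative assumptions.

```python
import numpy as np

def simulated_intensity(n, x0=0.34567):
    """Chaotic logistic-map sequence standing in for the digitized laser intensity."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = 3.99 * x * (1.0 - x)
        out[i] = x
    return out

def random_bits(intensity, adc_bits=8, diff_order=4, keep_lsbs=3):
    codes = np.round(intensity * (2 ** adc_bits - 1)).astype(np.int64)
    deriv = np.diff(codes, n=diff_order)                 # high-order finite difference
    kept = deriv & ((1 << keep_lsbs) - 1)                # retain only the least significant bits
    return ((kept[:, None] >> np.arange(keep_lsbs)) & 1).ravel()

bits = random_bits(simulated_intensity(100_000))
print(len(bits), "bits, fraction of ones:", round(float(bits.mean()), 4))
```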

  8. Towards the generation of random bits at terahertz rates based on a chaotic semiconductor laser

    Science.gov (United States)

    Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael

    2010-06-01

    Random bit generators (RBGs) are important in many aspects of statistical physics and crucial in Monte-Carlo simulations, stochastic modeling and quantum cryptography. The quality of an RBG is measured by the unpredictability of the bit string it produces and the speed at which the truly random bits can be generated. Deterministic algorithms generate pseudo-random numbers at high data rates as they are only limited by electronic hardware speed, but their unpredictability is limited by the very nature of their deterministic origin. It is widely accepted that the core of any true RBG must be an intrinsically non-deterministic physical process, e.g. measuring thermal noise from a resistor. Owing to low signal levels, such systems are highly susceptible to bias, introduced by amplification, and to small nonrandom external perturbations resulting in a limited generation rate, typically less than 100 Mbit/s. We present a physical random bit generator, based on a chaotic semiconductor laser, having delayed optical feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.

  9. Up to 20 Gbit/s bit-rate transparent integrated interferometric wavelength converter

    DEFF Research Database (Denmark)

    Jørgensen, Carsten; Danielsen, Søren Lykke; Hansen, Peter Bukhave

    1996-01-01

    We present a compact and optimised multiquantum-well based, integrated all-active Michelson interferometer for 20 Gbit/s optical wavelength conversion. Bit-rate transparent operation is demonstrated with a conversion penalty well below 0.5 dB at bit-rates ranging from 622 Mbit/s to 20 Gbit/s.

  10. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations on adaptive optics (AO) systems has received renewed attention. These signals are damped sinusoidal signals and have a deleterious effect on the system. One software solution to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating of the reference signal to reject/minimize the vibration. In the first step a very important problem is the estimation method. A very accurate and fast (below 10 ms) estimation method of these three parameters has been presented in several publications in recent years. The method is based on using spectrum interpolation and MSD time windows and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. There are several parameters that affect the accuracy of the obtained results, e.g. CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The value of the systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
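
    As a simplified stand-in for the estimation step (the MSD windows and specific interpolation formulas of the cited method are not reproduced), the frequency of a damped sinusoid can be estimated from a windowed FFT with interpolation of the spectral peak; the Hann window, signal parameters and parabolic interpolation below are assumptions for illustration.

```python
import numpy as np

def estimate_frequency(x, fs):
    """Windowed FFT peak with parabolic interpolation on the log magnitude."""
    w = np.hanning(len(x))
    spectrum = np.abs(np.fft.rfft(x * w))
    k = int(np.argmax(spectrum[1:-1])) + 1
    a, b, c = np.log(spectrum[k - 1:k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)        # fractional-bin correction
    return (k + delta) * fs / len(x)

fs, f0, gamma = 10_000.0, 123.4, 0.02              # sample rate, frequency, damping ratio
t = np.arange(4096) / fs
vibration = np.exp(-gamma * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)
print(f"estimated {estimate_frequency(vibration, fs):.2f} Hz (true {f0} Hz)")
```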

  11. Simple proof of the impossibility of bit commitment in generalized probabilistic theories using cone programming

    Science.gov (United States)

    Sikora, Jamie; Selby, John

    2018-04-01

    Bit commitment is a fundamental cryptographic task, in which Alice commits a bit to Bob such that she cannot later change the value of the bit, while, simultaneously, the bit is hidden from Bob. It is known that ideal bit commitment is impossible within quantum theory. In this work, we show that it is also impossible in generalized probabilistic theories (under a small set of assumptions) by presenting a quantitative trade-off between Alice's and Bob's cheating probabilities. Our proof relies crucially on a formulation of cheating strategies as cone programs, a natural generalization of semidefinite programs. In fact, using the generality of this technique, we prove that this result holds for the more general task of integer commitment.

  12. All-optical 2-bit header recognition and packet switching using polarization bistable VCSELs.

    Science.gov (United States)

    Hayashi, Daisuke; Nakao, Kazuya; Katayama, Takeo; Kawaguchi, Hitoshi

    2015-04-06

    We propose and evaluate an all-optical 2-bit header recognition and packet switching method using two 1.55-µm polarization bistable vertical-cavity surface-emitting lasers (VCSELs) and three optical switches. Polarization bistable VCSELs acted as flip-flop devices by using AND-gate operations of the header and set pulses, together with the reset pulses. Optical packets including 40-Gb/s non-return-to-zero pseudo-random bit-sequence payloads were successfully sent to one of four ports according to the state of two bits in the headers with a 4-bit 500-Mb/s return-to-zero format. The input pulse powers were 17.2 to 31.8 dB lower than the VCSEL output power. We also examined an extension of this method to multi-bit header recognition and packet switching.

  13. Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling

    Directory of Open Access Journals (Sweden)

    Ertürk Sarp

    2007-01-01

    Full Text Available This paper presents warped discrete cosine transform (WDCT)-based low bit-rate block coding using image downsampling. While the WDCT aims to improve the performance of the conventional DCT by frequency warping, the WDCT has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding, followed by upsampling after the decoding process, was proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that a superior performance can be achieved if the WDCT is used in conjunction with image downsampling-based block coding for low bit-rate applications.
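
    The downsampling-based pipeline is straightforward to prototype. In the hedged sketch below a standard 8x8 DCT stands in for the WDCT (so no warping-filter parameters are signalled), and the quantization step, scale factor and synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import zoom

def block_code(img, q=24, block=8):
    """Toy intra coder: quantize each 8x8 DCT block and reconstruct it."""
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            coeff = dctn(img[i:i + block, j:j + block], norm="ortho")
            out[i:i + block, j:j + block] = idctn(np.round(coeff / q) * q, norm="ortho")
    return out

def low_rate_codec(img, factor=2):
    small = zoom(img, 1 / factor, order=3)                              # downsample before block coding
    return np.clip(zoom(block_code(small), factor, order=3), 0, 255)    # upsample after decoding

x, y = np.meshgrid(np.arange(64), np.arange(64))
image = 128 + 100 * np.sin(x / 9.0) * np.cos(y / 11.0)                  # smooth synthetic test image
psnr = 10 * np.log10(255.0 ** 2 / np.mean((image - low_rate_codec(image)) ** 2))
print(f"toy pipeline PSNR: {psnr:.1f} dB")
```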

  14. Error detection method

    Science.gov (United States)

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
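
    The compare-outputs idea can be sketched as follows, with a generic deterministic NumPy workload standing in for the patent's purpose-built heat-generating algorithm; the kernel, sizes and reference handling are assumptions for illustration only.

```python
import numpy as np

def stress_kernel(size=2_000_000, rounds=200):
    """Deterministic floating-point workload meant to keep the processor busy (and warm)."""
    x = np.linspace(0.0, 1.0, size)
    acc = np.zeros_like(x)
    for _ in range(rounds):
        acc += np.sin(x) * np.cos(x) + np.sqrt(x + 1.0)   # repeatable heavy arithmetic
        x = (x + 0.5) % 1.0
    return float(acc.sum())

reference = stress_kernel()     # ideally captured once on known-good hardware
result = stress_kernel()        # re-run on the unit under test, under thermal stress
print("hardware error detected" if result != reference else "no error observed in this run")
```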

  15. Latency and bit-error-rate evaluation for radio-over-ethernet in optical fiber front-haul networks

    DEFF Research Database (Denmark)

    Sayadi, Mohammadjavad; Rodríguez, Sebastián; Olmos, Juan José Vegas

    2018-01-01

    evaluate this Ethernet packet as a case study for RoE applications. The packet is transmitted through different fiber spans, measuring the BER and latency in each case. The system achieves BER values below the FEC limit and a manageable latency. These results serve as a guideline and proof of concept...

  16. Analytical Approach to Calculation of Probability of Bit Error and Optimum Thresholds in Free-Space Optical Communication

    National Research Council Canada - National Science Library

    Namazi, Nader; Burris, Ray; Gilbreath, G. C

    2005-01-01

    Based on the wavelet transformation and adaptive Wiener Filtering, a new method was presented by the authors to perform the synchronization and detection of the binary data from the Free-Space Optical (FSO) signal [1]. It was shown in [1...

  17. Video Synchronization With Bit-Rate Signals and Correntropy Function

    Directory of Open Access Journals (Sweden)

    Igor Pereira

    2017-09-01

    Full Text Available We propose an approach for the synchronization of video streams using correntropy. Essentially, the time offset is calculated on the basis of the instantaneous transfer rates of the video streams that are extracted in the form of a univariate signal known as variable bit-rate (VBR. The state-of-the-art approach uses a window segmentation strategy that is based on consensual zero-mean normalized cross-correlation (ZNCC. This strategy has an elevated computational complexity, making its application to synchronizing online data streaming difficult. Hence, our proposal uses a different window strategy that, together with the correntropy function, allows the synchronization to be performed for online applications. This provides equivalent synchronization scores with a rapid offset determination as the streams come into the system. The efficiency of our approach has been verified through experiments that demonstrate its viability with values that are as precise as those obtained by ZNCC. The proposed approach scored 81 % in time reference classification against the equivalent 81 % of the state-of-the-art approach, requiring much less computational power.
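
    A minimal sketch of the offset search: slide one variable-bit-rate trace over the other and keep the lag that maximizes the Gaussian-kernel correntropy of the overlapping samples. The kernel-bandwidth heuristic, search range and synthetic traces below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def correntropy(x, y, sigma):
    """Gaussian-kernel correntropy of two equal-length sample vectors."""
    return float(np.mean(np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))))

def estimate_offset(a, b, max_lag=50, sigma=None):
    sigma = sigma or float(np.std(a))        # simple kernel-bandwidth heuristic
    best_lag, best_value = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xa, xb = a[lag:], b[:len(b) - lag]
        else:
            xa, xb = a[:len(a) + lag], b[-lag:]
        n = min(len(xa), len(xb))
        value = correntropy(xa[:n], xb[:n], sigma)
        if value > best_value:
            best_lag, best_value = lag, value
    return best_lag

rng = np.random.default_rng(4)
vbr = rng.gamma(2.0, 500.0, 2000)                      # synthetic bit-rate trace (bits per frame)
delayed = np.roll(vbr, 17) + rng.normal(0.0, 20.0, 2000)
print(estimate_offset(delayed, vbr))                   # expected lag: 17 samples
```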

  18. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  19. A short impossibility proof of quantum bit commitment

    Energy Technology Data Exchange (ETDEWEB)

    Chiribella, Giulio, E-mail: gchiribella@mail.tsinghua.edu.cn [Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University (China); D' Ariano, Giacomo Mauro, E-mail: dariano@unipv.it [QUIT group, Dipartimento di Fisica, via Bassi 6, 27100 Pavia (Italy); INFN Gruppo IV, Sezione di Pavia, via Bassi, 6, 27100 Pavia (Italy); Perinotti, Paolo, E-mail: paolo.perinotti@unipv.it [QUIT group, Dipartimento di Fisica, via Bassi 6, 27100 Pavia (Italy); INFN Gruppo IV, Sezione di Pavia, via Bassi, 6, 27100 Pavia (Italy); Schlingemann, Dirk, E-mail: d.schlingemann@tu-bs.de [ISI Foundation, Quantum Information Theory Unit, Viale S. Severo 65, 10133 Torino (Italy); Werner, Reinhard, E-mail: Reinhard.Werner@itp.uni-hannover.de [Institut für Theoretische Physik, Leibniz Universität Hannover, Appelstrasse 2, 30167 Hannover (Germany)

    2013-06-17

    Bit commitment protocols, whose security is based on the laws of quantum mechanics alone, are generally held to be impossible on the basis of a concealment–bindingness tradeoff (Lo and Chau, 1997 [1], Mayers, 1997 [2]). A strengthened and explicit impossibility proof has been given in D'Ariano et al. (2007) [3] in the Heisenberg picture and in a C*-algebraic framework, considering all conceivable protocols in which both classical and quantum information is exchanged. In the present Letter we provide a new impossibility proof in the Schrödinger picture, greatly simplifying the classification of protocols and strategies using the mathematical formulation in terms of quantum combs (Chiribella et al., 2008 [4]), with each single-party strategy represented by a conditioned comb. We prove that assuming a stronger notion of concealment—for each classical communication history, not on average—allows Alice's cheat to also pass Bob's worst-case test. The present approach allows us to restate the concealment–bindingness tradeoff in terms of the continuity of dilations of probabilistic quantum combs with the metric given by the comb discriminability-distance.

  20. Bit Level Synchronized MAC Protocol for Multireader RFID Networks

    Directory of Open Access Journals (Sweden)

    Namboodiri Vinod

    2010-01-01

    Full Text Available The operation of multiple RFID readers in close proximity results in interference between the readers. This issue is termed the reader collision problem and cannot always be solved by assigning them to different frequency channels due to technical and regulatory limitations. The typical solution is to separate the operation of such readers across time. This sequential operation, however, results in a long delay to identify all tags. We present a bit level synchronized (BLSync) MAC protocol for multi-reader RFID networks that allows multiple readers to operate simultaneously on the same frequency channel. The BLSync protocol solves the reader collision problem by allowing all readers to transmit the same query at the same time. We analyze the performance of using the BLSync protocol and demonstrate benefits of 40%–50% in terms of tag reading delay for most settings. The benefits of BLSync, first demonstrated through analysis, are then validated and quantified through simulations on realistic reader-tag layouts.

  1. Transnational exchange of scientific data: The ``Bits of Power'' report

    Science.gov (United States)

    Berry, R. Stephen

    1998-07-01

    In 1994, the U.S. National Committee for the Committee on Data for Science and Technology (CODATA), organized under the Commission on Physical Sciences, Mathematics and Applications of the National Research Council established the Committee on Issues in the Transborder Flow of Scientific Data. The purpose of this Committee was to examine the current state of global access to scientific data, to identify strengths, problems and challenges confronting scientists now, or likely to arise in the next few years, and to make recommendations on building the strengths and ameliorating or avoiding the problems. The Committee's report appeared as the book Bits of Power: Issues in Global Access to Scientific Data (National Academy Press, Washington, D.C., 1997). This presentation is a brief summary of that report, particularly as it pertains to atomic and molecular data. The context is necessarily the evolution toward increasing electronic acquisition, archiving and distribution of scientific data. Thus the central issues were divided into the technological infrastructure, the issues for the sciences and scientists in the various disciplines, the economic aspects and the legal issues. For purposes of this study, the sciences fell naturally into four groups: the laboratory physical sciences, the biological sciences, the earth sciences and the astronomical and planetary sciences. Some of the substantive scientific aspects are specific to particular groups of sciences, but the matters of infrastructure, economic questions and legal issues apply, for the most part, to all the sciences.

  2. A CAMAC display module for fast bit-mapped graphics

    International Nuclear Information System (INIS)

    Abdel-Aal, R.E.

    1992-01-01

    In many data acquisition and analysis facilities for nuclear physics research, utilities for the display of two-dimensional (2D) images and spectra on graphics terminals suffer from low speed, poor resolution, and limited accuracy. Development of CAMAC bit-mapped graphics modules for this purpose has been discouraged in the past by the large device count needed and the long times required to load the image data from the host computer into the CAMAC hardware, particularly since many such facilities have been designed to support fast DMA block transfers only for data acquisition into the host. This paper describes the design and implementation of a prototype CAMAC graphics display module with a resolution of 256x256 pixels at eight colours for which all components can be easily accommodated in a single-width package. Employed is a hardware technique which reduces the number of programmed CAMAC data transfer operations needed for writing 2D images into the display memory by approximately an order of magnitude, with attendant improvements in the display speed and CPU time consumption. Hardware and software details are given together with sample results. Information on the performance of the module in a typical VAX/MBD data acquisition environment is presented, including data on the mutual effects of simultaneous data acquisition traffic. Suggestions are made for further improvements in performance. (orig.)

  3. Video Synchronization With Bit-Rate Signals and Correntropy Function.

    Science.gov (United States)

    Pereira, Igor; Silveira, Luiz F; Gonçalves, Luiz

    2017-09-04

    We propose an approach for the synchronization of video streams using correntropy. Essentially, the time offset is calculated on the basis of the instantaneous transfer rates of the video streams that are extracted in the form of a univariate signal known as variable bit-rate (VBR). The state-of-the-art approach uses a window segmentation strategy that is based on consensual zero-mean normalized cross-correlation (ZNCC). This strategy has an elevated computational complexity, making its application to synchronizing online data streaming difficult. Hence, our proposal uses a different window strategy that, together with the correntropy function, allows the synchronization to be performed for online applications. This provides equivalent synchronization scores with a rapid offset determination as the streams come into the system. The efficiency of our approach has been verified through experiments that demonstrate its viability with values that are as precise as those obtained by ZNCC. The proposed approach scored 81 % in time reference classification against the equivalent 81 % of the state-of-the-art approach, requiring much less computational power.

  4. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. The mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  5. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signals, which appear as self-clutter in the radar image. When digital techniques are used for generation and processing of the radar signal it is possible to reduce these error signals. In the paper the quadrature devices are analyzed, and two different error compensation methods are considered. The practical...

  6. PRESAGE: Protecting Structured Address Generation against Soft Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    2016-12-28

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
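
    The error-propagation idea can be illustrated at the language level: compute offsets incrementally so that a corrupted offset keeps propagating, then verify a single closed-form invariant at loop exit instead of checking every access. The loop shape and the injected fault below are illustrative assumptions, not PRESAGE's compiler transformation.

```python
ROWS, COLS, STRIDE = 64, 32, 32
data = [0.0] * (ROWS * COLS + 256)      # padded so the corrupted writes stay silent in this demo

offset, fault_at = 0, 1000              # flip a bit of the offset after 1000 accesses
for i in range(ROWS):
    for j in range(COLS):
        data[offset + j] = float(i + j)     # strided access through the running offset
        if i * COLS + j == fault_at:
            offset ^= 1 << 7                # simulated single-bit upset in the offset value
    offset += STRIDE                        # incremental, error-propagating update

# A single detector at loop exit replaces per-access checks: without corruption
# the running offset must equal ROWS * STRIDE.
expected = ROWS * STRIDE
if offset != expected:
    print(f"address corruption detected at loop exit: offset={offset}, expected={expected}")
```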

  7. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

    Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through analysis of the variations of the system error caused by the significant error sources as the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.

  8. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  9. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for the calculation of errors in dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are then given which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimation of the obtained results has been done. The purposefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
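
    For reference, the growth characteristics named above are conventionally defined as follows over an interval (t1, t2), with dry mass W and leaf area A; these are the standard textbook definitions, not necessarily the exact error formulae derived in the paper:

```latex
\begin{align*}
\mathrm{GR}  &= \frac{\mathrm{d}W}{\mathrm{d}t} \approx \frac{W_2 - W_1}{t_2 - t_1}, &
\mathrm{RGR} &= \frac{1}{W}\frac{\mathrm{d}W}{\mathrm{d}t} \approx \frac{\ln W_2 - \ln W_1}{t_2 - t_1},\\[4pt]
\mathrm{ULR} &= \frac{1}{A}\frac{\mathrm{d}W}{\mathrm{d}t} \approx \frac{W_2 - W_1}{t_2 - t_1}\cdot\frac{\ln A_2 - \ln A_1}{A_2 - A_1}, &
\mathrm{LAR} &= \frac{A}{W}.
\end{align*}
```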

  10. A Systematic Approach to Error Free Telemetry

    Science.gov (United States)

    2017-06-28

    interference problem created by utilizing two antennas to transmit the same telemetry signal [8]. This has also been referred to as the "two antenna...selection, commonly called Best Source Selection (BSS). Up until recently there was not a robust method to assess link quality, time-align each source...and then choose the best source on a bit-by-bit basis. The key here is not the time alignment or the bit-by-bit selection, but the accurate

  11. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  12. Medical error and disclosure.

    Science.gov (United States)

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. © 2013 Elsevier B.V. All rights reserved.

  13. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  14. Effect of ambient light and bit depth of digital radiograph on observer performance in determination of endodontic file positioning.

    Science.gov (United States)

    Heo, Min-Suk; Han, Dong-Hun; An, Byung-Mo; Huh, Kyung-Hoe; Yi, Won-Jin; Lee, Sam-Sun; Choi, Soon-Chul

    2008-02-01

    To examine the effects of ambient luminance and the bit depth of digital images on observer performance in determining endodontic file position. Using extracted premolar teeth, a no. 08 K-file was placed into the canal and positioned so that the tip was either flush with or 1 mm short of the radiologic root apex. The samples were imaged with both conventional and digital radiographs at 8 and 12 bits. Eleven observers read the images under dark and bright conditions, and receiver operating characteristic analysis was performed. Additionally, the interpretation time was measured. The 12-bit images showed observer performance similar to conventional images, and better than the 8-bit images. The interpretation time for the bright condition and 8-bit images was longer than for the dark condition and 12-bit images. Twelve-bit digital images were preferred to 8-bit for accurate determination of endodontic file position.

  15. A multi-bit rate interface movement compensated multimode coder for video conferencing

    Science.gov (United States)

    1982-04-01

    This report describes a multi-bit rate video coder for DARPA video conferencing applications. The coder can operate at any preselected transmission bit rate ranging from 1.5 Mb/s down to 64 kb/s. The proposed National Command Authority Teleconferencing System (NCATS) is designed to connect several conferencing sites. The system provides shared audio, video and graphic spaces. The video conferencing system communicates dynamic images of participants to different conferencing sites and is designed to operate under different bandwidth constraints. Under emergency situations the communications bandwidth can be drastically reduced, leaving only 64 kb/s to carry the video conferencing service; under normal conditions a larger channel capacity is available. In order to accommodate these requirements, a video codec that can operate at different transmission bit rates is needed. This allows the picture quality to be upgraded when there is sufficient bandwidth and gracefully reduced under severe bandwidth limitations. The NTSC colour video signal, sampled at 14.3 MHz (4 times the colour subcarrier frequency) and uniformly quantized to 8 bits per picture element, requires a transmission bit rate of 114 Mb/s. Such a high bit rate is economically prohibitive, especially for video conferencing applications. In order to reduce the transmission bit rate, redundant information in the signal has to be removed and the specific video conferencing environment has to be exploited.
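
    A quick check of the quoted uncompressed rate (the subcarrier frequency is the standard NTSC value of about 3.58 MHz):

```latex
f_s = 4 f_{sc} \approx 4 \times 3.58\,\text{MHz} \approx 14.3\,\text{MHz},
\qquad
R = f_s \times 8\,\text{bit} \approx 14.3 \times 10^{6} \times 8 \approx 114\,\text{Mb/s}.
```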

  16. Use of One Time Pad Algorithm for Bit Plane Security Improvement

    Science.gov (United States)

    Suhardi; Suwilo, Saib; Budhiarti Nababan, Erna

    2017-12-01

    BPCS (Bit-Plane Complexity Segmentation) is a steganography technique that exploits the characteristics of human vision, which cannot perceive changes in the binary patterns that occur in an image. The technique inserts a message by replacing high-complexity, noise-like bit-plane regions with the bits of the secret message. Because the message bits are stored in a predictable way, the message can be extracted easily by rearranging the characters previously stored in the noise-like regions of the image, so the secret message becomes easily known to others. In this research, the process of replacing bit-plane regions with message bits is modified by applying the One Time Pad cryptographic technique, which aims to increase the security of the bit-plane. In the tests performed, combining the One Time Pad algorithm with the BPCS steganography technique works well for inserting messages into the cover image, although insertion into low-resolution images performs poorly. The original image and the stego-image look identical, and the result is a good-quality image with a mean PSNR above 30 dB when a large image is used as the cover.
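
    A minimal sketch of the combination described, assuming an 8 x 8 bit-plane block and a complexity threshold of 0.3 (both illustrative, not the paper's exact parameters): the message bits are XOR-ed with a one-time pad and only then written into a noise-like block.

```python
import secrets
import numpy as np

def complexity(block):
    """BPCS border complexity: fraction of adjacent bit changes (rows and columns)."""
    changes = np.count_nonzero(np.diff(block, axis=0)) + np.count_nonzero(np.diff(block, axis=1))
    max_changes = 2 * block.shape[0] * block.shape[1] - block.shape[0] - block.shape[1]
    return changes / max_changes

def embed_with_otp(block, message_bits, alpha=0.3):
    """Replace a noise-like bit-plane block with OTP-encrypted message bits."""
    if complexity(block) < alpha:
        raise ValueError("block is informative, not a noise region")
    pad = [secrets.randbits(1) for _ in message_bits]          # one-time pad (keep secret)
    cipher = [m ^ k for m, k in zip(message_bits, pad)]
    stego = np.array(cipher, dtype=np.uint8).reshape(block.shape)
    return stego, pad

# toy usage with a random (noise-like) 8x8 bit-plane block and 64 message bits
rng = np.random.default_rng(1)
block = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
msg = list(rng.integers(0, 2, size=64))
stego, pad = embed_with_otp(block, msg)
recovered = [c ^ k for c, k in zip(stego.flatten().tolist(), pad)]
print(recovered == msg)   # True: XOR-ing with the same pad recovers the message
```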

  17. A preliminary study on the containment building integrity following BIT removal for nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jong Young; Song, Dong Soo; Byun, Choong Sub [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2008-07-01

    The Boron Injection Tank (BIT) is a component of the Safety Injection System whose sole function is to provide concentrated boric acid to the reactor coolant in order to mitigate the consequences of postulated main steamline break accidents. Although the BIT plays an important role in mitigating the accident, its high boron concentration of 20,000 ppm causes valve leakage and clogging due to precipitation, and continuous heat tracing has to be provided. For the removal of the BIT, a benchmarking analysis is performed between the COPATTA code used in the final safety analysis report and the CONTEMPT code used in this study. CONTEMPT is well compatible with COPATTA. The sensitivity study for containment integrity is performed for three cases of a full double-ended rupture at 102% power with diesel generator failure: a 3.4 m3 BIT at 2400 ppm, a 3.4 m3 BIT at 0 ppm, and no BIT volume. The results show that deactivation of the BIT is feasible.

  18. High bit rate optical transmission using midspan spectral inversion ...

    African Journals Online (AJOL)

    The noise arising from nonlinear phase effects and chromatic dispersion can limit the transmission distance and the bit rate of differential phase-shift-keying modulation formats. In this article, we study the compensation of linear and nonlinear effects by means of mid-span optical phase conjugation (OPC). First, we show the effects of chromatic dispersion in an OD8PSK (optical differential 8-phase-shift keying) system...

  19. The Virtual Solar Observatory at Eight and a Bit!

    Science.gov (United States)

    Davey, Alisdair R.; VSO Team

    2011-05-01

    The Virtual Solar Observatory (VSO) was the first virtual observatory in the solar and heliophysics data space. It first saw the light of day in 2003 with a mission to serve the solar physics community by enabling homogeneous access to heterogeneous data, and hiding the gory details of doing so from the user. The VSO pioneered what was to become the "Small Box" methodology, setting out to provide only the services required to navigate the user to the data and then letting them transfer the data directly from the data providers. After eight and a bit years the VSO now serves data from 72 different instruments covering a multitude of space- and ground-based observatories, including data from SDO. Dealing with the volume of data from SDO has proved to be our most difficult challenge, forcing us from the small box approach to one where the various VSO sites not only serve SDO data, but are central to the distribution of the data within the US and to Europe and other parts of the world. With SDO data serving mostly in place we are now working on integration with the Heliophysics Event Knowledgebase (HEK) and including a number of new solar data sets in the VSO family. We now have a complete VSO search interface in IDL, enabling searching, downloading and processing of solar data without leaving the IDL command line, and will be releasing a brand new web interface providing users and data providers with the ability to create far more detailed and instrument-specific searches. Eight years on, the VSO has plenty of work in front of it.

  20. Influence of the FEC Channel Coding on Error Rates and Picture Quality in DVB Baseband Transmission

    Directory of Open Access Journals (Sweden)

    T. Kratochvil

    2006-09-01

    Full Text Available The paper deals with the component analysis of DTV (Digital Television) and DVB (Digital Video Broadcasting) baseband channel coding. The principles of the FEC (Forward Error Correction) error-protection codes used are briefly outlined, and the simulation model implemented in Matlab is presented. Results for the achieved bit and symbol error rates and the corresponding picture quality evaluation are presented, including the influence of the channel coding on transmitted RGB images and their noise ratios related to MOS (Mean Opinion Score). The conclusion of the paper compares the efficiency of the DVB channel codes.
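
    The simulation below is not the paper's Matlab DVB model (which uses the standardized Reed-Solomon and convolutional codes); it is a minimal Monte-Carlo sketch of the same effect, showing how even a trivial repetition code trades raw channel errors for a much lower post-decoding BER over a binary symmetric channel:

```python
import numpy as np

def ber_repetition(p, n_bits=200_000, repeat=3, seed=0):
    """Post-decoding BER of a 'repeat each bit, majority vote' code over a
    binary symmetric channel with crossover probability p."""
    rng = np.random.default_rng(seed)
    data = rng.integers(0, 2, n_bits)
    coded = np.repeat(data, repeat)
    flips = (rng.random(coded.size) < p).astype(coded.dtype)   # channel bit flips
    received = coded ^ flips
    votes = received.reshape(n_bits, repeat).sum(axis=1)
    decoded = (votes > repeat // 2).astype(int)                # majority decision
    return np.mean(decoded != data)

for p in (0.01, 0.05, 0.1):
    print(f"channel BER {p:.2f} -> decoded BER {ber_repetition(p):.5f}")
```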

  1. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation and a language based on the model for bit-level operation are useful for developing asynchronous and concurrent programs compositionally, which frequently use bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and develop compositionally. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for controlling and operating can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have three terminals and four ordered rules at most, as well as on bidirectional communication using vehicles called carriers. A new notion is that a carrier moving between two terminals can briefly express some kinds of computation such as synchronization and bidirectional communication. The model's properties make it most applicable to bit-level computation compositionally, since the uniform computation elements are enough to develop components that have practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  2. RF-MEMS for future mobile applications: experimental verification of a reconfigurable 8-bit power attenuator up to 110 GHz

    International Nuclear Information System (INIS)

    Iannacci, J; Tschoban, C

    2017-01-01

    RF-MEMS technology is proposed as a key enabling solution for realising the high-performance and highly reconfigurable passive components that future communication standards will demand. In this work, we present, test and discuss a novel design concept for an 8-bit reconfigurable power attenuator, manufactured using the RF-MEMS technology available at the CMM-FBK, in Italy. The device features electrostatically controlled MEMS ohmic switches in order to select/deselect the resistive loads (both in series and shunt configuration) that attenuate the RF signal, and comprises eight cascaded stages (i.e. 8-bit), thus implementing 256 different network configurations. The fabricated samples are measured (S-parameters) from 10 MHz to 110 GHz in a wide range of different configurations, and modelled/simulated with Ansys HFSS. The device exhibits attenuation levels (S21) in the range from  −10 dB to  −60 dB, up to 110 GHz. In particular, S21 shows flatness from 15 dB down to 3–5 dB and from 10 MHz to 50 GHz, as well as fewer linear traces up to 110 GHz. A comprehensive discussion is developed regarding the voltage standing wave ratio, which is employed as a quality indicator for the attenuation levels. The margins of improvement at design level which are needed to overcome the limitations of the presented RF-MEMS device are also discussed. (paper)

  3. Digital PSK to BiO-L demodulator for 2^N x (bit rate) carrier

    Science.gov (United States)

    Shull, T. A.

    1979-01-01

    A phase shift key (PSK) to BiO-L demodulator which uses standard digital integrated circuits is discussed. The demodulator produces NRZ-L, bit clock, and BiO-L outputs from digital PSK input signals for which the carrier is a 2 to the Nth multiple of the bit rate. Various bit and carrier rates which are accommodated by changing various component values within the demodulator are described. The use of the unit for sinusoidal inputs as well as digital inputs is discussed.

  4. Cheat sensitive quantum bit commitment via pre- and post-selected quantum states

    Science.gov (United States)

    Li, Yan-Bing; Wen, Qiao-Yan; Li, Zi-Chen; Qin, Su-Juan; Yang, Ya-Tao

    2014-01-01

    Cheat-sensitive quantum bit commitment is an important and realizable quantum bit commitment (QBC) protocol. By taking advantage of quantum mechanics, it can achieve higher security than classical bit commitment. In this paper, we propose a QBC scheme based on pre- and post-selected quantum states. The analysis indicates that both participants' cheating strategies will be detected with non-zero probability, and the protocol can be implemented with today's technology, as a long-term quantum memory is not needed.

  5. Optimization of rock-bit life based on bearing failure criteria

    International Nuclear Information System (INIS)

    Feav, M.J.; Thorogood, J.L.; Whelehan, O.P.; Williamson, H.S.

    1992-01-01

    This paper reports that recent advances in rock-bit seal technology have allowed greater predictability of bearing life. Cone loss following bearing failure incurs costs related to remedial activities. A risk analysis approach, incorporating bearing-life relationships and the inter-dependence of drilling events, is used to formulate a bit-run cost-optimization method. The procedure enables a choice to be made between elastomeric and metal seals on a lowest-replacement-cost basis. The technique also provides a formal method for assessing the opportunity cost for using a device to detect bit-bearing failures downhole

  6. A 9pJ/bit SOP optical transceiver with 80 Gbps two-way bandwidth

    Science.gov (United States)

    Liu, Fengman; Li, Baoxia; Li, Zhihua; Wan, Lixi; Gao, Wei; Chu, Yanbiao; Du, Tianmin; Song, Jian; Xiang, Haifei; Wang, Haidong; Yang, Kun; Yang, Binbin

    2011-12-01

    The high-speed parallel optical transmitter module based on a VCSEL/PD array, a high-speed specialized integrated circuit and fiber-array micro-optical components presents great application and development potential. The coupling alignment between VCSEL/PD arrays and waveguide arrays has been reported using a silicon optical bench (SiOB) [1-3]. In this paper, a passive coupling method based on a SiOB and the packaging of the VCSEL/PD arrays are introduced; the coupling efficiency is about 80% with a misalignment tolerance of +/-15 μm, and the optical crosstalk is about -70 dB. A silicon optical bench is fabricated as a platform for the integrated photonic components. The thermal and electrical performance of the optical sub-package is analyzed and optimized. A high-density SOP optical transceiver is designed based on this coupling method. Signal integrity analysis and optimization of the high-density differential signal pairs on the PCB is conducted, and S-parameters are extracted to optimize impedance and minimize the effects of discontinuities in the electrical channels. In addition, to suppress simultaneous switching noise (SSN) and optimize the target impedance, a novel embedded capacitor filter is used instead of the conventional power-supply filter; the film capacitor measures 14 μm in thickness, has a dielectric constant of 16 and a capacitance density of 1 nF/cm2. The transceiver is tested in a loop-back configuration: the bit error rate, measured with a 2^31-1 PRBS pattern, is found to be 10^-12 s^-1 at 10 Gbps while dissipating 90 mW/channel.
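
    A quick consistency check of the headline energy-per-bit figure; the lane count used to reach the 80 Gb/s aggregate is an assumption (e.g. eight 10 Gb/s lanes):

```latex
E_{\mathrm{bit}} = \frac{P_{\mathrm{channel}}}{R_{\mathrm{channel}}}
                 = \frac{90\ \mathrm{mW}}{10\ \mathrm{Gb/s}} = 9\ \mathrm{pJ/bit},
\qquad
P_{\mathrm{total}} \approx 8 \times 90\ \mathrm{mW} = 0.72\ \mathrm{W}\ \text{at } 80\ \mathrm{Gb/s}.
```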

  7. A wavelet-based ECG delineation algorithm for 32-bit integer online processing.

    Science.gov (United States)

    Di Marco, Luigi Y; Chiari, Lorenzo

    2011-04-03

    Since the first well-known electrocardiogram (ECG) delineator based on Wavelet Transform (WT) presented by Li et al. in 1995, a significant research effort has been devoted to the exploitation of this promising method. Its ability to reliably delineate the major waveform components (mono- or bi-phasic P wave, QRS, and mono- or bi-phasic T wave) would make it a suitable candidate for efficient online processing of ambulatory ECG signals. Unfortunately, previous implementations of this method adopt non-linear operators such as root mean square (RMS) or floating point algebra, which are computationally demanding. This paper presents a 32-bit integer, linear algebra advanced approach to online QRS detection and P-QRS-T waves delineation of a single lead ECG signal, based on WT. The QRS detector performance was validated on the MIT-BIH Arrhythmia Database (sensitivity Se = 99.77%, positive predictive value P+ = 99.86%, on 109010 annotated beats) and on the European ST-T Database (Se = 99.81%, P+ = 99.56%, on 788050 annotated beats). The ECG delineator was validated on the QT Database, showing a mean error between manual and automatic annotation below 1.5 samples for all fiducial points: P-onset, P-peak, P-offset, QRS-onset, QRS-offset, T-peak, T-offset, and a mean standard deviation comparable to other established methods. The proposed algorithm exhibits reliable QRS detection as well as accurate ECG delineation, in spite of a simple structure built on integer linear algebra.
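
    The two detector figures of merit quoted above are conventionally computed from true positives (TP), false negatives (FN) and false positives (FP). The counts below are not taken from the paper; they are chosen only so that the quoted percentages are reproduced on the 109010 annotated MIT-BIH beats:

```python
def detector_metrics(tp, fn, fp):
    """Sensitivity (Se) and positive predictive value (P+) of a QRS detector."""
    se = tp / (tp + fn)          # fraction of annotated beats that were found
    ppv = tp / (tp + fp)         # fraction of detections that were real beats
    return se, ppv

# illustrative counts consistent with 109010 annotated beats
se, ppv = detector_metrics(tp=108_760, fn=250, fp=155)
print(f"Se = {100 * se:.2f}%  P+ = {100 * ppv:.2f}%")   # Se = 99.77%  P+ = 99.86%
```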

  8. Comparison study of unequal error protection methods for one-dimensional signal constellations

    Science.gov (United States)

    Yu, Christopher C.; Newman, Daniel A.

    1995-01-01

    Unequal error protection is a method to improve bandwidth efficiency where the source data are characterized by having coded bits of varying sensitivity to channel errors. In this report, a framework for comparing the performance of two schemes for unequal error protection is developed. The first scheme (embedded) is based on a clever design of signal constellations with nonuniformly spaced signal points. The second scheme (time-sharing) is based on time-multiplexing concept. It is shown that in the presence of white Gaussian noise, the time-sharing scheme can achieve improved performance over the embedded scheme for one-dimensional signal constellations. This report also describes an application of unequal error protection for MPEG (Motion Pictures Experts Group) video transmission. Corruption in the visual quality of a single frame and the effect of propagatable errors for MPEG coding are both minimized by the use of unequal error protection.

  9. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  10. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies are reported to occur in 2-20% of reports. Fortunately, most of them are minor-degree errors, or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation percentage rises in the emergency setting and in the first stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly with the treatment team.

  11. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  12. An empirical study of the complexity and randomness of prediction error sequences

    Science.gov (United States)

    Ratsaby, Joel

    2011-07-01

    We investigate a population of binary mistake sequences that result from learning with parametric models of different order. We obtain estimates of their error, algorithmic complexity and divergence from a purely random Bernoulli sequence. We study the relationship of these variables to the learner's information density parameter which is defined as the ratio between the lengths of the compressed to uncompressed files that contain the learner's decision rule. The results indicate that good learners have a low information density ρ while bad learners have a high ρ. Bad learners generate mistake sequences that are atypically complex or diverge stochastically from a purely random Bernoulli sequence. Good learners generate typically complex sequences with low divergence from Bernoulli sequences and they include mistake sequences generated by the Bayes optimal predictor. Based on the static algorithmic interference model of [18] the learner here acts as a static structure which "scatters" the bits of an input sequence (to be predicted) in proportion to its information density ρ thereby deforming its randomness characteristics.
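
    A rough sketch of the information-density idea, using zlib as a stand-in compressor and applying the compressed-to-uncompressed ratio to binary sequences. The paper defines the density on the file holding the learner's decision rule; applying the same ratio directly to bit sequences, as done here, is only illustrative:

```python
import zlib
import random

def pack_bits(bits):
    """Pack a 0/1 list into bytes, 8 bits per byte (MSB first)."""
    return bytes(
        sum(b << (7 - j) for j, b in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits) - 7, 8)
    )

def information_density(bits):
    """zlib proxy for the ratio of compressed to uncompressed length."""
    raw = pack_bits(bits)
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)
bernoulli = [random.randint(0, 1) for _ in range(10_000)]
periodic = [i % 2 for i in range(10_000)]
print(information_density(bernoulli))   # near (or slightly above) 1: no structure to exploit
print(information_density(periodic))    # near 0: highly regular sequence
```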

  13. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  14. Disclosure of medical errors.

    Science.gov (United States)

    Matlow, Anne; Stevens, Polly; Harrison, Christine; Laxer, Ronald M

    2006-12-01

    The 1999 release of the Institute of Medicine's document To Err is Human was akin to removing the lid of Pandora's box. Not only were the magnitude and impact of medical errors now apparent to those working in the health care industry, but consumers or health care were alerted to the occurrence of medical events causing harm. One specific solution advocated was the disclosure to patients and their families of adverse events resulting from medical error. Knowledge of the historical perspective, ethical underpinnings, and medico-legal implications gives us a better appreciation of current recommendations for disclosing adverse events resulting from medical error to those affected.

  15. Klasifikasi Bit-Plane Noise untuk Penyisipan Pesan pada Teknik Steganography BPCS Menggunakan Fuzzy Inference Sistem Mamdani

    Directory of Open Access Journals (Sweden)

    Rahmad Hidayat

    2015-04-01

    Full Text Available Bit-Plane Complexity Segmentation (BPCS) is a fairly new steganography technique. The most important process in BPCS is the calculation of the complexity value of a bit-plane. The bit-plane complexity is calculated by looking at the number of bit changes contained in the bit-plane. If a bit-plane has high complexity, it is categorized as a noise bit-plane that does not contain valuable information of the image. Classifying a bit-plane using a crisp set (noise or not) is not fair, since a small difference in value can completely change the status of the bit-plane. The purpose of this study is to apply the principles of fuzzy sets to classify bit-planes into three sets: informative, partly informative, and noise regions. Classifying bit-planes into fuzzy sets is expected to make the classification more objective, and ultimately the message capacity of the images can be improved by using Mamdani fuzzy inference to decide which bit-planes will be replaced with a message, based on the bit-plane classification and the size of the message to be inserted. This research increases the capability of the BPCS steganography technique to insert a message into bit-planes more precisely, so that the quality of the container image is better. It can be seen that the PSNR values of the original image and the stego-image differ only slightly.
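
    A minimal sketch of the fuzzy classification step, assuming illustrative trapezoidal membership functions and breakpoints (the paper's actual membership functions are not given in the abstract); a Mamdani inference stage would then combine these memberships with the message size to decide which bit-planes to replace:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def classify_bitplane(alpha):
    """Fuzzy memberships of a bit-plane with border complexity alpha in [0, 1]."""
    return {
        "informative":        trapezoid(alpha, -1.0, 0.0, 0.20, 0.30),
        "partly informative": trapezoid(alpha, 0.20, 0.30, 0.40, 0.50),
        "noise":              trapezoid(alpha, 0.40, 0.50, 1.00, 2.00),
    }

for alpha in (0.10, 0.33, 0.45, 0.60):
    print(alpha, classify_bitplane(alpha))   # overlapping memberships near the breakpoints
```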

  16. An electrically reprogrammable 1024 bits MNOS ROM using MNOS-SOS e/d technology

    International Nuclear Information System (INIS)

    Mackowiak, E.; Le Goascoz, V.

    1976-01-01

    A 1024 bits fully decoded electrically writable and erasable non volatile ROM is described. Memory cells and peripheral circuits are made using P channel silicon on sapphire enhancement depletion technology [fr

  17. A Methodology and Tool for Investigation of Artifacts Left by the BitTorrent Client

    Directory of Open Access Journals (Sweden)

    Algimantas Venčkauskas

    2016-05-01

    Full Text Available The BitTorrent client application is a popular utility for sharing large files over the Internet. Sometimes, this powerful utility is used to commit cybercrimes, like sharing of illegal material or illegal sharing of legal material. In order to help forensics investigators to fight against these cybercrimes, we carried out an investigation of the artifacts left by the BitTorrent client. We proposed a methodology to locate the artifacts that indicate the BitTorrent client activity performed. Additionally, we designed and implemented a tool that searches for the evidence left by the BitTorrent client application in a local computer running Windows. The tool looks for the four files holding the evidence. The files are as follows: *.torrent, dht.dat, resume.dat, and settings.dat. The tool decodes the files, extracts important information for the forensic investigator and converts it into XML format. The results are combined into a single result file.
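
    The four files named above are typically bencoded, so the core of such a tool is a bencode decoder. The following minimal sketch, with a made-up sample torrent, is not the authors' tool, which additionally maps the decoded fields to XML:

```python
def bdecode(data, i=0):
    """Decode one bencoded value from bytes, returning (value, next_index)."""
    c = data[i:i + 1]
    if c == b"i":                                   # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                                   # list: l<items>e
        i, items = i + 1, []
        while data[i:i + 1] != b"e":
            value, i = bdecode(data, i)
            items.append(value)
        return items, i + 1
    if c == b"d":                                   # dict: d<key value ...>e
        i, d = i + 1, {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            d[key], i = bdecode(data, i)
        return d, i + 1
    colon = data.index(b":", i)                     # string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

# toy usage: announce URL and piece length of a tiny, made-up torrent
sample = b"d8:announce22:http://tracker.example4:infod12:piece lengthi16384eee"
torrent, _ = bdecode(sample)
print(torrent[b"announce"], torrent[b"info"][b"piece length"])
```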

  18. Ultimate DWDM format in fiber-true bit-parallel solitons on WDM beams

    Science.gov (United States)

    Yeh, C.; Bergman, L. A.

    2000-01-01

    Whether true solitons can exist on WDM beams (and in what form) is a question that is generally unknown. This paper will discuss an answer to this question and a demonstration of the bit-parallel WDM transmission.

  19. Quantum bit string commitment protocol using polarization of mesoscopic coherent states

    International Nuclear Information System (INIS)

    Mendonca, Fabio Alencar; Ramos, Rubens Viana

    2008-01-01

    In this work, we propose a quantum bit string commitment protocol using polarization of mesoscopic coherent states. The protocol is described and its security against brute force and quantum cloning machine attack is analyzed

  20. Quantum bit string commitment protocol using polarization of mesoscopic coherent states

    Science.gov (United States)

    Mendonça, Fábio Alencar; Ramos, Rubens Viana

    2008-02-01

    In this work, we propose a quantum bit string commitment protocol using polarization of mesoscopic coherent states. The protocol is described and its security against brute force and quantum cloning machine attack is analyzed.

  1. FastBit: an efficient indexing technology for accelerating data-intensive science

    International Nuclear Information System (INIS)

    Wu Kesheng

    2005-01-01

    FastBit is a software tool for searching large read-only datasets. It organizes user data in a column-oriented structure which is efficient for on-line analytical processing (OLAP), and utilizes compressed bitmap indices to further speed up query processing. Analyses have proven the compressed bitmap index used in FastBit to be theoretically optimal for one-dimensional queries. Compared with other optimal indexing methods, bitmap indices are superior because they can be efficiently combined to answer multi-dimensional queries whereas other optimal methods cannot. In this paper, we first describe the searching capability of FastBit, then briefly highlight two applications that make extensive use of FastBit, namely Grid Collector and DEX

  2. FastBit: An Efficient Indexing Technology For AcceleratingData-Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng

    2005-06-27

    FastBit is a software tool for searching large read-only data sets. It organizes user data in a column-oriented structure which is efficient for on-line analytical processing (OLAP), and utilizes compressed bitmap indices to further speed up query processing. Analyses have proven the compressed bitmap index used in FastBit to be theoretically optimal for one-dimensional queries. Compared with other optimal indexing methods, bitmap indices are superior because they can be efficiently combined to answer multi-dimensional queries whereas other optimal methods cannot. In this paper, we first describe the searching capability of FastBit, then briefly highlight two applications that make extensive use of FastBit, namely Grid Collector and DEX.

  3. FastBit: an efficient indexing technology for accelerating data-intensive science

    Science.gov (United States)

    Wu, Kesheng

    2005-01-01

    FastBit is a software tool for searching large read-only datasets. It organizes user data in a column-oriented structure which is efficient for on-line analytical processing (OLAP), and utilizes compressed bitmap indices to further speed up query processing. Analyses have proven the compressed bitmap index used in FastBit to be theoretically optimal for one-dimensional queries. Compared with other optimal indexing methods, bitmap indices are superior because they can be efficiently combined to answer multi-dimensional queries whereas other optimal methods cannot. In this paper, we first describe the searching capability of FastBit, then briefly highlight two applications that make extensive use of FastBit, namely Grid Collector and DEX.
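
    The plain-Python sketch below is not FastBit's compressed (WAH) bitmap index, but it illustrates why bitmap indices combine so cheaply for multi-dimensional queries: one bitmap per column value, and a query is just a bitwise AND/OR of those bitmaps. Table contents and column names are made up:

```python
from collections import defaultdict

def build_bitmap_index(column):
    """One bitmap (stored as a Python int) per distinct value of the column."""
    index = defaultdict(int)
    for row, value in enumerate(column):
        index[value] |= 1 << row
    return index

# toy table with two columns
energy = [10, 20, 10, 30, 20, 10]
flavor = ["e", "mu", "mu", "e", "e", "mu"]
idx_energy = build_bitmap_index(energy)
idx_flavor = build_bitmap_index(flavor)

# the multi-dimensional query "energy == 10 AND flavor == 'mu'" is one bitwise AND
hits = idx_energy[10] & idx_flavor["mu"]
rows = [r for r in range(len(energy)) if hits >> r & 1]
print(rows)   # [2, 5]
```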

  4. 20GSps 6-bit Low-Power Rad-Tolerant ADC, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project aims to develop a 20GSps 6-bit radiation hardened analog to digital converter (ADC) required for microwave radiometers being developed for space...

  5. 20GSps 6-bit Low-Power Rad-Tolerant ADC, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project aims to develop a 20GSps 6-bit ADC required for microwave radiometers being developed for space and airborne earth sensing applications and...

  6. Improved method of generating bit reversed numbers for calculating fast fourier transform

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.

    Fast Fourier Transform (FFT) is an important tool required for signal processing in defence applications. This paper reports an improved method for generating bit reversed numbers needed in calculating FFT using radix-2. The refined algorithm takes...
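
    The abstract is truncated before the refined algorithm is described; as a baseline for comparison, the straightforward way to generate the bit-reversed index permutation needed to reorder the input of a radix-2 FFT of length 2^n is:

```python
def bit_reversed_indices(n_bits):
    """Return the bit-reversed permutation of 0 .. 2**n_bits - 1."""
    size = 1 << n_bits
    rev = [0] * size
    for i in range(size):
        r, x = 0, i
        for _ in range(n_bits):
            r = (r << 1) | (x & 1)      # shift the lowest bit of i into r
            x >>= 1
        rev[i] = r
    return rev

print(bit_reversed_indices(3))   # [0, 4, 2, 6, 1, 5, 3, 7]
```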

  7. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  8. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  9. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  10. Error Reporting Logic

    National Research Council Canada - National Science Library

    Jaspan, Ciera; Quan, Trisha; Aldrich, Jonathan

    2008-01-01

    ... it. In this paper, we introduce error reporting logic (ERL), an algorithm and tool that produces succinct explanations for why a target system violates a specification expressed in first order predicate logic...

  11. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  12. SOLAR TRACKER CERDAS DAN MURAH BERBASIS MIKROKONTROLER 8 BIT ATMega8535

    Directory of Open Access Journals (Sweden)

    I Wayan Sutaya

    2016-08-01

    Full Text Available This research produced a prototype of a smart solar tracker based on an 8-bit AVR microcontroller. The solar tracker incorporates an IIR (Infinite Impulse Response) digital filter in its firmware. Programming this filter requires 32-bit multiplication, while the processor available on the microcontroller is 8-bit, so the multiplication can only be carried out on the 8-bit microcontroller by using assembly language, a hardware-level language. Using an 8-bit microcontroller as the main brain makes this smart solar tracker a low-cost product. The tests performed show that the smart solar tracker, compared with an ordinary solar tracker, differs very significantly in battery power consumption, achieving a saving of 85%. The size of this saving is of course not a constant figure but depends on how much noise is applied to the solar tracker. For the same treatment, the larger the noise, the larger the difference in power consumption saved by the smart solar tracker. Keywords: solar tracker, digital filter, 8-bit microcontroller, power consumption. Abstract: This research made a prototype of a smart solar tracker product based on an 8-bit AVR microcontroller. The solar tracker uses a digital IIR (Infinite Impulse Response) filter in its software. Programming the filter needs 32-bit multiplication, but the processor inside the microcontroller used in this research is 8-bit, so the multiplication can only be solved on the 8-bit microcontroller by using assembly language in programming, which is a hardware-level language. Using the 8-bit microcontroller as the main brain makes the product low cost. The test results show that the saving in battery power consumption of the smart solar tracker compared with a normal one is 85%. The percentage of the saving is of course not a constant but depends on how much noise is applied to the solar tracker.
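
    The article does not give the filter coefficients; the sketch below, written in Python for consistency with the other examples and using an assumed Q15 coefficient, only illustrates the kind of first-order fixed-point IIR smoothing whose products need 32-bit arithmetic, which is why assembly-coded multi-byte multiplication is required on an 8-bit AVR:

```python
import random

# Illustrative first-order IIR low-pass in Q15 fixed point:
#   y[n] = y[n-1] + alpha * (x[n] - y[n-1]),  alpha = A_Q15 / 32768
A_Q15 = 1638                      # alpha ~= 0.05 in Q15 (assumed value)

def iir_step(y_prev, x):
    acc = A_Q15 * (x - y_prev)    # this product needs a 32-bit accumulator on the MCU
    return y_prev + (acc >> 15)   # scale back to the sample's integer range

# smooth a noisy light-sensor reading (synthetic 10-bit ADC scale data)
random.seed(2)
y = 0
for _ in range(200):
    x = 512 + random.randint(-80, 80)     # noisy reading around mid-scale
    y = iir_step(y, x)
print(y)                                   # settles near 512
```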

  13. HIGH-POWER TURBODRILL AND DRILL BIT FOR DRILLING WITH COILED TUBING

    Energy Technology Data Exchange (ETDEWEB)

    Robert Radtke; David Glowka; Man Mohan Rai; David Conroy; Tim Beaton; Rocky Seale; Joseph Hanna; Smith Neyrfor; Homer Robertson

    2008-03-31

    Commercial introduction of Microhole Technology to the gas and oil drilling industry requires an effective downhole drive mechanism which operates efficiently at relatively high RPM and low bit weight for delivering efficient power to the special high RPM drill bit for ensuring both high penetration rate and long bit life. This project entails developing and testing a more efficient 2-7/8 in. diameter Turbodrill and a novel 4-1/8 in. diameter drill bit for drilling with coiled tubing. The high-power Turbodrill were developed to deliver efficient power, and the more durable drill bit employed high-temperature cutters that can more effectively drill hard and abrasive rock. This project teams Schlumberger Smith Neyrfor and Smith Bits, and NASA AMES Research Center with Technology International, Inc (TII), to deliver a downhole, hydraulically-driven power unit, matched with a custom drill bit designed to drill 4-1/8 in. boreholes with a purpose-built coiled tubing rig. The U.S. Department of Energy National Energy Technology Laboratory has funded Technology International Inc. Houston, Texas to develop a higher power Turbodrill and drill bit for use in drilling with a coiled tubing unit. This project entails developing and testing an effective downhole drive mechanism and a novel drill bit for drilling 'microholes' with coiled tubing. The new higher power Turbodrill is shorter, delivers power more efficiently, operates at relatively high revolutions per minute, and requires low weight on bit. The more durable thermally stable diamond drill bit employs high-temperature TSP (thermally stable) diamond cutters that can more effectively drill hard and abrasive rock. Expectations are that widespread adoption of microhole technology could spawn a wave of 'infill development' drilling of wells spaced between existing wells, which could tap potentially billions of barrels of bypassed oil at shallow depths in mature producing areas. At the same time, microhole

  14. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  15. A multiple-substream unequal error-protection and error-concealment algorithm for SPIHT-coded video bitstreams.

    Science.gov (United States)

    Kim, Joohee; Mersereau, Russell M; Altunbasak, Yucel

    2004-12-01

    This paper presents a coordinated multiple-substream unequal error-protection and error-concealment algorithm for SPIHT-coded bitstreams transmitted over lossy channels. In the proposed scheme, we divide the video sequence corresponding to a group of pictures into two subsequences and independently encode each subsequence using a three-dimensional SPIHT algorithm. We use two different partitioning schemes to generate the substreams, each of which offers some advantages under the appropriate channel condition. Each substream is protected by an FEC-based unequal error-protection algorithm, which assigns unequal forward error correction codes to each bit plane. Any information that is lost during the transmission for any substream is estimated at the receiver by using the correlation between the substreams and the smoothness of the video signal. Simulation results show that the proposed multiple-substream UEP algorithm is simple, fast, and robust in hostile network conditions, and that the proposed error-concealment algorithm can achieve 2-3-dB PSNR gain over the case when error concealment is not used at high packet-loss rates.

  16. Design and Demonstration of a 30 GHz 16-bit Superconductor RSFQ Microprocessor

    Science.gov (United States)

    2015-03-10

    Final Report (Approved for Public Release; Distribution Unlimited): Design and Demonstration of a 30 GHz 16-bit Superconductor RSFQ Microprocessor. Keywords: superconductor technology, RSFQ, RQL, processor design, arithmetic units, high performance. The major objective of the project was to design and demonstrate operation

  17. DOA Parameter Estimation with 1-bit Quantization - Bounds, Methods and the Exponential Replacement

    OpenAIRE

    Stein, Manuel; Barbé, Kurt; Nossek, Josef A.

    2016-01-01

    While 1-bit analog-to-digital conversion (ADC) allows to significantly reduce the analog complexity of wireless receive systems, using the exact likelihood function of the hard-limiting system model in order to obtain efficient algorithms in the digital domain can make 1-bit signal processing challenging. If the signal model before the quantizer consists of correlated Gaussian random variables, the tail probability for a multivariate Gaussian distribution with N dimensions (general orthant pr...

  18. Re-use of Low Bandwidth Equipment for High Bit Rate Transmission Using Signal Slicing Technique

    DEFF Research Database (Denmark)

    Wagner, Christoph; Spolitis, S.; Vegas Olmos, Juan José

    Massive fiber-to-the-home network deployment requires never-ending equipment upgrades operating at higher bandwidth. We show an effective signal slicing method, which can reuse low-bandwidth opto-electronic components for optical communications at higher bit rates.

  19. Support research for development of improved geothermal drill bits. Annual report

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, R.R.; Barker, L.M.; Green, S.J.; Winzenried, R.W.

    1977-06-01

    A full-scale geothermal wellbore simulator and geothermal seal testing machine were constructed. The major emphasis in the Phase II program, in addition to constructing the above research simulators, includes: simulated tests on full-scale components, i.e., full-scale bits; screening tests on elastomeric seals under geothermal conditions; and initial considerations of research needs for development of sealed high-temperature bits. A detailed discussion of the work is presented. (MHR)

  20. Implementasi Kriptografi Algoritma Elgamal Dengan Steganografi Teknik Least Significant Bit (LSB) Berdasarkan Penyisipan Menggunakan Fungsi Linier

    OpenAIRE

    Nasution, Lidya Andiny

    2014-01-01

    The confidentiality of a person's messages or data is important. The confidentiality of a message can be safeguarded using security techniques such as cryptography and steganography. This research used the ElGamal algorithm for cryptography and the Least Significant Bit (LSB) technique for steganography. The ElGamal algorithm uses Fermat's little theorem to check the prime numbers that are used. The Least Significant Bit (LSB) steganography technique uses a linear function to determine the location...

  1. Increasing the bit rate in OCDMA systems using pulse position modulation techniques.

    Science.gov (United States)

    Arbab, Vahid R; Saghari, Poorya; Haghi, Mahta; Ebrahimi, Paniz; Willner, Alan E

    2007-09-17

    We have experimentally demonstrated two novel pulse position modulation techniques, namely Double Pulse Position Modulation (2-PPM) and Differential Pulse Position Modulation (DPPM), in Time-Wavelength OCDMA systems that operate at a higher bit rate compared to traditional OOK-OCDMA systems with the same bandwidth. With the 2-PPM technique, the number of active users is greater than with DPPM, while their bit rates are almost the same. Both techniques provide variable quality of service in OCDMA networks.

  2. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent prescription errors in inpatients' medical prescriptions. Methods: A survey of prescription errors was performed on the inpatients' medical prescriptions, from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, which involved the healthcare team as a whole. Among the 16 types of errors detected in the prescriptions, the most frequent occurrences were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); 45 cases (12.4%) of wrong transcriptions to the information system; 30 cases (8.3%) of duplicate drugs; doses higher than recommended (24 events, 6.6%); and 29 cases (8.0%) of prescriptions with an indication but not specifying allergy. Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for medical prescription analysis before the preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves inpatient safety and the success of the prescribed therapy.

  3. Text Encryption Scheme Realized with a Chaotic Pseudo-Random Bit Generator

    Directory of Open Access Journals (Sweden)

    Ch. K. Volos

    2013-09-01

    Full Text Available In this work a new encryption scheme, realized with a Chaotic Pseudo-Random Bit Generator (CPRBG) based on a Logistic map, is presented. The proposed system is used for encrypting text files in order to create secure databases. The Logistic map is the most studied discrete nonlinear map because it has been used in many scientific fields. Also, the fact that this discrete map has a known algebraic distribution made the Logistic map a good candidate for use in the design of random bit generators. The proposed CPRBG, which is very easily implemented, applies the X-OR function to the bit sequences produced by two Logistic maps with different initial conditions and system parameters, to achieve better results concerning the "randomness" of the produced bit sequence. The detailed results of the statistical testing of the generated bit sequences, performed with the best-known tests of randomness, the FIPS-140-2 suite, confirmed the specific characteristics expected of random bit sequences.
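
    A minimal Python sketch of the XOR-of-two-logistic-maps idea summarized above; the threshold of 0.5 and the particular seeds and parameters are assumptions for illustration, not values taken from the paper.

      # Toy chaotic pseudo-random bit generator: two logistic maps with
      # different initial conditions and parameters, thresholded to bits
      # and combined with X-OR.

      def logistic_bits(x, r, n, threshold=0.5):
          bits = []
          for _ in range(n):
              x = r * x * (1.0 - x)                  # logistic map iteration
              bits.append(1 if x > threshold else 0)
          return bits

      def cprbg(n, x1=0.123, r1=3.99, x2=0.567, r2=3.97):
          b1 = logistic_bits(x1, r1, n)
          b2 = logistic_bits(x2, r2, n)
          return [a ^ b for a, b in zip(b1, b2)]     # X-OR of the two bit streams

      print(''.join(str(b) for b in cprbg(64)))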

  4. A 14-bit 100-MS/s 85.2-dB SFDR pipelined ADC without calibration

    Science.gov (United States)

    Nan, Zhao; Hua, Luo; Qi, Wei; Huazhong, Yang

    2014-07-01

    This paper describes a 14-bit 100-MS/s calibration-free pipelined analog-to-digital converter (ADC). Choices for stage resolution as well as circuit topology are carefully considered to obtain high linearity without any calibration algorithm. An adjusted timing diagram with an additional clock phase is proposed to give the residue voltage more settling time and minimize its distortion. The ADC employs a low-jitter LVDS clock input buffer to ensure good performance at a high sampling rate. Implemented in a 0.18-μm CMOS technology, the ADC prototype achieves a spurious-free dynamic range (SFDR) of 85.2 dB and a signal-to-noise-and-distortion ratio (SNDR) of 63.4 dB with a 19.1-MHz input signal, while consuming 412 mW from a 2.0-V supply and occupying an area of 2.9 × 3.7 mm².
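
    For context, the reported SNDR can be converted into an effective number of bits with the standard ENOB formula; this conversion is general ADC practice and is not part of the paper's text.

      # ENOB from SNDR: ENOB = (SNDR - 1.76 dB) / 6.02 dB per bit.
      def enob(sndr_db):
          return (sndr_db - 1.76) / 6.02

      print(round(enob(63.4), 1))   # ~10.2 effective bits for the reported 63.4 dB SNDR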

  5. Time-dependent characteristic of negative feedback optical amplifier at bit rates 10-Gbit/s based on an optical triode

    Science.gov (United States)

    Harada, Yuki; Azmi, Mohamad Syafiq; Azizan, Siti Aisyah; Matsutani, Takaomi; Maeda, Yoshinobu

    2015-01-01

    We proposed and demonstrated an all-optical triode based on a tandem wavelength converter using cross-gain modulation (XGM) in semiconductor optical amplifiers (SOAs). A negative feedback optical amplification scheme, which has the key advantages of reducing the bit error rate and reshaping the waveform at the output, was employed in this optical triode. The scheme feeds an input signal and a negative feedback signal (a signal with intensity inverse to the input) together into the optical amplifier. Manipulating the intensity of the negative feedback signal allowed the noise suppression effect to be optimized, and the outputs showed improvements in bit error rate (BER) and underwent waveform reshaping, as shown by the eye pattern. In the negative feedback optical amplifier, the negative feedback signal and the input signal were fed into the SOA; however, because of the XGM mechanism, there is a limitation in that both signals could not be fed simultaneously. Therefore, using an optical delay, the negative feedback timing was manipulated, and we investigated the timing characteristics of the negative feedback optical amplifier with BER measurements and eye-pattern waveforms at 10 Gb/s.

  6. Tb/s physical random bit generation with bandwidth-enhanced chaos in three-cascaded semiconductor lasers.

    Science.gov (United States)

    Sakuraba, Ryohsuke; Iwakawa, Kento; Kanno, Kazutaka; Uchida, Atsushi

    2015-01-26

    We experimentally demonstrate fast physical random bit generation from bandwidth-enhanced chaos by using three cascaded semiconductor lasers. The bandwidth-enhanced chaos is obtained with a standard bandwidth of 35.2 GHz, an effective bandwidth of 26.0 GHz and a flatness of 5.6 dB, and its waveform is used for random bit generation. Two schemes, single-bit and multi-bit extraction, are carried out to evaluate the entropy rate and the maximum random bit generation rate. For single-bit extraction, a generation rate of 20 Gb/s is obtained for physical random bit sequences. For multi-bit extraction, a maximum generation rate of 1.2 Tb/s (= 100 GS/s × 6 bits × 2 data streams) is equivalently achieved for physical random bit sequences whose randomness is verified by using both the NIST Special Publication 800-22 suite and TestU01.
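
    The quoted multi-bit figure follows directly from the sampling rate, the bits retained per sample and the two data streams; a small check of that arithmetic, using only the numbers already given above.

      # 1.2 Tb/s = 100 GS/s x 6 bits per sample x 2 data streams.
      sample_rate = 100e9        # samples per second
      bits_per_sample = 6
      data_streams = 2

      rate_bps = sample_rate * bits_per_sample * data_streams
      print(rate_bps / 1e12, "Tb/s")   # -> 1.2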

  7. Human error in aviation operations

    Science.gov (United States)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  8. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) that a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances, such as the combination of 3D motion capture techniques with EEG, will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  9. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  10. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in
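
    The three-bit repetition error-correction code referred to in the two records above corrects any single bit flip by majority vote. A minimal Python sketch of that scheme (generic, not taken from the papers):

      # Three-bit repetition code: encode one bit as three copies,
      # decode by majority vote so any single bit-flip error is corrected.

      def encode(bit):
          return [bit, bit, bit]

      def decode(triplet):
          return 1 if sum(triplet) >= 2 else 0

      codeword = encode(1)
      codeword[1] ^= 1               # inject a single bit-flip error
      assert decode(codeword) == 1   # majority vote recovers the original bit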

  11. Pediatric antidepressant medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Bundy, David G; Shore, Andrew D; Colantuoni, Elizabeth; Morlock, Laura L; Miller, Marlene R

    2010-01-01

    To describe inpatient and outpatient pediatric antidepressant medication errors. We analyzed all error reports from the United States Pharmacopeia MEDMARX database, from 2003 to 2006, involving antidepressant medications and patients younger than 18 years. Of the 451 error reports identified, 95% reached the patient, 6.4% reached the patient and necessitated increased monitoring and/or treatment, and 77% involved medications being used off label. Thirty-three percent of errors cited administering as the macrolevel cause of the error, 30% cited dispensing, 28% cited transcribing, and 7.9% cited prescribing. The most commonly cited medications were sertraline (20%), bupropion (19%), fluoxetine (15%), and trazodone (11%). We found no statistically significant association between medication and reported patient harm; harmful errors involved significantly more administering errors (59% vs 32%, p = .023), errors occurring in inpatient care (93% vs 68%, p = .012) and extra doses of medication (31% vs 10%, p = .025) compared with nonharmful errors. Outpatient errors involved significantly more dispensing errors and errors due to inaccurate or omitted transcription than inpatient errors. Family notification of medication errors was reported in only 12% of errors. Pediatric antidepressant errors often reach patients, frequently involve off-label use of medications, and occur with varying severity and type depending on location and type of medication prescribed. Education and research should be directed toward prompt medication error disclosure and targeted error reduction strategies for specific medication types and settings.

  12. Digitization errors using digital charge division positionsensitive detectors

    International Nuclear Information System (INIS)

    Berliner, R.; Mildner, D.F.R.; Pringle, O.A.

    1981-01-01

    The data acquisition speed and electronic stability of a charge division position-sensitive detector may be improved by using digital signal processing with a table look-up high speed multiply to form the charge division quotient. This digitization process introduces a positional quantization difficulty which reduces the detector position sensitivity. The degree of the digitization error is dependent on the pulse height spectrum of the detector and on the resolution or dynamic range of the system analog-to-digital converters. The effects have been investigated analytically and by computer simulation. The optimum algorithm for position sensing determination using 8-bit digitization and arithmetic has a digitization error of less than 1%. (orig.)
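
    A hedged illustration of the effect described above: the event position is the charge-division quotient Q_A / (Q_A + Q_B), and quantizing both charges with n-bit converters before the table look-up divide limits the attainable position resolution. All numerical choices below are illustrative, not taken from the paper.

      # Illustrative model of digital charge division with n-bit ADCs.

      def quantize(x, bits, full_scale=1.0):
          levels = (1 << bits) - 1
          return round(x / full_scale * levels)      # ADC output code

      def digital_position(qa, qb, bits=8):
          a, b = quantize(qa, bits), quantize(qb, bits)
          return a / (a + b) if (a + b) else 0.0     # table look-up quotient

      # Compare the ideal and digitized position for one event.
      qa, qb = 0.37, 0.63
      ideal = qa / (qa + qb)
      print(ideal, digital_position(qa, qb, bits=8)) # relative error well below 1% here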

  13. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available "Errare humanum est" is a well-known and widespread Latin proverb stating that to err is human and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: "Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations." Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder as: "There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition" (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve while giving them proper feedback.

  14. Studying and comparing spectrum efficiency and error probability in GMSK and DBPSK modulation schemes

    Directory of Open Access Journals (Sweden)

    Juan Mario Torres Nova

    2008-09-01

    Full Text Available Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes which are frequently used in radio communication systems; however, there is interdependence in the use of their benefits (spectral efficiency, low bit error rate, low inter-symbol interference, etc.). Optimising one parameter creates problems for another; for example, the GMSK scheme succeeds in reducing bandwidth by introducing a Gaussian filter into an MSK (minimum shift keying) modulator, in exchange for increasing inter-symbol interference in the system. The DBPSK scheme leads to lower error probability while occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver's bit delay when recovering a signal.
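
    As a point of reference for the error-probability comparison above, the textbook bit error probability of differentially detected DBPSK in an AWGN channel is P_b = (1/2)·exp(-Eb/N0). The short calculation below uses that standard result; the numbers are theory, not results from the article.

      import math

      def ber_dbpsk(ebno_db):
          # Textbook DBPSK bit error probability in AWGN: 0.5 * exp(-Eb/N0)
          ebno = 10 ** (ebno_db / 10.0)
          return 0.5 * math.exp(-ebno)

      for ebno_db in (4, 7, 10):
          print(ebno_db, "dB ->", f"{ber_dbpsk(ebno_db):.2e}")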

  15. Improved Energy Efficiency for Optical Transport Networks by Elastic Forward Error Correction

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Yankov, Metodi Plamenov; Berger, Michael Stübert

    2014-01-01

    In this paper we propose a scheme for reducing the energy consumption of optical links by means of adaptive forward error correction (FEC). The scheme works by performing on-the-fly adjustments to the code rate of the FEC, adding extra parity bits to the data stream whenever extra capacity ... In order to facilitate ... the balance between effective data rate and FEC coding gain without any disruption to the live traffic. As a consequence, these automatic adjustments can be performed very often, based on the current traffic demand and bit error rate performance of the links through the network. The FEC scheme itself is designed to work as a transparent add-on to transceivers running the optical transport network (OTN) protocol, adding an extra layer of elastic soft-decision FEC to the built-in hard-decision FEC implemented in OTN, while retaining interoperability with existing OTN equipment.
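
    A hedged illustration of the trade-off described above: for a given FEC code rate R on a fixed-rate line, the payload and parity shares follow directly. The 112 Gb/s line rate below is an illustrative OTU4-like figure, not a number from the paper.

      # Effective payload rate vs. FEC code rate on a fixed-rate optical line.

      def payload_rate(line_rate_gbps, code_rate):
          return line_rate_gbps * code_rate          # the remainder carries parity

      line = 112.0                                   # Gb/s, illustrative line rate
      for r in (0.93, 0.87, 0.80):
          print(f"R = {r:.2f}: payload {payload_rate(line, r):.1f} Gb/s, "
                f"parity {line * (1 - r):.1f} Gb/s")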

  16. Calculating SPRT Interpolation Error

    Science.gov (United States)

    Filipe, E.; Gentil, S.; Lóio, I.; Bosma, R.; Peruzzi, A.

    2018-02-01

    Interpolation error is a major source of uncertainty in the calibration of standard platinum resistance thermometers (SPRTs) in the subranges of the International Temperature Scale of 1990 (ITS-90). This interpolation error arises because the interpolation equations prescribed by the ITS-90 cannot perfectly accommodate all the SPRTs' natural variations in resistance-temperature behavior, which generates different forms of non-uniqueness. This paper investigates the type 3 non-uniqueness for fourteen SPRTs from five different manufacturers calibrated over the water-zinc subrange and demonstrates the use of the method of divided differences for calculating the interpolation error. The calculated maximum standard deviation of 0.25 mK (near 100 °C) is similar to that observed in previous studies.
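
    The method of divided differences mentioned above builds the Newton form of an interpolating polynomial; comparing that polynomial with reference values gives the interpolation error. Below is a generic Python sketch of the method, not the paper's calibration code.

      # Newton divided differences: build the coefficient table, then
      # evaluate the interpolating polynomial at x.

      def divided_differences(xs, ys):
          coef = list(ys)
          n = len(xs)
          for j in range(1, n):
              for i in range(n - 1, j - 1, -1):
                  coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
          return coef                       # coef[k] = f[x0, ..., xk]

      def newton_eval(xs, coef, x):
          result = coef[-1]
          for k in range(len(coef) - 2, -1, -1):
              result = result * (x - xs[k]) + coef[k]
          return result

      xs = [0.0, 1.0, 2.0, 3.0]
      ys = [1.0, 2.0, 5.0, 10.0]            # samples of x**2 + 1
      coef = divided_differences(xs, ys)
      print(newton_eval(xs, coef, 1.5))     # -> 3.25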

  17. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    Full Text Available A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.

  18. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results in the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  19. LIBERTARIANISM & CATEGORIAL ERROR

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To do so, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of incurring them.

  20. Error management in audit firms: Error climate, type, and originator

    NARCIS (Netherlands)

    Gold, A.H.; Gronewold, U.; Salterio, S.E.

    2014-01-01

    This paper examines how the treatment by superiors of audit staff who discover errors in audit files affects the staff's willingness to report these errors. The way staff are treated by superiors is labelled the audit office error management climate. In a "blame-oriented" climate errors are not