Modeling of Bit Error Rate in Cascaded 2R Regenerators
DEFF Research Database (Denmark)
Öhman, Filip; Mørk, Jesper
2006-01-01
This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments... and the regenerating nonlinearity is investigated. It is shown that an increase in nonlinearity can compensate for an increase in noise figure or decrease in signal power. Furthermore, the influence of the improvement in signal extinction ratio along the cascade and the importance of choosing the proper threshold...
Approximate Minimum Bit Error Rate Equalization for Fading Channels
Directory of Open Access Journals (Sweden)
Levendovszky Janos
2010-01-01
Full Text Available A novel channel equalizer algorithm is introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithm is based on minimizing the bit error rate (BER) using a fast approximation of its gradient with respect to the equalizer coefficients. This approximation is obtained by estimating the exponential summation in the gradient with only some carefully chosen dominant terms. The paper derives an algorithm to calculate these dominant terms in real-time. Summing only these dominant terms provides a highly accurate approximation of the true gradient. Combined with a fast adaptive channel state estimator, the new equalization algorithm yields better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) equalizers. The performance of the new method is tested by simulations performed on standard wireless channels. From the performance analysis one can infer that the new equalizer is capable of efficient channel equalization and of maintaining a relatively low bit error probability in the case of channels corrupted by frequency selectivity. Hence, the new algorithm can contribute to ensuring QoS communication over highly distorted channels.
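The dominant-term idea can be illustrated with a small numerical sketch. Everything below (the two-tap channel, the equalizer taps, and the noise level) is an invented toy setup, not the paper's system model: the BER of a linear equalizer is written as an average of Q-function terms over the ISI patterns, and the sum is approximated by keeping only the terms with the smallest decision margins, which dominate the exponentially decaying sum.

```python
import math
from itertools import product

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Hypothetical two-tap channel and linear equalizer (illustrative values)
h = [1.0, 0.5]          # channel impulse response
w = [1.0, -0.3]         # equalizer taps
sigma = 0.3             # noise std at the equalizer output (assumed)

def margins(h, w):
    """Decision margins of the desired symbol (+1) over all ISI patterns."""
    # Combined channel+equalizer response (convolution)
    c = [sum(h[j] * w[i - j] for j in range(len(h)) if 0 <= i - j < len(w))
         for i in range(len(h) + len(w) - 1)]
    main = max(range(len(c)), key=lambda i: abs(c[i]))
    isi_taps = [i for i in range(len(c)) if i != main]
    out = []
    for bits in product([-1, 1], repeat=len(isi_taps)):
        isi = sum(b * c[i] for b, i in zip(bits, isi_taps))
        out.append(c[main] + isi)   # margin when the desired symbol is +1
    return out

ms = margins(h, w)
ber_full = sum(Q(m / sigma) for m in ms) / len(ms)

# Dominant-term approximation: keep only the smallest margins, whose
# Q-terms dominate the exponentially decaying sum.
dominant = sorted(ms)[:2]
ber_approx = sum(Q(m / sigma) for m in dominant) / len(ms)
print(ber_full, ber_approx)
```

With these values the two smallest-margin terms already account for nearly all of the full sum, which is why a gradient built from a few dominant terms can track the true gradient cheaply.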
Novel relations between the ergodic capacity and the average bit error rate
Yilmaz, Ferkan
2011-11-01
Ergodic capacity and average bit error rate are widely used to compare the performance of different wireless communication systems, and recent research has shown the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between the two indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
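The connection between the two indicators can be sanity-checked numerically for a concrete case. The sketch below (assumed average SNR, BPSK signaling, Rayleigh fading) computes both the ergodic capacity E[log2(1+γ)] and the average BER E[Q(√(2γ))] by integrating against the same SNR distribution, showing that both are functionals of one underlying channel statistic:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def average_over_rayleigh(f, avg_snr, n=20000, gmax=400.0):
    # E[f(gamma)] for exponentially distributed instantaneous SNR gamma
    # (Rayleigh fading), via simple trapezoidal integration.
    dg = gmax / n
    total = 0.0
    for i in range(n + 1):
        g = i * dg
        wgt = 0.5 if i in (0, n) else 1.0
        total += wgt * f(g) * math.exp(-g / avg_snr) / avg_snr
    return total * dg

avg_snr = 10.0   # average SNR (linear), an illustrative value
capacity = average_over_rayleigh(lambda g: math.log2(1.0 + g), avg_snr)
aber = average_over_rayleigh(lambda g: Q(math.sqrt(2.0 * g)), avg_snr)  # BPSK
# Known closed form for BPSK over Rayleigh: 0.5*(1 - sqrt(s/(1+s)))
closed = 0.5 * (1.0 - math.sqrt(avg_snr / (1.0 + avg_snr)))
print(capacity, aber, closed)
```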
A minimum bit error-rate detector for amplify and forward relaying systems
Ahmed, Qasim Zeeshan
2012-05-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10⁻⁵ compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.
Ahmed, Qasim Zeeshan
2014-04-01
The ever growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in designing protocols and detectors for cooperative communications. Among the various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system, an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER) and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of existing linear detectors such as channel inversion, maximal ratio combining, biased maximum likelihood, and minimum mean square error detectors. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.
Threshold based Bit Error Rate Optimization in Four Wave Mixing Optical WDM Systems
Directory of Open Access Journals (Sweden)
Er. Karamjeet Kaur
2016-07-01
Full Text Available Optical communication is communication at a distance that uses light to carry information. The trend toward higher bit rates in lightwave communication has increased interest in dispersion-shifted fibre to reduce dispersion penalties; at the same time, optical amplifiers have increased interest in wavelength multiplexing. This paper describes optical communication systems and discusses different optical multiplexing schemes. The effect of channel power depletion due to the generation of Four Wave Mixing (FWM) waves and the effect of FWM crosstalk on the performance of a WDM receiver are studied. The main focus is to minimize the Bit Error Rate and thereby increase the QoS of the optical WDM system.
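Why equally spaced WDM channels are vulnerable to FWM can be seen by enumerating the mixing products f_i + f_j − f_k. The grid below (four channels at 100 GHz spacing on the ITU grid) is an assumed example, not the system studied in the paper:

```python
# Count four-wave-mixing (FWM) products that land on existing channels of an
# equally spaced WDM grid (channel count and spacing are assumed values).
channels = [193.1 + 0.1 * n for n in range(4)]   # THz, 100 GHz spacing
grid = {round(f, 4) for f in channels}

products = {}
for i, fi in enumerate(channels):
    for j, fj in enumerate(channels):
        for k, fk in enumerate(channels):
            if k == i or k == j:
                continue                     # pump frequencies must differ
            f = round(fi + fj - fk, 4)       # FWM product frequency
            if f in grid:
                products[f] = products.get(f, 0) + 1

print(products)  # number of in-band FWM products hitting each channel
```

With this equal spacing, every in-band product coincides exactly with an existing channel, which is why unequal channel spacing is a common countermeasure: it moves most products off-grid.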
Influence of wave-front aberrations on bit error rate in inter-satellite laser communications
Yang, Yuqiang; Han, Qiqi; Tan, Liying; Ma, Jing; Yu, Siyuan; Yan, Zhibin; Yu, Jianjie; Zhao, Sheng
2011-06-01
We derive the bit error rate (BER) of inter-satellite laser communication (lasercom) links with on-off-keying systems in the presence of both wave-front aberrations and pointing error, without considering detector noise. Wave-front aberrations induced by the receiver terminal have no influence on the BER, while wave-front aberrations induced by the transmitter terminal increase the BER. The BER depends on the area S truncated out by the threshold intensity of the detector (such as an APD) on the intensity function in the receiver plane, and changes with the root mean square (RMS) of the wave-front aberrations. Numerical results show that the BER rises with increasing RMS value. The influences of astigmatism, coma, curvature and spherical aberration on the BER are compared. This work can benefit the design of lasercom systems.
Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels
Directory of Open Access Journals (Sweden)
Li Zexian
2004-01-01
Full Text Available Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Gaussian Q-function, the characteristic function and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral whose integrand is composed of tabulated functions and can easily be computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
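The alternative Q-function expression used in this kind of analysis is commonly Craig's finite-range form, and the "single finite range integral" structure can be reproduced in a few lines. The sketch below (pure Python, assumed parameter values) checks Craig's form against erfc and then averages the BPSK BER over Nakagami-m fading with the same kind of finite-range integral; for m = 1 it matches the known Rayleigh closed form:

```python
import math

def q_erfc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_craig(x, n=200):
    # Craig's form: Q(x) = (1/pi) * Int_0^{pi/2} exp(-x^2 / (2 sin^2 t)) dt.
    # Midpoint rule; the integrand vanishes smoothly at t = 0.
    h = (math.pi / 2.0) / n
    return sum(math.exp(-x * x / (2.0 * math.sin((i + 0.5) * h) ** 2))
               for i in range(n)) * h / math.pi

def aber_nakagami(m, avg_snr, n=400):
    # Average BPSK BER over Nakagami-m fading: moving the fading average
    # inside Craig's integral leaves a single finite-range integral.
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        s2 = math.sin((i + 0.5) * h) ** 2
        total += (m * s2 / (m * s2 + avg_snr)) ** m
    return total * h / math.pi

print(q_erfc(2.0), q_craig(2.0))
print(aber_nakagami(1, 10.0), 0.5 * (1.0 - math.sqrt(10.0 / 11.0)))  # m=1: Rayleigh
```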
Masud, M A; Rahman, M A
2010-01-01
At the beginning of the 21st century there was a dramatic shift in the market dynamics of telecommunication services. This work considers downlink transmission from base station to mobile in a Wideband Code Division Multiple Access (W-CDMA) system using M-ary Quadrature Amplitude Modulation (QAM) and Quadrature Phase Shift Keying (QPSK). We analyze the performance of these modulation techniques when the system is subjected to Additive White Gaussian Noise (AWGN) and multipath Rayleigh fading in the channel. The research was performed using MATLAB 7.6 to simulate and evaluate Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) for the W-CDMA system models. The analysis of QPSK and 16-ary QAM shows that the system could select the modulation technique best suited to the channel quality, thus we can d...
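A minimal version of such a BER-versus-SNR study can be run without any toolbox. The Monte Carlo sketch below (assumed bit counts and SNR points; Gray-coded QPSK treated rail-by-rail as BPSK) compares simulated BER in AWGN against the theoretical Q(√(2·Eb/N0)) curve:

```python
import math, random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qpsk_ber_sim(ebn0_db, nbits=200000, seed=1):
    # Gray-coded QPSK in AWGN: each rail behaves like BPSK, so the simulated
    # BER should track Q(sqrt(2*Eb/N0)). All values are illustrative.
    random.seed(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise std per rail, Eb = 1
    errors = 0
    for _ in range(nbits):
        bit = random.choice((-1.0, 1.0))
        r = bit + random.gauss(0.0, sigma)
        if (r > 0) != (bit > 0):
            errors += 1
    return errors / nbits

for snr_db in (2.0, 4.0, 6.0):
    theory = Q(math.sqrt(2.0 * 10 ** (snr_db / 10.0)))
    print(snr_db, qpsk_ber_sim(snr_db), theory)
```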
Directory of Open Access Journals (Sweden)
Claude D'Amours
2011-01-01
Full Text Available We analytically derive the upper bound on the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10log(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and provides insight into why the different techniques improve upon conventional MIMO-CDMA systems.
Ahmed, Qasim Zeeshan
2013-01-01
In this letter, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses the Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
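A generalized Gaussian kernel of the form exp(−|u|^β) indeed reduces to a Gaussian shape at β = 2 and approaches a boxcar as β grows. The sketch below (assumed bandwidth, sample size, and standard-normal data) builds a kernel density estimate with such a kernel; it illustrates the kernel family only, not the paper's detector:

```python
import math, random

def gg_kernel(u, beta):
    # Generalized Gaussian kernel exp(-|u|^beta), normalized to integrate
    # to one: the normalizing constant is 2*Gamma(1 + 1/beta).
    norm = 2.0 * math.gamma(1.0 + 1.0 / beta)
    return math.exp(-abs(u) ** beta) / norm

def kde(x, samples, h, beta=2.0):
    # Kernel density estimate with bandwidth h; beta = 2 gives a Gaussian-
    # shaped kernel, large beta approaches a uniform (boxcar) kernel.
    return sum(gg_kernel((x - s) / h, beta) for s in samples) / (len(samples) * h)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
est = kde(0.0, data, h=0.3, beta=2.0)
true = 1.0 / math.sqrt(2.0 * math.pi)   # standard normal density at 0
print(est, true)
```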
Cox, Christina B.; Coney, Thom A.
1999-01-01
The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms-Adaptive coding, attenuation, propagation, rain, satellite communication, satellites.
Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.
2016-06-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.
Nazrul Islam, A. K. M.; Majumder, S. P.
2015-06-01
Analysis is carried out to evaluate the conditional bit error rate, conditioned on a given value of pointing error, for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BERs of SISO and SIMO FSO links are analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance is evaluated for several values of the pointing jitter parameters and of the number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10⁻¹⁰. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link
Directory of Open Access Journals (Sweden)
Matteo Berioli
2007-05-01
Full Text Available The paper presents an approach to optimizing the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
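The ModCod-transition idea can be prototyped with a tiny Markov chain. The three-state transition matrix below is invented for illustration (the paper estimates transition probabilities from the system itself); iterating the chain gives the long-run fraction of time spent in each ModCod:

```python
# Toy 3-state ModCod Markov chain (transition probabilities are assumed).
P = [
    [0.90, 0.10, 0.00],   # from ModCod 0
    [0.05, 0.90, 0.05],   # from ModCod 1
    [0.00, 0.10, 0.90],   # from ModCod 2
]

def step(dist, P):
    # One step of the chain: new_j = sum_i dist_i * P[i][j]
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]        # start in ModCod 0
for _ in range(500):          # iterate toward the stationary distribution
    dist = step(dist, P)

print(dist)  # long-run fraction of time spent in each ModCod
```

For this matrix the stationary distribution can also be found by hand from the balance equations, giving (0.25, 0.5, 0.25), which the iteration converges to.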
Directory of Open Access Journals (Sweden)
James Osuru Mark
2011-01-01
Full Text Available The multicarrier code division multiple access (MC-CDMA) system has received considerable attention from researchers owing to its great potential for achieving high data rate transmission in wireless communications. Due to the detrimental effects of multipath fading, the performance of the system degrades. Similarly, non-orthogonality of the spreading codes can cause interference. This paper addresses the performance of a multicarrier code division multiple access system under the influence of a frequency-selective generalized η-µ fading channel and multiple access interference caused by other active users to the desired one. We apply a Gaussian approximation technique to analyse the performance of the system. The average bit error rate is derived and expressed in terms of Gauss hypergeometric functions. The maximal ratio combining diversity technique is utilized to alleviate the deleterious effect of multipath fading. We observe that the system performance improves when the parameter η increases (Format 1) or decreases (Format 2).
Bit error rate analysis of Wi-Fi and bluetooth under the interference of 2.45 GHz RFID
Institute of Scientific and Technical Information of China (English)
(author not listed)
2007-01-01
IEEE 802.11b WLAN (Wi-Fi) and IEEE 802.15.1 WPAN (Bluetooth) are prevalent nowadays, and radio frequency identification (RFID) is an emerging technology with ever wider applications. 802.11b occupies the unlicensed industrial, scientific and medical (ISM) band (2.4-2.4835 GHz) and uses direct sequence spread spectrum (DSSS) to alleviate narrowband interference and fading. Bluetooth is also a user of the ISM band and adopts frequency hopping spread spectrum (FHSS) to avoid mutual interference. RFID can operate on multiple frequency bands, such as 135 kHz, 13.56 MHz and 2.45 GHz. When a 2.45 GHz RFID device, which uses FHSS, is collocated with 802.11b or Bluetooth, mutual interference is inevitable. Although DSSS and FHSS are applied to mitigate the interference, the performance degradation may be very significant. Therefore, in this article, the impact of 2.45 GHz RFID on 802.11b and Bluetooth is investigated. The bit error rates (BER) of 802.11b and Bluetooth are analyzed by establishing a mathematical model, and the simulation results are compared with the theoretical analysis to justify this model.
Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo
2016-01-01
We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing.
Bit error rate analysis of X-ray communication system
Institute of Scientific and Technical Information of China (English)
王律强; 苏桐; 赵宝升; 盛立志; 刘永安; 刘舵
2015-01-01
X-ray communication, first introduced by Keith Gendreau in 2007, has the potential to compete with conventional communication methods, such as microwave and laser communication, in space environments. As a result, a great deal of time and effort has been devoted in recent years to turning the initial idea into reality. Eventually, an X-ray communication demonstration system based on a grid-controlled X-ray source and a microchannel plate detector was able to deliver both audio and video information across a 6-meter vacuum tunnel. The question is how to evaluate this space X-ray demonstration system experimentally. The method is to design a specific board to measure the relationship between bit error rate and emitting power at various communication distances, and to compare the data with calculation and simulation results in order to assess the underlying theoretical model. The concept of using X-rays as signal carriers is confirmed by our first-generation X-ray communication demonstration system. Specifically, the system uses a grid-controlled emission source as the transmitter while implementing a photon-counting detector, which can be regarded as an important orientation for future deep-space X-ray communication applications. As the key specification of any communication system, the bit-error-rate level should be determined first. In addition, a theoretical analysis using a Poisson noise model has been carried out to support this novel communication concept. Previous experimental results indicated that the X-ray audio demonstration system achieves a 10⁻⁴ bit-error-rate level at a 25 kbps communication rate. The system bit error rate based on on-off keying (OOK) modulation is calculated and measured, and corresponds well with the theoretical calculation. Another point that should be taken into consideration is the emitting energy, which is the main restriction of the current X-ray communication system. The designed
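The Poisson-noise analysis mentioned above can be sketched for OOK photon counting. The mean photon counts and the threshold sweep below are assumed values, not measurements from the demonstration system: with a count threshold, the BER is 0.5·P(miss) + 0.5·P(false alarm), and the optimal threshold sits roughly where the two Poisson tails balance.

```python
import math

def poisson_pmf(i, lam):
    # P(N = i) for N ~ Poisson(lam), computed in log space for stability
    return math.exp(-lam + i * math.log(lam) - math.lgamma(i + 1))

def ook_ber(lam_on, lam_off, thresh):
    # OOK photon counting: decide "1" when the photon count >= thresh.
    # With equiprobable bits: BER = 0.5*P(miss) + 0.5*P(false alarm).
    p_miss = sum(poisson_pmf(i, lam_on) for i in range(thresh))
    p_false = 1.0 - sum(poisson_pmf(i, lam_off) for i in range(thresh))
    return 0.5 * (p_miss + p_false)

lam_on, lam_off = 40.0, 4.0      # assumed mean counts in "1" and "0" slots
best = min(range(1, 40), key=lambda t: ook_ber(lam_on, lam_off, t))
print(best, ook_ber(lam_on, lam_off, best))
```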
Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang
2015-07-01
Based on space diversity reception, the binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For independently and identically distributed and independently and non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function are derived for the maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
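The expected ordering of MRC and EGC in average BER can be checked semi-analytically without any special functions. The sketch below (assumed branch count, average SNR, unit-power Rayleigh branches, BPSK) draws branch amplitudes, forms each combiner's output SNR, and averages the conditional BER:

```python
import math, random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_ber(combiner, L=2, avg_snr=10.0, trials=20000, seed=3):
    # Semi-analytic Monte Carlo: draw Rayleigh branch amplitudes, form the
    # combiner output SNR, and average the conditional BPSK BER Q(sqrt(2*snr)).
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        a = [math.sqrt(random.expovariate(1.0)) for _ in range(L)]  # unit-power Rayleigh
        if combiner == "mrc":
            snr = avg_snr * sum(x * x for x in a)
        else:  # egc: coherent sum of amplitudes, noise power grows with L
            snr = avg_snr * sum(a) ** 2 / L
        total += Q(math.sqrt(2.0 * snr))
    return total / trials

ber_mrc = avg_ber("mrc")
ber_egc = avg_ber("egc")
print(ber_mrc, ber_egc)
```

By the Cauchy-Schwarz inequality the EGC output SNR never exceeds the MRC output SNR for the same branch draws, so the MRC average BER is the smaller of the two.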
Directory of Open Access Journals (Sweden)
Sulyman Ahmed Iyanda
2005-01-01
Full Text Available The severity of fading on mobile communication channels calls for combining multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of the compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines the best M branches out of the L available diversity resources (M ≤ L). In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
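The threshold-based rule can be simulated directly. In the sketch below (assumed branch statistics and threshold; BPSK with MRC over the selected branches) only branches whose instantaneous SNR exceeds the threshold are combined, falling back to the best single branch when none qualifies:

```python
import math, random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def threshold_gsc(L=4, avg_snr=5.0, thresh=2.0, trials=20000, seed=7):
    # Threshold-based GSC: combine only branches whose instantaneous SNR
    # exceeds `thresh`; fall back to the best branch if none qualifies.
    random.seed(seed)
    total_ber, total_used = 0.0, 0
    for _ in range(trials):
        snrs = [avg_snr * random.expovariate(1.0) for _ in range(L)]  # Rayleigh
        used = [g for g in snrs if g > thresh] or [max(snrs)]
        total_ber += Q(math.sqrt(2.0 * sum(used)))   # MRC over selected branches
        total_used += len(used)
    return total_ber / trials, total_used / trials

ber, avg_branches = threshold_gsc()
print(ber, avg_branches)  # average BER and average number of combined branches
```

Raising the threshold trades combining complexity (fewer active branches on average) against error performance, which is the compromise the paper quantifies in closed form.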
Directory of Open Access Journals (Sweden)
Ibrahim A.Z. Qatawneh
2005-01-01
Full Text Available Digital communication systems use multitone channel (MC) transmission techniques with differential encoding and differentially coherent demodulation. Today there are two principal MC applications: high-speed digital subscriber loops and the broadcasting of digital audio and video signals. This study compares multicarrier systems with OQPSK and offset 16 QAM for high-bit-rate wireless applications. The bit error rate (BER) performance of MC transmission with offset quadrature amplitude modulation (offset 16 QAM) and offset quadrature phase shift keying (OQPSK), with a guard interval, in a fading environment is compared via Monte Carlo simulation. BER results are presented for offset 16 QAM using a guard interval to mitigate multipath delay for frequency-selective Rayleigh fading channels and for two-path fading channels in the presence of Additive White Gaussian Noise (AWGN). BER results are also presented for MC with differentially encoded offset 16 QAM and MC with differentially encoded OQPSK using a guard interval for a frequency-flat Rician channel in the presence of AWGN. The performance of the multitone systems is also compared with equivalent differentially encoded offset 16 QAM and differentially encoded OQPSK, with and without a guard interval, in the same fading environment.
Tajima, Hideharu; Yamada, Hirohisa; Hayashi, Tetsuya; Yamamoto, Masaki; Harada, Yasuhiro; Mori, Go; Akiyama, Jun; Maeda, Shigemi; Murakami, Yoshiteru; Takahashi, Akira
2008-07-01
Bit error rate (bER) of an energy-gap-induced super-resolution (EG-SR) read-only-memory (ROM) disc with a zinc oxide (ZnO) film was measured in Blu-ray Disc (BD) optics by the partial response maximum likelihood (PRML) detection method. The experimental capacity was 40 GB in a single-layered 120 mm disc, about 1.6 times that of the commercially available BD with 25 GB capacity. A bER near 1×10⁻⁵ was obtained in an EG-SR ROM disc with a tantalum (Ta) reflective film. Practically available characteristics, including readout power margin, readout cyclability, environmental resistance, tilt margins, and focus offset margin, were also confirmed in the EG-SR ROM disc with 40 GB capacity.
Yamada, Hirohisa; Hayashi, Tetsuya; Yamamoto, Masaki; Harada, Yasuhiro; Tajima, Hideharu; Maeda, Shigemi; Murakami, Yoshiteru; Takahashi, Akira
2009-03-01
Practically available readout characteristics were obtained in a dual-layer energy-gap-induced super-resolution (EG-SR) read-only-memory (ROM) disc with an 80 gigabyte (GB) capacity. One of the dual layers consisted of zinc oxide and titanium films and the other of zinc oxide and tantalum films. Bit error rates better than 3.0×10⁻⁴ were obtained with a minimum readout power of approximately 1.6 mW in both layers using a Blu-ray Disc tester with a partial response maximum likelihood (PRML) detection method. The dual-layer disc showed good tolerance to disc tilt and focus offset, and also showed good readout cyclability in both layers.
DEFF Research Database (Denmark)
Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso
2010-01-01
We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature-biased intensity modulation (IM), in terms of bit error rate (BER) and optical signal-to-noise ratio (OSNR). In both links, self-heterodyne receivers perform down-conversion of the radio frequency (RF) subcarrier signal. A theoretical model including noise analysis is constructed to calculate the Q factor and estimate the BER performance. Furthermore, we experimentally validate the predictions of the theoretical modeling. Both the experimental and theoretical results show that the PM link offers superior OSNR receiver sensitivity (by more than 6 dB) over the quadrature-biased IM counterpart.
Directory of Open Access Journals (Sweden)
Rashmi Mongre
2014-09-01
Full Text Available In digital communication system design, the main objective is to receive data as similar as possible to the data sent from the transmitter. It is important to analyze the system in terms of probability of error to assess its performance. Each modulation technique performs differently with signals, which are normally affected by noise. A general explanation of BER is given and simulated in this paper. The focus is a comparative performance analysis of BPSK, QPSK, 8PSK and 16PSK, i.e., M-ary PSK systems with M = 2, 4, 8 and 16. VHSIC Hardware Description Language (VHDL) was used to describe the design. The Xilinx ISE 8.1i tool was used for synthesis of this project. ModelSim PE Student Edition 10.3c was used for functional simulation and logic verification of the waveforms. The BER curves for the different digital modulation techniques obtained after simulation are compared with the theoretical curves. All BER calculations assume an AWGN channel.
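The theoretical M-ary PSK curves that simulated BER is usually compared against follow the Gray-coding approximation Pb ≈ (2/k)·Q(√(2k·Eb/N0)·sin(π/M)) with k = log2 M. A small sketch (the Eb/N0 value is illustrative):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mpsk_ber_approx(M, ebn0_db):
    # High-SNR approximation for Gray-coded M-PSK bit error rate:
    # Pb ~ (2/k) * Q( sqrt(2*k*Eb/N0) * sin(pi/M) ), k = log2(M).
    # For M = 2 the exact BPSK expression Q(sqrt(2*Eb/N0)) is used.
    k = math.log2(M)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    if M == 2:
        return Q(math.sqrt(2.0 * ebn0))
    return (2.0 / k) * Q(math.sqrt(2.0 * k * ebn0) * math.sin(math.pi / M))

for M in (2, 4, 8, 16):
    print(M, mpsk_ber_approx(M, 10.0))
```

At the same Eb/N0, BPSK and QPSK give the same bit error rate, while 8PSK and 16PSK are progressively worse; this is the ordering the simulated curves should reproduce.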
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as the Kolmogorov microscale, the relative strengths of temperature and salinity fluctuations, the rate of dissipation of the mean squared temperature, and the rate of dissipation of turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
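The final step, evaluating BER through the log-normal intensity pdf, can be sketched numerically. The mapping from scintillation index to log-intensity variance (σ² = ln(1 + SI)), the unit-mean normalization, and the BER model Q(√(2·SNR)·I) below are illustrative modeling assumptions, not the paper's exact expressions:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_lognormal(avg_snr, scint_index, n=4000, zmax=6.0):
    # Average BER over log-normal intensity fading (midpoint rule over a
    # standard normal abscissa). E[I] = 1 with the -s2/2 mean shift.
    s2 = math.log(1.0 + scint_index)   # log-intensity variance from SI
    s = math.sqrt(s2)
    h = 2.0 * zmax / n
    total = 0.0
    for i in range(n):
        z = -zmax + (i + 0.5) * h
        I = math.exp(s * z - s2 / 2.0)                      # intensity sample
        w = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
        total += w * Q(math.sqrt(2.0 * avg_snr) * I)
    return total * h

for si in (0.1, 0.3, 0.5):
    print(si, ber_lognormal(10.0, si))   # stronger scintillation -> higher BER
```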
Bit error rate optimization of an acousto-optic tracking system for free-space laser communications
Sofka, J.; Nikulin, V.
2006-02-01
Optical communications systems have been gaining momentum with the increasing demand for transmission bandwidth over the last several years. Optical-cable solutions have become an attractive alternative to copper-based systems in the most bandwidth-demanding applications, owing to their higher bandwidth and longer inter-repeater distances. The promise of similar benefits over radio communications is driving research into free-space laser communications. Along with increased communications bandwidth, a free-space laser communications system offers lower power consumption and the possibility of covert data links, because the energy of the laser is concentrated into a narrow beam. A narrow beam, however, demands much more accurate and agile steering, so that a data link can be maintained between communication platforms in relative motion or in the presence of vibrations. This paper presents a laser-beam tracking system employing an acousto-optic cell capable of deflecting a laser beam at a very high rate (tens of kHz). The tracking system is subjected to vibrations to simulate a realistic deployment, which increases the bit error rate (BER). The performance of the system can be improved significantly through digital control: a constant-gain controller is complemented by a Kalman filter whose parameters are optimized to achieve the lowest possible BER for a given vibration spectrum.
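The digital-control idea above (a constant-gain controller complemented by a Kalman filter) can be illustrated with a minimal scalar Kalman filter smoothing a vibration-corrupted beam-position measurement. The model and all noise parameters below are illustrative assumptions, not the paper's design:

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.04):
    # Minimal scalar Kalman filter: constant-position model x_k = x_{k-1} + w,
    # noisy measurement z_k = x_k + v; q and r are process/measurement variances.
    x, p = 0.0, 1.0
    out = []
    for z in measurements:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with innovation
        p *= (1.0 - k)
        out.append(x)
    return out

random.seed(1)
truth = 0.5  # constant beam offset (illustrative)
zs = [truth + random.gauss(0.0, 0.2) for _ in range(500)]
est = kalman_1d(zs)
# The filtered estimate should land close to the true offset.
assert abs(est[-1] - truth) < 0.2
```

In a tracking loop, the filtered estimate (rather than the raw noisy measurement) would drive the acousto-optic deflection command.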
Nakai, Kenya; Ohmaki, Masayuki; Takeshita, Nobuo; Hyot, Bérangère; André, Bernard; Poupinet, Ludovic
2010-08-01
Bit-error-rate (bER) evaluation using a hardware (H/W) evaluation system is described for super-resolution near-field structure (super-RENS) read-only-memory (ROM) discs fabricated with a semiconductor material, InSb, as the super-resolution active layer. A bER on the order of 10⁻⁵, below the criterion of 3.0×10⁻⁴, is obtained with super-RENS ROM discs carrying random-pattern data with a minimum pit length of 80 nm, using partial-response maximum-likelihood detection of the (1,2,2,1) type. The disc-tilt, focus-offset, and read-power-offset margins, based on the bER of the readout signals, are measured for the super-RENS ROM discs and are almost acceptable for practical use. A significant improvement in read stability, up to 40,000 cycles, achieved by introducing a ZrO2 interface layer, is confirmed using the H/W evaluation system.
Influence of Fibre Channel Pressure on Actual Quantum Bit Error Rate
Institute of Scientific and Technical Information of China (English)
吴佳楠; 魏荣凯; 陈丽; 周成; 朱德新; 宋立军
2015-01-01
An actual peer-to-peer quantum key distribution (QKD) experimental system with polarization encoding was built on the basis of the BB84 protocol under pressure-testing conditions, and a fibre-channel pressure experiment on quantum key distribution was carried out. A theoretical model of the quantum bit error rate was established using the positive operator-valued measurement (POVM) method. The results show that, under the same applied force, the bit error rate increases with the angle of application, as theory predicts; at the same angle, the bit error rate shows a gently oscillating upward trend as the force increases, and once the force exceeds a critical value the bit error rate rises rapidly towards its limiting value, forcing the QKD system to re-establish the connection.
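The angle dependence reported above is consistent with the standard polarization-misalignment picture: a rotation of the polarization by an angle θ flips each sifted BB84 bit with probability sin²θ. A minimal sketch (the simulation setup below is an illustrative assumption, not the authors' POVM model):

```python
import math, random

def simulated_qber(theta_rad, n_bits=100_000, seed=7):
    # A polarization rotation by theta flips a sifted BB84 bit with
    # probability sin^2(theta); count errors over n_bits sifted bits.
    rng = random.Random(seed)
    p_flip = math.sin(theta_rad) ** 2
    errors = sum(1 for _ in range(n_bits) if rng.random() < p_flip)
    return errors / n_bits

# The QBER grows with the rotation angle, matching the reported trend.
assert simulated_qber(math.radians(5)) < simulated_qber(math.radians(20))
```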
Ultra low bit-rate speech coding
Ramasubramanian, V
2015-01-01
"Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit-rates of 1 Kbits/sec and less, particularly at the lower ends of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit-rates to be viable and provide a comprehensive overview of various techniques and systems in literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.
Rate Control for MPEG-4 Bit Stream
Institute of Scientific and Technical Information of China (English)
王振洲; 李桂苓
2003-01-01
For a very long time, video processing dealt exclusively with fixed-rate sequences of rectangular images. Interest has recently been moving toward a more flexible concept in which the subject of the processing and encoding operations is a set of visual elements organized in both time and space in a flexible and arbitrarily complex way. The moving picture experts group (MPEG-4) standard supports this concept, and its verification model (VM) encoder has adopted scalable rate control (SRC) as the rate control scheme, which operates in the spatial domain and is compatible with constant bit rate (CBR) and variable bit rate (VBR) coding. In this paper, a new rate control algorithm based on the DCT domain instead of the pixel domain is presented. Moreover, a macroblock-level rate control scheme that computes the quantization step for each macroblock has been adopted. The experimental results show that the new algorithm achieves much better results than the original one in both peak signal-to-noise ratio (PSNR) and coding bits, and that it is more flexible than the test model 5 (TM5) rate control algorithm.
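Macroblock-level rate control in the MPEG-4 VM is commonly described by a quadratic rate-quantizer model, R = X1·MAD/Q + X2·MAD/Q². A sketch of solving this model for the quantization step; X1, X2 and all numeric values below are illustrative assumptions, not values from the paper:

```python
import math

def quant_step(target_bits, mad, x1=1.0, x2=10.0):
    # Quadratic R-Q model used in MPEG-4 VM rate control:
    #   R = x1*MAD/Q + x2*MAD/Q^2
    # (x1, x2 are model parameters, normally updated from past frames;
    # the values here are illustrative). Solving for Q > 0 means finding
    # the positive root of  R*Q^2 - x1*MAD*Q - x2*MAD = 0.
    a, b, c = -target_bits, x1 * mad, x2 * mad
    return (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Fewer target bits for the same complexity (MAD) demand a coarser quantizer.
assert quant_step(500, mad=20.0) > quant_step(2000, mad=20.0)
```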
Reading boundless error-free bits using a single photon
Guha, Saikat; Shapiro, Jeffrey H.
2013-06-01
We address the problem of how efficiently information can be encoded into and read out reliably from a passive reflective surface that encodes classical data by modulating the amplitude and phase of incident light. We show that nature imposes no fundamental upper limit to the number of bits that can be read per expended probe photon and demonstrate the quantum-information-theoretic trade-offs between the photon efficiency (bits per photon) and the encoding efficiency (bits per pixel) of optical reading. We show that with a coherent-state (ideal laser) source, an on-off (amplitude-modulation) pixel encoding, and shot-noise-limited direct detection (an overly optimistic model for commercial CD and DVD drives), the highest photon efficiency achievable in principle is about 0.5 bits read per transmitted photon. We then show that a coherent-state probe can read unlimited bits per photon when the receiver is allowed to make joint (inseparable) measurements on the reflected light from a large block of phase-modulated memory pixels. Finally, we show an example of a spatially entangled nonclassical light probe and a receiver design—constructible using a single-photon source, beam splitters, and single-photon detectors—that can in principle read any number of error-free bits of information. The probe is a single photon prepared in a uniform coherent superposition of multiple orthogonal spatial modes, i.e., a W state. The code and joint-detection receiver complexity required by a coherent-state transmitter to achieve comparable photon efficiency performance is shown to be much higher in comparison to that required by the W-state transceiver, although this advantage rapidly disappears with increasing loss in the system.
Gurkin, N. V.; Konyshev, V. A.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.
2015-01-01
We have studied, experimentally and using numerical simulations and a phenomenological analytical model, the dependence of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation-division multiplexing and four-level phase modulation (100 Gbit/s DP-QPSK format). Comparing the experimental data, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin of the line is maximal. We have derived and analysed the dependence of the BER on the optical signal power at the fibre input, and the admissible input-power range for implementing communication lines with lengths from 30-50 km up to a maximum of 250 km.
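The existence of a BER-optimal launch power is commonly explained by a Gaussian-noise picture in which linear ASE noise adds to nonlinear interference growing as the cube of the power. A sketch with illustrative coefficients (`p_ase` and `eta` below are assumptions, not measured values from this line):

```python
def snr(p_mw, p_ase=0.02, eta=0.05):
    # Gaussian-noise picture: linear ASE noise plus nonlinear interference
    # growing as P^3 gives  SNR(P) = P / (P_ase + eta * P^3),
    # which peaks at P_opt = (P_ase / (2*eta))**(1/3).
    return p_mw / (p_ase + eta * p_mw ** 3)

p_opt = (0.02 / (2 * 0.05)) ** (1.0 / 3.0)  # analytic optimum
# The SNR (and hence the BER) is best at p_opt and worse on either side.
assert snr(p_opt) > snr(0.5 * p_opt)
assert snr(p_opt) > snr(2.0 * p_opt)
```

Below `p_opt` the line is ASE-limited; above it, nonlinear interference dominates, which is why both a minimum-BER power and a maximum-margin power exist.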
Institute of Scientific and Technical Information of China (English)
张宇; 杨益新; 田丰
2014-01-01
Limited by power consumption, size, and hardware complexity, sonar buoys usually transmit the data they receive over a wireless channel to terminal equipment for processing. Multipath propagation, fading, and the Doppler effect in the wireless channel introduce bit errors that ultimately degrade system performance. For a sonar-buoy system operating in a complex transmission-channel environment, this paper studies the influence of the bit error rate (BER) on multi-target direction-of-arrival (DOA) estimation performance, and the allowable BER threshold is obtained via Monte Carlo simulation to guide the design of complete sonar-buoy systems.
Research and implementation of the burst-mode optical signal bit-error test
Huang, Qiu-yuan; Ma, Chao; Shi, Wei; Chen, Wei
2009-08-01
Based on the characteristics of the TDMA uplink optical signal in a PON system, this article puts forward an FPGA-based method for high-speed burst-mode optical bit-error-rate testing. It proposes a new method of generating burst signal patterns, including user-defined and pseudo-random patterns; realizes slip synchronization and self-synchronizing error detection using a data-decomposition technique together with conventional code-synchronization technology; achieves clock synchronization for the high-speed burst signal using fast phase-locked-loop delay synchronization in the external circuit; and completes the bit-error-rate test of the high-speed burst optical signal.
Continuous operation of high bit rate quantum key distribution
Dixon, A R; Yuan, Z. L.; Dynes, J. F.; Sharpe, A. W.; Shields, A. J.
2010-01-01
We demonstrate quantum key distribution with a secure bit rate exceeding 1 Mbit/s over 50 km of fibre, averaged over a continuous 36-hour period. Continuous operation at high bit rates is achieved using feedback systems to control the path-length difference and polarization in the interferometer and the timing of the detection windows. High bit rates and continuous operation allow finite-key-size effects to be strongly reduced, achieving a key extraction efficiency of 96% compared to keys of infinite length.
Multi-bit upset aware hybrid error-correction for cache in embedded processors
Jiaqi, Dong; Keni, Qiu; Weigong, Zhang; Jing, Wang; Zhenzhen, Wang; Lihua, Ding
2015-11-01
A processor working in the radiation environment of space tends to suffer system failures from single-event effects in its circuits, caused by cosmic rays and high-energy particle radiation; the reliability of the processor has therefore become an increasingly serious issue. BCH-based error-correction codes can correct multi-bit errors but introduce large latency overhead. This paper proposes a hybrid error-correction approach that combines BCH and EDAC to correct both multi-bit and single-bit errors in caches at low cost. The proposed technique can correct up to four-bit errors and corrects single-bit errors in one cycle. Evaluation results show that the proposed hybrid error-correction scheme improves the performance of cache accesses by up to 20% compared to the pure BCH scheme.
Institute of Scientific and Technical Information of China (English)
李菲; 吴毅; 侯再红
2012-01-01
The performance of free-space optical (FSO) communication systems fluctuates greatly under the influence of atmospheric turbulence, so evaluating system error performance from the parameters of the system and the atmosphere is of practical interest. Based on an atmospheric-turbulence channel model and a photoelectric detection model, a mathematical simulation model of FSO system error performance is established, and an expression for the bit error rate of an FSO system in turbulent atmosphere is proposed. The simulation results are compared with experimental data obtained under weak-turbulence conditions, and the model is used to characterize the influence of factors such as intensity fluctuation and background noise. The simulations agree with the experimental data; intensity fluctuation is the chief cause of system performance fluctuation, and the optimal decision threshold must be adjusted to the actual atmospheric conditions. The model can efficiently evaluate FSO system performance under turbulent conditions and provides a reference for related theoretical research.
Circuit and interconnect design for high bit-rate applications
Veenstra, H.
2006-01-01
This thesis presents circuit and interconnect design techniques and design flows that address the most difficult and ill-defined aspects of the design of ICs for high bit-rate applications. Bottlenecks in interconnect design, circuit design and on-chip signal distribution for high bit-rate applications are identified.
Multi-bit soft error tolerable L1 data cache based on characteristic of data value
Institute of Scientific and Technical Information of China (English)
WANG Dang-hui; LIU He-peng; CHEN Yi-ran
2015-01-01
Owing to continuously decreasing feature sizes and increasing device density, on-chip caches have become susceptible to single-event upsets, which result in multi-bit soft errors. The increasing rate of multi-bit errors brings a high risk of data corruption and even application crashes. Traditionally, L1 D-caches have been protected from soft errors using simple parity to detect errors, recovering by reading correct data from the L2 cache, which induces a performance penalty. This work proposes to exploit redundancy based on the characteristics of data values. For a small data value, the replica is stored in the upper half of the word; the replica of a big data value is stored in a dedicated cache line, which sacrifices some capacity of the data cache. Experimental results show that the reliability of the L1 D-cache is improved by 65% at a cost of 1% in performance.
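The small-value replication scheme described above can be sketched as follows; the 32/16-bit word split and the helper names are illustrative assumptions, not the paper's implementation:

```python
def protect(word32):
    # If the value fits in the lower 16 bits, duplicate it in the upper half;
    # otherwise the word is stored unprotected here (the paper stores big
    # values' replicas in a dedicated cache line instead).
    if word32 < (1 << 16):
        return (word32 << 16) | word32, True
    return word32, False

def check(stored, protected):
    # On a read, a mismatch between the two halves signals a (multi-)bit error.
    if not protected:
        return True
    return (stored >> 16) == (stored & 0xFFFF)

stored, prot = protect(0x1234)
assert check(stored, prot)
corrupted = stored ^ 0b101  # flip two bits in the lower half
assert not check(corrupted, prot)
```

The point of the in-word replica is that the error check costs no extra cache access for small values, which dominate typical workloads.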
Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor
2005-01-01
We determine the number of minimum-weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.
Institute of Scientific and Technical Information of China (English)
徐卫林; 吴迪; 覃玉良; 韦保林; 段吉海
2014-01-01
A method of analyzing the bit error rate (BER) based on a channel model of on-body ultra-wideband (UWB) networks is proposed, taking into account the structure of the RAKE receiver, the angle between the transmitter and receiver antennas, and the signal-decision scheme. A digital storage oscilloscope is used to acquire the received time-domain signal, and the CLEAN deconvolution algorithm is used to obtain the impulse response of the body channel under line-of-sight conditions, for body-surface-to-body-surface and body-surface-to-external links. RAKE receivers of various structures and branch numbers are applied to receive a time-hopping signal corrupted by additive white Gaussian noise, and the BER is analyzed. The simulation results show that the all-RAKE receiver has the best BER performance but the most complex structure. For the same number of branches, the partial RAKE receiver's BER is nearly 3 dB worse than the selective RAKE's, but its structure is less complex. To reduce multipath components due to body reflection and obtain better BER and inter-symbol interference (ISI) performance, the results show that a vertical angle between the transmitter and receiver antennas should be avoided. For a repetition code, the BER performance of soft decision is 0.2-0.4 dB better than that of hard decision. The channel model and BER performance analysis provide a reference for UWB transceiver design and performance evaluation.
Bit rates in audio source coding
Veldhuis, Raymond N.J.
1992-01-01
The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a
Comprehensive Error Rate Testing (CERT)
U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...
Comodulation masking release in bit-rate reduction systems
DEFF Research Database (Denmark)
Vestergaard, Martin D.; Rasmussen, Karsten Bo; Poulsen, Torben
1999-01-01
It has been suggested that the level dependence of the upper masking slope be utilised in perceptual models in bit-rate reduction systems. However, comodulation masking release (CMR) phenomena lead to a reduction of the masking effect when a masker and a probe signal are amplitude modulated with the ...
Measuring verification device error rates
International Nuclear Information System (INIS)
A verification device generates a Type I (II) error when it recommends to reject (accept) a valid (false) identity claim. For a given identity, the rates or probabilities of these errors quantify random variations of the device from claim to claim. These are intra-identity variations. To some degree, these rates depend on the particular identity being challenged, and there exists a distribution of error rates characterizing inter-identity variations. However, for most security system applications we only need to know averages of this distribution. These averages are called the pooled error rates. In this paper the authors present the statistical underpinnings for the measurement of pooled Type I and Type II error rates. The authors consider a conceptual experiment, ''a crate of biased coins''. This model illustrates the effects of sampling both within trials of the same individual and among trials from different individuals. Application of this simple model to verification devices yields pooled error rate estimates and confidence limits for these estimates. A sample certification procedure for verification devices is given in the appendix
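The pooled-rate estimate described above can be sketched as a simple aggregation over per-identity trials, with a normal-approximation (Wald) confidence interval as one illustrative choice of confidence limit (the paper's certification procedure may use different statistics):

```python
import math

def pooled_rate(trials):
    # trials: list of (errors, attempts) pairs, one per identity.
    # The pooled rate aggregates all attempts across identities; the Wald
    # interval below is a simple ~95% normal-approximation confidence limit.
    errors = sum(e for e, n in trials)
    attempts = sum(n for _, n in trials)
    p = errors / attempts
    half = 1.96 * math.sqrt(p * (1 - p) / attempts)
    return p, (max(0.0, p - half), min(1.0, p + half))

# Three identities with differing per-identity Type I rates.
p, (lo, hi) = pooled_rate([(2, 100), (0, 100), (4, 100)])
assert abs(p - 0.02) < 1e-12
assert lo <= p <= hi
```

Pooling across identities captures both intra-identity and inter-identity variation in a single average, which is what most security-system applications need.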
Detecting bit-flip errors in a logical qubit using stabilizer measurements.
Ristè, D; Poletto, S; Huang, M-Z; Bruno, A; Vesterinen, V; Saira, O-P; DiCarlo, L
2015-04-29
Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements.
Biometric Quantization through Detection Rate Optimized Bit Allocation
Directory of Open Access Journals (Sweden)
C. Chen
2009-01-01
Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has been focusing on the design of optimal quantization and coding for each single feature component, yet the binary string—concatenation of all coded feature components—is not optimal. In this paper, we present a detection rate optimized bit allocation (DROBA) principle, which assigns more bits to discriminative features and fewer bits to nondiscriminative features. We further propose a dynamic programming (DP) approach and a greedy search (GS) approach to achieve DROBA. Experiments of DROBA on the FVC2000 fingerprint database and the FRGC face database show good performances. As a universal method, DROBA is applicable to arbitrary biometric modalities, such as fingerprint texture, iris, signature, and face. DROBA will bring significant benefits not only to the template protection systems but also to the systems with fast matching requirements or constrained storage capability.
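The greedy search (GS) flavour of bit allocation can be sketched as follows: assign one bit at a time to the feature whose detection rate degrades least, maximizing the product of per-feature detection rates. The detection-rate table and function name are illustrative assumptions, not DROBA's actual training data:

```python
def greedy_droba(det_rate, budget):
    # det_rate[i][b] = detection rate of feature i when quantized to b bits
    # (b = 0..max), assumed non-increasing in b. Greedily give the next bit
    # to the feature whose rate ratio det_rate[b+1]/det_rate[b] is largest,
    # i.e. the one that degrades least, maximizing the product of rates.
    alloc = [0] * len(det_rate)
    for _ in range(budget):
        best, best_ratio = None, -1.0
        for i, rates in enumerate(det_rate):
            if alloc[i] + 1 < len(rates):
                ratio = rates[alloc[i] + 1] / rates[alloc[i]]
                if ratio > best_ratio:
                    best, best_ratio = i, ratio
        if best is None:
            break
        alloc[best] += 1
    return alloc

# A discriminative feature (slow decay) receives more bits than a
# nondiscriminative one (fast decay), as the DROBA principle prescribes.
rates = [[1.0, 0.95, 0.90, 0.85], [1.0, 0.60, 0.30, 0.10]]
assert greedy_droba(rates, 3) == [3, 0]
```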
International Nuclear Information System (INIS)
This paper reports on the influence of the transmission bit rate on the performance of optical-fibre communication systems employing laser diodes subjected to high-speed direct modulation. Performance is evaluated in terms of the bit error rate (BER) and the power penalty associated with increasing the transmission bit rate while keeping the transmission distance fixed. The study is based on numerical analysis of the stochastic rate equations of the laser diode and takes into account noise mechanisms in the receiver. The correlation between the BER and the Q-parameter of the received signal is presented. The relative contributions of the transmitter noise and of the circuit and shot noise of the receiver to the BER are quantified as functions of the transmission bit rate. The results show that the power penalty at BER = 10⁻⁹ required to keep the transmission distance increases moderately as the bit rate approaches 1 Gbps at high bias currents; in this regime, shot noise is the main contributor to the BER. At higher bit rates and lower bias currents, the power penalty increases markedly, mainly because of laser noise induced by the pseudorandom bit-pattern effect.
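The correlation between the BER and the Q-parameter mentioned above is usually expressed through the standard Gaussian-noise approximation BER = ½·erfc(Q/√2), under which Q ≈ 6 corresponds to the familiar BER = 10⁻⁹ benchmark. A one-line sketch of this textbook relation:

```python
import math

def ber_from_q(q):
    # Standard Gaussian-noise approximation relating the received signal's
    # Q-parameter to the bit error rate: BER = 0.5 * erfc(Q / sqrt(2)).
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q ~ 6 corresponds to roughly BER = 1e-9; larger Q means lower BER.
assert 1e-10 < ber_from_q(6.0) < 1e-8
assert ber_from_q(7.0) < ber_from_q(6.0)
```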
Optical Switching and Bit Rates of 40 Gbit/s and above
DEFF Research Database (Denmark)
Ackaert, A.; Demester, P.; O'Mahony, M.;
2003-01-01
Optical switching in WDM networks introduces additional aspects to the choice of single channel bit rates compared to WDM transmission systems. The mutual impact of optical switching and bit rates of 40 Gbps and above is discussed....
Efficient rate control scheme for low bit rate H.264/AVC video coding
Institute of Scientific and Technical Information of China (English)
LI Zhi-cheng; ZHANG Yong-jun; LIU Tao; GU Wan-yi
2009-01-01
This article presents an efficient rate control scheme for H.264/AVC video coding in low-bit-rate environments. In the proposed scheme, an improved rate-distortion (RD) model is developed by both analytical and empirical approaches; it involves an enhanced mean-absolute-difference estimation method and a more rate-robust distortion model. Based on this RD model, an efficient macroblock-layer rate control scheme for H.264/AVC video coding is proposed. Experimental results show that this scheme encodes video sequences with higher peak signal-to-noise ratio (PSNR) gains and generates a bit stream closer to the target rate.
Error tolerance of topological codes with independent bit-flip and measurement errors
Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.
2016-07-01
Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.
Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.
2015-04-01
Current approaches for building quantum computing devices focus on two-level quantum systems which nicely mimic the concept of a classical bit, albeit enhanced with additional quantum properties. However, rather than artificially limiting the number of states to two, the use of d -level quantum systems (qudits) could provide advantages for quantum information processing. Among other merits, it has recently been shown that multilevel quantum systems can offer increased stability to external disturbances. In this study we demonstrate that topological quantum memories built from qudits, also known as Abelian quantum double models, exhibit a substantially increased resilience to noise. That is, even when taking into account the multitude of errors possible for multilevel quantum systems, topological quantum error-correction codes employing qudits can sustain a larger error rate than their two-level counterparts. In particular, we find strong numerical evidence that the thresholds of these error-correction codes are given by the hashing bound. Considering the significantly increased error thresholds attained, this might well outweigh the added complexity of engineering and controlling higher-dimensional quantum systems.
Extremely Low Bit-Rate Nearest Neighbor Search Using a Set Compression Tree.
Arandjelović, Relja; Zisserman, Andrew
2014-12-01
The goal of this work is a data structure to support approximate nearest neighbor search on very large scale sets of vector descriptors. The criteria we wish to optimize are: (i) that the memory footprint of the representation should be very small (so that it fits into main memory); and (ii) that the approximation of the original vectors should be accurate. We introduce a novel encoding method, named a Set Compression Tree (SCT), that satisfies these criteria. It is able to accurately compress 1 million descriptors using only a few bits per descriptor. The large compression rate is achieved by not compressing on a per-descriptor basis, but instead by compressing the set of descriptors jointly. We describe the encoding, decoding and use for nearest neighbor search, all of which are quite straightforward to implement. The method, tested on standard benchmarks (SIFT1M and 80 Million Tiny Images), achieves superior performance to a number of state-of-the-art approaches, including Product Quantization, Locality Sensitive Hashing, Spectral Hashing, and Iterative Quantization. For example, SCT has a lower error using 5 bits than any of the other approaches, even when they use 16 or more bits per descriptor. We also include a comparison of all the above methods on the standard benchmarks. PMID:26353147
Framed bit error rate testing for 100G ethernet equipment
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert;
2010-01-01
The Internet users behavioural patterns are migrating towards bandwidth-intensive applications, which require a corresponding capacity extension. The emerging 100 Gigabit Ethernet (GE) technology is a promising candidate for providing a ten-fold increase of todays available Internet transmission...
Up to 20 Gbit/s bit-rate transparent integrated interferometric wavelength converter
DEFF Research Database (Denmark)
Jørgensen, Carsten; Danielsen, Søren Lykke; Hansen, Peter Bukhave;
1996-01-01
We present a compact and optimised multiquantum-well-based, integrated all-active Michelson interferometer for 20 Gbit/s optical wavelength conversion. Bit-rate transparent operation is demonstrated with a conversion penalty well below 0.5 dB at bit rates ranging from 622 Mbit/s to 20 Gbit/s.
Application of time-hopping UWB range-bit rate performance in the UWB sensor networks
Nascimento, J.R.V. do; Nikookar, H.
2008-01-01
In this paper, the achievable range-bit rate performance is evaluated for time-hopping (TH) UWB networks complying with the FCC outdoor emission limits in the presence of multiple access interference (MAI). The application of TH-UWB range-bit rate performance to UWB sensor networks is presented.
High bit rate germanium single photon detectors for 1310nm
Seamons, J. A.; Carroll, M. S.
2008-04-01
There is increasing interest in the development of high-speed, low-noise and readily fieldable near-infrared (NIR) single photon detectors. InGaAs/InP avalanche photodiodes (APDs) operated in Geiger mode (GM) are a leading choice for the NIR due to their preeminence in optical networking. After-pulsing is, however, a primary challenge to operating InGaAs/InP single photon detectors at high frequencies [1]. After-pulsing is the effect of charge being released from traps that trigger false ("dark") counts. To overcome this problem, hold-off times between detection windows are used to allow the traps to discharge and suppress after-pulsing. The hold-off time, however, imposes an upper limit on the detection frequency, with degradation beginning at frequencies of ~100 kHz in InGaAs/InP. Alternatively, germanium (Ge) single photon avalanche photodiodes (SPADs) have been reported to have more than an order of magnitude lower charge-trap densities than InGaAs/InP SPADs [2], which allowed them to be operated successfully with passive quenching [2] (i.e., no gated hold-off times necessary), which is not possible with InGaAs/InP SPADs, indicating a much weaker dark-count dependence on hold-off time, consistent with fewer charge traps. Despite these encouraging results suggesting a possibly higher operating-frequency limit for Ge SPADs, little has been reported on Ge SPAD performance at high frequencies, presumably because previous work with Ge SPADs was discouraged by the strong demand to work at 1550 nm. NIR SPADs require cooling, which in the case of Ge SPADs dramatically reduces the quantum efficiency at 1550 nm. Recently, however, advantages of working at 1310 nm have been suggested, which, combined with the need to increase quantum bit rates for quantum key distribution (QKD), motivates examination of Ge detector performance at the very high detection rates where InGaAs/InP does not perform as well. Presented in this paper are measurements of a commercially available Ge APD
Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling
Directory of Open Access Journals (Sweden)
Ertürk Sarp
2007-01-01
This paper presents warped discrete cosine transform (WDCT)-based low bit-rate block coding using image downsampling. While WDCT aims to improve the performance of conventional DCT by frequency warping, the WDCT has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding, followed by upsampling after the decoding process, has been proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that a superior performance can be achieved if WDCT is used in conjunction with image downsampling-based block coding for low bit-rate applications.
Kaiser, F.; Aktas, D.; Fedrici, B.; Lunghi, T.; Labonté, L.; Tanzilli, Sébastien
2016-01-01
International audience; We demonstrate an experimental method for measuring energy-time entanglement over almost 80 nm spectral bandwidth in a single shot with a quantum bit error rate below 0.5%. Our scheme is extremely cost-effective and efficient in terms of resources as it employs only one source of entangled photons and one fixed unbalanced interferometer per phase-coded analysis basis. We show that the maximum analysis spectral bandwidth is obtained when the analysis interferometers are...
Towards the generation of random bits at terahertz rates based on a chaotic semiconductor laser
Energy Technology Data Exchange (ETDEWEB)
Kanter, Ido [Minerva Center and Department of Physics, Bar-Ilan University, Ramat-Gan 52900 (Israel); Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael [Department of Physics, Jack and Pearl Resnick Institute for Advanced Technology, Bar-Ilan University, Ramat-Gan, 52900 Israel (Israel)
2010-06-01
Random bit generators (RBGs) are important in many aspects of statistical physics and crucial in Monte Carlo simulations, stochastic modeling and quantum cryptography. The quality of an RBG is measured by the unpredictability of the bit string it produces and the speed at which the truly random bits can be generated. Deterministic algorithms generate pseudo-random numbers at high data rates, as they are limited only by electronic hardware speed, but their unpredictability is limited by the very nature of their deterministic origin. It is widely accepted that the core of any true RBG must be an intrinsically non-deterministic physical process, e.g. measuring thermal noise from a resistor. Owing to low signal levels, such systems are highly susceptible to bias, introduced by amplification, and to small nonrandom external perturbations, resulting in a limited generation rate, typically less than 100 Mbit/s. We present a physical random bit generator, based on a chaotic semiconductor laser with delayed optical feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.
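The bit-extraction step described above (retaining the least significant bits of a high derivative of the digitized intensity) can be sketched as follows. This is an illustrative approximation, not the authors' implementation; the function and parameter names are ours:

```python
import numpy as np

def random_bits_from_samples(samples, order=4, keep_bits=5):
    """Extract bits by taking an n-th order finite difference (a stand-in
    for the 'high derivative' of the digitized chaotic intensity) and
    retaining only the least significant bits of each difference value."""
    d = np.asarray(samples, dtype=np.int64)
    for _ in range(order):                 # repeated differencing ~ n-th derivative
        d = np.diff(d)
    lsbs = d & ((1 << keep_bits) - 1)      # keep only the low-order bits
    # unpack each retained value into individual bits, LSB first
    return [(v >> k) & 1 for v in lsbs for k in range(keep_bits)]
```

With real chaotic-laser samples, the retained low-order bits would then be concatenated into the output random sequence; the choice of `order` and `keep_bits` here is arbitrary.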
Field trial for the mixed bit rate at 100G and beyond
Yu, Jianjun; Jia, Zhensheng; Dong, Ze; Chien, Hung-Chang
2013-01-01
Successful joint experiments with Deutsche Telekom (DT) on long-haul transmission at 100G and beyond are demonstrated over standard single-mode fiber (SSMF) with inline EDFA-only amplification. The transmission link consists of 8 nodes and 950 km of installed SSMF in DT's optical infrastructure, with the addition of lab SSMF for extended optical reach. The first field transmission of 8×216.4-Gb/s Nyquist-WDM signals is reported over a 1750-km distance with 21.6-dB average loss per span. Each channel, modulated by a 54.2-Gbaud PDM-CSRZ-QPSK signal, is on the 50-GHz grid, achieving a net spectral efficiency (SE) of 4 bit/s/Hz. We also demonstrate mixed data-rate transmission with coexisting 1T, 400G, and 100G channels. The 400G channel uses four independent subcarriers modulated by 28-Gbaud PDM-QPSK signals, yielding a net SE of 4 bit/s/Hz, while 13 optically generated subcarriers from a single optical source are employed in the 1T channel with 25-Gbaud PDM-QPSK modulation. The 100G signal uses a real-time coherent PDM-QPSK transponder with 15% overhead of soft-decision forward error correction (SD-FEC). A digital post filter and 1-bit maximum likelihood sequence estimation (MLSE) are introduced in the receiver DSP to suppress noise, linear crosstalk and filtering effects. Our results show that future 400G and 1T channels utilizing the Nyquist WDM technique can be transmitted over long-haul distances with higher SE using the same QPSK format.
Theoretical Study of Quantum Bit Rate in Free-Space Quantum Cryptography
Institute of Scientific and Technical Information of China (English)
MA Jing; ZHANG Guang-Yu; TAN Li-Ying
2006-01-01
The quantum bit rate is an important operating parameter in free-space quantum key distribution. We introduce the measuring factor and the sifting factor, and present the expressions of the quantum bit rate based on the ideal single-photon sources and the single-photon sources with Poisson distribution. The quantum bit rate is studied in the numerical simulation for the laser links between a ground station and a satellite in a low earth orbit. The results show that it is feasible to implement quantum key distribution between a ground station and a satellite in a low earth orbit.
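For a rough sense of how a Poissonian (weak coherent) source enters such a rate estimate, a highly simplified sketch follows. The paper's measuring and sifting factors are more detailed than this toy expression; the function name and parameters are assumptions of ours:

```python
import math

def sifted_bit_rate(f_rep, mu, eta, sift=0.5):
    """Rough sifted-key rate estimate for a Poissonian photon source.
    f_rep -- pulse repetition rate [Hz]
    mu    -- mean photon number per pulse
    eta   -- overall link + detector efficiency
    sift  -- sifting factor (1/2 for BB84 basis reconciliation)
    """
    p_click = 1.0 - math.exp(-mu * eta)   # P(at least one photon detected)
    return f_rep * p_click * sift
```

For an ideal single-photon source, `p_click` would instead be simply `eta`; the Poissonian case is always lower at the same mean photon number.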
Room temperature single-photon detectors for high bit rate quantum key distribution
Energy Technology Data Exchange (ETDEWEB)
Comandar, L. C.; Patel, K. A. [Toshiba Research Europe Ltd., 208 Cambridge Science Park, Milton Road, Cambridge CB4 0GZ (United Kingdom); Engineering Department, Cambridge University, 9 J J Thomson Ave., Cambridge CB3 0FA (United Kingdom); Fröhlich, B., E-mail: bernd.frohlich@crl.toshiba.co.uk; Lucamarini, M.; Sharpe, A. W.; Dynes, J. F.; Yuan, Z. L.; Shields, A. J. [Toshiba Research Europe Ltd., 208 Cambridge Science Park, Milton Road, Cambridge CB4 0GZ (United Kingdom); Penty, R. V. [Engineering Department, Cambridge University, 9 J J Thomson Ave., Cambridge CB3 0FA (United Kingdom)
2014-01-13
We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.
Entropy rates of low-significance bits sampled from chaotic physical systems
Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.
2016-10-01
We examine the entropy of low-significance bits in analog-to-digital measurements of chaotic dynamical systems. We find the partition of measurement space corresponding to low-significance bits has a corrugated structure. Using simulated measurements of a map and experimental data from a circuit, we identify two consequences of this corrugated partition. First, entropy rates for sequences of low-significance bits more closely approach the metric entropy of the chaotic system, because the corrugated partition better approximates a generating partition. Second, accurate estimation of the entropy rate using low-significance bits requires long block lengths as the corrugated partition introduces more long-term correlation, and using only short block lengths overestimates the entropy rate. This second phenomenon may explain recent reports of experimental systems producing binary sequences that pass statistical tests of randomness at rates that may be significantly beyond the metric entropy rate of the physical source.
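The block-length dependence discussed above can be made concrete with a simple empirical block-entropy estimator (an illustrative sketch; the authors' estimation procedure may differ):

```python
from collections import Counter
from math import log2

def block_entropy_rate(bits, k):
    """Estimate the entropy rate (bits/symbol) as H_k / k, where H_k is
    the Shannon entropy of the empirical distribution of overlapping
    k-bit blocks of the sequence."""
    blocks = [tuple(bits[i:i + k]) for i in range(len(bits) - k + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    h_k = -sum((c / n) * log2(c / n) for c in counts.values())
    return h_k / k
```

As the abstract notes, for low-significance bits short block lengths `k` tend to overestimate the true entropy rate, because the long-range correlations introduced by the corrugated partition only appear at larger `k`.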
An Experimentally Validated SOA Model for High-Bit Rate System Applications
Institute of Scientific and Technical Information of China (English)
Hasan I. Saleheen
2003-01-01
A comprehensive model of the semiconductor optical amplifier, with experimental validation results, is presented. This model accounts for the various physical behaviors of the device that are necessary for high bit-rate system applications.
Context-Adaptive Arithmetic Coding Scheme for Lossless Bit Rate Reduction of MPEG Surround in USAC
Yoon, Sungyong; Pang, Hee-Suk; Sung, Koeng-Mo
We propose a new coding scheme for lossless bit rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of the spatial parameters. Experiments show that it achieves significant lossless bit rate reductions of 9.93% to 12.14% for the spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared to the original scheme. The proposed scheme, which is not currently included in USAC, can be used to improve the coding efficiency of MPEG Surround in USAC, where the saved bits can be utilized by the other modules in USAC.
Re-use of Low Bandwidth Equipment for High Bit Rate Transmission Using Signal Slicing Technique
DEFF Research Database (Denmark)
Wagner, Christoph; Spolitis, S.; Vegas Olmos, Juan José;
Massive fiber-to-the-home network deployment requires never-ending equipment upgrades operating at higher bandwidths. We show an effective signal slicing method which can reuse low-bandwidth opto-electronic components for optical communications at higher bit rates.
A novel dynamic frame rate control algorithm for H.264 low-bit-rate video coding
Institute of Scientific and Technical Information of China (English)
Yang Jing; Fang Xiangzhong
2007-01-01
The goal of this paper is to improve human visual perceptual quality as well as the coding efficiency of H.264 video under low bit rate conditions by adaptively adjusting the number of skipped frames. The encoded frames are selected according to the motion activity of each frame and the motion accumulation of successive frames. The motion activity analysis is based on the statistics of motion vectors and takes into account the characteristics of the H.264 coding standard. A prediction model of motion accumulation is proposed to reduce the complex computation of motion estimation. The dynamic encoding frame rate control algorithm is applied at both the frame level and the GOB (Group of Macroblocks) level. Simulations are performed to compare the performance of JM76 with the proposed frame-level and GOB-level schemes.
Optimization of Weight on Bit During Drilling Operation Based on Rate of Penetration Model
Directory of Open Access Journals (Sweden)
Sonny Irawan
2012-06-01
Full Text Available Drilling optimization is very important during drilling operations, because it can save time and operating cost and thus increase profit. Drilling optimization aims to optimize controllable variables during the drilling operation, such as weight on bit and bit rotation speed, to obtain the maximum drilling rate. In this project, the Bourgoyne and Young ROP model was selected to study the effects of several parameters during drilling. Important parameters such as depth, pore pressure, equivalent circulating density, bit weight, rotary speed, bit tooth wear and jet impact force were extracted from a final drilling report. Their relationship was studied using a statistical method, multiple regression analysis. The penetration model for the field was constructed from the results of the statistical analysis, and the result was then used to determine the optimum values of weight on bit that give optimum drilling operation. Overall, this project provides a study of the most complete mathematical model for rate of penetration, as constructed by Bourgoyne and Young. From the research, the constants representing several drilling variables were determined. The rate of penetration for the field was predicted from these constants for each data point versus depth. Finally, the optimized weight on bit was calculated for several data points, and a drilling simulator (Drill-Sim 500) was used to verify the results against actual field data.
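The Bourgoyne and Young model referenced above has the well-known exponential regression form ROP = exp(a1 + Σ a_j·x_j). A minimal sketch of evaluating it, assuming the eight regression constants and seven normalized drilling variables are already known (the function and argument names are ours):

```python
import math

def bourgoyne_young_rop(a, x):
    """Bourgoyne & Young penetration-rate form: ROP = exp(a1 + sum(a_j * x_j)).
    a -- the 8 regression constants a1..a8 fitted from field data
    x -- the 7 normalized drilling variables x2..x8 (depth, pore pressure,
         ECD, weight on bit, rotary speed, tooth wear, jet impact force terms)
    """
    return math.exp(a[0] + sum(aj * xj for aj, xj in zip(a[1:], x)))
```

Once the constants are fitted by regression, the optimum weight on bit can be found by sweeping the corresponding variable while holding the others fixed.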
Achieving high bit rate logical stochastic resonance in a bistable system by adjusting parameters
Yang, Ding-Xin; Gu, Feng-Shou; Feng, Guo-Jin; Yang, Yong-Min; Ball, Andrew
2015-11-01
The phenomenon of logical stochastic resonance (LSR) in a nonlinear bistable system is demonstrated by numerical simulations and experiments. However, the bit rates of the logical signals are relatively low and not suitable for practical applications. First, we examine the responses of the bistable system with fixed parameters to different bit rate logic input signals, showing that an arbitrary high bit rate LSR in a bistable system cannot be achieved. Then, a normalized transform of the LSR bistable system is introduced through a kind of variable substitution. Based on the transform, it is found that LSR for arbitrary high bit rate logic signals in a bistable system can be achieved by adjusting the parameters of the system, setting bias value and amplifying the amplitudes of logic input signals and noise properly. Finally, the desired OR and AND logic outputs to high bit rate logic inputs in a bistable system are obtained by numerical simulations. The study might provide higher feasibility of LSR in practical engineering applications. Project supported by the National Natural Science Foundation of China (Grant No. 51379526).
Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems.
Huang, Shih-Chia; Chen, Bo-Hao
2013-12-01
Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffer from either network congestion or unstable bandwidth. Evidence supporting these problems abounds in publications about wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which the bit rate is allocated to match the available network bandwidth is necessitated. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach based on the cerebellar model articulation controller (CMAC), an artificial neural network, to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches in variable bit-rate video streams over real-world limited bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76
Error-rate performance analysis of opportunistic regenerative relaying
Tourki, Kamel
2011-09-01
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that the performance simulation results coincide with our analytical results over a linear network (LN) architecture under Rayleigh fading channels. © 2011 IEEE.
Generating Key Streams in infrastructure WLAN using bit rate
Directory of Open Access Journals (Sweden)
R. Buvaneswari
2010-12-01
Full Text Available Due to the rapid growth of wireless networking, the fallible security of the 802.11 standard has come under close scrutiny. There are serious security issues that need to be sorted out before everyone is willing to transmit valuable corporate information on a wireless network. This report focuses on inherent flaws in the Wired Equivalent Privacy (WEP) protocol used by the 802.11 standard, and on the Temporal Key Integrity Protocol (TKIP), which is considered an interim solution for legacy 802.11 equipment. The Counter Mode/CBC-MAC Protocol, which is based on the Advanced Encryption Standard (AES), will not work on many of the currently shipping cards, which are based on 802.11b/g. This paper proposes an enhancement to TKIP in accordance with the transmission rate supported by the Physical Layer Convergence Protocol (PLCP) and shows the enhanced pattern of key streams generated from TKIP in order to avoid key reuse during encryption and decryption of the payload.
Adaptive Bit Rate Video Streaming Through an RF/Free Space Optical Laser Link
Directory of Open Access Journals (Sweden)
A. Akbulut
2010-06-01
Full Text Available This paper presents a channel-adaptive video streaming scheme which adjusts the video bit rate according to channel conditions and transmits video through a hybrid RF/free-space optical (FSO) laser communication system. The design criteria of the FSO link for video transmission over a 2.9-km distance are given, and adaptive bit rate video streaming according to the varying channel state over this link is studied. It is shown that the proposed structure is suitable for uninterrupted transmission of video over the hybrid wireless network with reduced packet delays and losses, even when the received power is decreased due to weather conditions.
An Improved Frame-Layer Bit Allocation Scheme for H.264/AVC Rate Control
Institute of Scientific and Technical Information of China (English)
LIN Gui-xu; ZHENG Shi-bao; ZHU Liang-jia
2009-01-01
In this paper, we aim at mitigating the video quality degradation caused by high motion or scene changes. An improved frame-layer bit allocation scheme for H.264/AVC rate control is proposed. First, the current frame is pre-encoded in 16×16 modes with a fixed quantization parameter (QP). The frame coding complexity is then measured based on the resulting bits and the peak signal-to-noise ratio (PSNR) of the pre-coding stage. Finally, a bit budget is calculated for the current frame according to its coding complexity and inter-frame PSNR fluctuation, combined with the buffer status. Simulation results show that, in comparison with the rate control scheme adopted in H.264, our method is more efficient at suppressing the sharp PSNR drops caused by high motion and scene changes. The visual quality variations within a sequence are also relieved.
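The idea of a complexity- and buffer-aware frame budget can be illustrated with a deliberately simplified toy rule. This is not the paper's scheme (which also uses pre-coding PSNR fluctuation); the names and the feedback gain are assumptions of ours:

```python
def frame_bit_budget(remaining_bits, complexities, buffer_fullness,
                     target_fullness=0.5, gain=0.5):
    """Toy frame-layer allocation: give the current frame a share of the
    remaining bit budget proportional to its measured coding complexity,
    then nudge the budget to steer the buffer toward its target fullness.
    complexities[0] is the current frame; the rest are upcoming frames."""
    share = complexities[0] / sum(complexities)      # current frame's share
    budget = remaining_bits * share
    budget *= 1.0 - gain * (buffer_fullness - target_fullness)
    return max(budget, 0.0)
```

A fuller scheme would map the budget to a QP via a rate model; here the point is only the direction of each correction: more complex frames get more bits, and an over-full buffer shrinks the budget.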
A 14-bit 200-MS/s time-interleaved ADC with sample-time error calibration
Institute of Scientific and Technical Information of China (English)
Zhang Yiwen; Chen Chixiao; Yu Bei; Ye Fan; Ren Junyan
2012-01-01
Sample-time error between channels degrades the resolution of time-interleaved analog-to-digital converters (TIADCs). A calibration method implemented in mixed circuits with low complexity and fast convergence is proposed in this paper. The algorithm for detecting sample-time error is based on correlation and widely applied to wide-sense stationary input signals. The detected sample-time error is corrected by a voltage-controlled sampling switch. The experimental result of a 2-channel 200-MS/s 14-bit TIADC shows that the signal-to-noise and distortion ratio improves by 19.1 dB, and the spurious-free dynamic range improves by 34.6 dB for a 70.12-MHz input after calibration. The calibration convergence time is about 20000 sampling intervals.
DESIGN ISSUES FOR BIT RATE-ADAPTIVE 3R O/E/O TRANSPONDER IN INTELLIGENT OPTICAL NETWORKS
Institute of Scientific and Technical Information of China (English)
朱栩; 曾庆济; 杨旭东; 刘逢清; 肖石林
2002-01-01
This paper reports the design and implementation of a bit-rate-adaptive optical-electronic-optical (O/E/O) transponder achieving almost full data-rate transparency up to 2.5 Gb/s with 3R (reamplifying, reshaping and retiming) processing in the electronic domain. Based on chipsets performing clock recovery in several continuous bit-rate ranges, a clock and data regenerating circuit self-adaptive to the bit rate of the input signal was developed. Key design issues are presented, with emphasis on the functional building blocks and the scheme for the bit-rate-adaptive retiming circuit. The experimental results show good scalability.
Influence of the FEC Channel Coding on Error Rates and Picture Quality in DVB Baseband Transmission
Directory of Open Access Journals (Sweden)
T. Kratochvil
2006-09-01
Full Text Available The paper deals with the component analysis of DTV (Digital Television) and DVB (Digital Video Broadcasting) baseband channel coding. The principles of the FEC (Forward Error Correction) error-protection codes used are briefly outlined, and the simulation model implemented in Matlab is presented. Results of the achieved bit and symbol error rates and the corresponding picture quality evaluation analysis are presented, including an evaluation of the influence of the channel coding on transmitted RGB images and their noise rates related to MOS (Mean Opinion Score). The conclusion of the paper compares the efficiency of the DVB channel codes.
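The kind of before/after-coding BER comparison described above can be illustrated with a toy Monte Carlo over a binary symmetric channel, using a rate-1/3 repetition code as a stand-in for the DVB FEC chain (the real chain uses Reed-Solomon and convolutional codes; this sketch only shows the simulation pattern):

```python
import random

def ber_repetition_code(p_channel, n_bits=20000, reps=3, seed=1):
    """Monte-Carlo bit error rate of a repetition code over a binary
    symmetric channel with crossover probability p_channel, decoded by
    majority vote. Illustrative stand-in for a full FEC simulation."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        # each repeated symbol is flipped independently with prob. p_channel
        received = [bit ^ (rng.random() < p_channel) for _ in range(reps)]
        decoded = int(sum(received) * 2 > reps)   # majority vote
        errors += decoded != bit
    return errors / n_bits
```

For p = 0.1 the coded BER should land near the theoretical 3p² - 2p³ ≈ 0.028, i.e. well below the raw channel error rate, which is the qualitative effect the paper quantifies for the DVB codes.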
Institute of Scientific and Technical Information of China (English)
Dong Jian-Ji; Zhang Xin-Liang; Huang De-Xiu
2008-01-01
This paper proposes and simulates a novel all-optical error-bit amplitude monitor based on cross-gain modulation and four-wave mixing in cascaded semiconductor optical amplifiers (SOAs), which function as logic NOT and logic AND, respectively. The proposed scheme is successfully simulated for a 40 Gb/s return-to-zero (RZ) signal with different duty cycles. In the first stage, the SOA is followed by a detuning filter to accelerate the gain recovery as well as improve the extinction ratio. A clock probe signal is used to avoid the edge pulse-pairs in the output waveform. Among these RZ formats, the 33% RZ format is preferred to obtain the largest eye opening. The normalized error amplitude, defined as the error bit amplitude over the standard mark amplitude, has a dynamic range from 0.1 to 0.65 for all RZ formats. The simulations show a small input power dynamic range because of the nonlinear gain variation in the first stage. This scheme is competent for the non-return-to-zero format at 10 Gb/s as well.
Soury, Hamza
2012-06-01
This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed-form expression in terms of the Fox H-function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions, such as generalized-K fading and Nakagami-m fading, and special additive noise distributions, such as Gaussian and Laplacian noise, are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer-based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.
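In the no-fading, purely Gaussian special case mentioned above, the average bit error probability of coherent BPSK reduces to the classical closed form Pb = Q(√(2γ)) = ½·erfc(√γ), where γ is the SNR per bit. A sketch (our function name; only the AWGN special case, not the paper's Fox H-function expression):

```python
import math

def bpsk_ber_awgn(snr_linear):
    """BPSK average bit error probability over AWGN: the Gaussian,
    no-fading special case. snr_linear is the SNR per bit (linear scale).
    Pb = 0.5 * erfc(sqrt(gamma))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear))
```

At 0 SNR this gives the expected 0.5 (pure guessing); at 10 dB (γ = 10) it drops to a few times 10⁻⁶.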
Directory of Open Access Journals (Sweden)
S. Chris Prema
2015-01-01
Full Text Available A rate request sequenced bit loading reallocation algorithm is proposed. The spectral holes detected by spectrum sensing (SS in cognitive radio (CR are used by secondary users. This algorithm is applicable to Discrete Multitone (DMT systems for secondary user reallocation. DMT systems support different modulation on different subchannels according to Signal-to-Noise Ratio (SNR. The maximum bits and power that can be allocated to each subband is determined depending on the channel state information (CSI and secondary user modulation scheme. The spectral holes or free subbands are allocated to secondary users depending on the user rate request and subchannel capacity. A comparison is done between random rate request and sequenced rate request of secondary user for subchannel allocation. Through simulations it is observed that with sequenced rate request higher spectral efficiency is achieved with reduced complexity.
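The per-subchannel bit capacity and sequenced allocation described above can be sketched with the standard gap approximation b_i = ⌊log2(1 + SNR_i/Γ)⌋. This is a simplified greedy sketch under assumed names and a nominal SNR gap, not the paper's exact algorithm:

```python
from math import log2, floor

def allocate_subbands(snrs, rate_requests, gamma=9.8, max_bits=10):
    """Greedy sketch of sequenced-rate-request subband allocation.
    Each free subband i can carry b_i = floor(log2(1 + SNR_i / Gamma))
    bits (Gamma = SNR gap, linear scale); users are served in request
    order, taking subbands until their rate request is met."""
    caps = [min(floor(log2(1 + s / gamma)), max_bits) for s in snrs]
    free = list(range(len(snrs)))
    assignment = {}
    for user, need in enumerate(rate_requests):
        got, taken = 0, []
        while free and got < need:
            i = free.pop(0)          # take the next free subband
            taken.append(i)
            got += caps[i]
        assignment[user] = (taken, got)
    return assignment
```

Sorting the requests before serving them (the "sequenced" case) changes only the iteration order here; the paper's comparison of random versus sequenced requests concerns exactly that ordering.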
Multiple Bit Error Tolerant Galois Field Architectures over GF(2^m)
Directory of Open Access Journals (Sweden)
Mahesh Poolakkaparambil
2012-06-01
Full Text Available Radiation-induced transient faults like single event upsets (SEUs) and multiple event upsets (MEUs) in memories are well researched. As a result of technology scaling, it is observed that logic blocks are also vulnerable to malfunction when deployed in radiation-prone environments. However, the current literature lacks efforts to mitigate such issues in digital logic circuits exposed to naturally radiation-prone environments or subjected to malicious attacks by an eavesdropper using highly energized particles. This may lead to catastrophe in critical applications such as widely used cryptographic hardware. In this paper, novel dynamic error correction architectures based on BCH codes are proposed for correcting multiple errors, which makes the circuits robust against radiation-induced faults irrespective of the location of the errors. As a benchmark test case, the finite field multiplier circuit is considered as the functional block that can be the target of major attacks. The proposed scheme has the capability to handle stuck-at faults, which are also a major cause of failure affecting the overall yield of a nano-CMOS integrated chip. The experimental results show that the proposed dynamic error detection and correction architecture results in a 50% reduction in critical path delay by dynamically bypassing the error correction logic when no error is present. The area overhead for the larger multiplier is within 150%, which is 33% lower than that of TMR and comparable to the 130% overhead of single-error-correcting Hamming and LDPC based techniques.
Automatic network-adaptive ultra-low-bit-rate video coding
Chien, Wei-Jung; Lam, Tuyet-Trang; Abousleman, Glen P.; Karam, Lina J.
2006-05-01
This paper presents a software-only, real-time video coder/decoder (codec) for use with low-bandwidth channels where the bandwidth is unknown or varies with time. The codec incorporates a modified JPEG2000 core and interframe predictive coding, and can operate with network bandwidths of less than 1 kbits/second. The encoder and decoder establish two virtual connections over a single IP-based communications link. The first connection is UDP/IP guaranteed throughput, which is used to transmit the compressed video stream in real time, while the second is TCP/IP guaranteed delivery, which is used for two-way control and compression parameter updating. The TCP/IP link serves as a virtual feedback channel and enables the decoder to instruct the encoder to throttle back the transmission bit rate in response to the measured packet loss ratio. It also enables either side to initiate on-the-fly parameter updates such as bit rate, frame rate, frame size, and correlation parameter, among others. The codec also incorporates frame-rate throttling whereby the number of frames decoded is adjusted based upon the available processing resources. Thus, the proposed codec is capable of automatically adjusting the transmission bit rate and decoding frame rate to adapt to any network scenario. Video coding results for a variety of network bandwidths and configurations are presented to illustrate the vast capabilities of the proposed video coding system.
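The feedback-driven rate throttling described above can be sketched with a simple AIMD-style rule. This is our illustrative stand-in for the codec's control logic, not the actual algorithm; the threshold and step sizes are assumptions:

```python
def throttle_bit_rate(current_rate, loss_ratio, min_rate=1.0,
                      max_rate=64.0, loss_threshold=0.02):
    """AIMD-style throttle driven by the measured packet loss ratio
    reported over the feedback (TCP/IP) channel. Rates in kbit/s.
    On loss above the threshold, halve the transmission bit rate;
    otherwise probe upward additively."""
    if loss_ratio > loss_threshold:
        new_rate = current_rate / 2.0      # multiplicative decrease
    else:
        new_rate = current_rate + 1.0      # additive increase
    return min(max(new_rate, min_rate), max_rate)
```

In the paper's system the decoder additionally requests on-the-fly updates of frame rate, frame size and other parameters over the same control connection; this sketch covers only the bit rate reaction.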
FPGA Based Test Module for Error Bit Evaluation in Serial Links
J. Kolouch
2006-01-01
A test module for serial links is described. In the link transmitter, one module generates a pseudorandom pulse signal that is transmitted by the link. A second module, located in the link receiver, generates the same signal and compares it to the received signal. Errors caused by the signal transmission can then be detected and the results sent to a master computer for further processing, such as statistical evaluation. The module can be used for long-term error monitoring without the need for human operator presence.
Kartiwa, Iwa; Jung, Sang-Min; Hong, Moon-Ki; Han, Sang-Kook
2013-06-01
We experimentally demonstrate millimeter-wave signal generation by the optical carrier suppression (OCS) method, using a single-drive Mach-Zehnder modulator as a seeding light source for a 20 Gb/s WDM-OFDM-PON over 20-km single-fiber loopback transmission based on cost-effective RSOA modulation. A practical discrete-rate adaptive bit loading algorithm was employed in this colorless ONU system to maximize the achievable bit rate for an average bit error rate (BER) below 2 × 10^-3.
Extending the lifetime of a quantum bit with error correction in superconducting circuits
Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.
2016-08-01
Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The 'break-even' point of QEC, at which the lifetime of a qubit exceeds the lifetime of the constituents of the system, has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0>f and |1>f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.
FPGA Based Test Module for Error Bit Evaluation in Serial Links
Directory of Open Access Journals (Sweden)
J. Kolouch
2006-04-01
Full Text Available A test module for serial links is described. In the link transmitter, one module generates a pseudorandom pulse signal that is transmitted over the link. A second module, located in the link receiver, generates the same signal and compares it to the received signal. Errors caused by the signal transmission can then be detected and the results sent to a master computer for further processing, such as statistical evaluation. The module can be used for long-term error monitoring without the need for a human operator to be present.
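The generate-and-compare scheme described above can be sketched in software with a PRBS-7 generator (polynomial x^7 + x^6 + 1, a common choice in serial-link testing; the report does not specify which sequence the module actually uses), with the receiver running an identical generator and counting mismatches:

```python
import random

def lfsr_prbs7(state=0x7F):
    """Generate a PRBS-7 bit stream from a 7-bit LFSR (x^7 + x^6 + 1)."""
    while True:
        bit = ((state >> 6) ^ (state >> 5)) & 1  # feedback from taps 7 and 6
        state = ((state << 1) | bit) & 0x7F      # shift feedback bit in
        yield bit

def count_bit_errors(tx_bits, rx_bits):
    """Compare transmitted and received bits; return the error count."""
    return sum(t != r for t, r in zip(tx_bits, rx_bits))

# Transmitter and receiver run identical generators seeded the same way.
tx = [b for _, b in zip(range(10000), lfsr_prbs7())]
rx = list(tx)
for i in random.sample(range(len(rx)), 5):  # inject 5 channel errors
    rx[i] ^= 1
print(count_bit_errors(tx, rx))  # -> 5
```

Dividing the error count by the number of compared bits gives the measured bit error rate for the monitoring interval.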
On the average capacity and bit error probability of wireless communication systems
Yilmaz, Ferkan
2011-12-01
Analyses of the average binary error probability and average capacity of wireless communication systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probability and the average capacity of single- and multiple-link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper can be easily calculated and is applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.
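As a minimal illustration of the MGF approach (using BPSK over Rayleigh fading rather than the paper's generalized Gamma case, because the Rayleigh SNR MGF has a simple form), the single-fold averaging integral can be checked against the well-known closed form:

```python
import math

def mgf_rayleigh(s, avg_snr):
    """MGF of the SNR for Rayleigh fading: M(s) = 1 / (1 - s * avg_snr)."""
    return 1.0 / (1.0 - s * avg_snr)

def avg_bep_bpsk_mgf(avg_snr, n=2000):
    """Average BPSK BEP via the single-fold MGF integral
    Pb = (1/pi) * integral_0^{pi/2} M(-1/sin^2 t) dt  (trapezoidal rule)."""
    a, b = 1e-9, math.pi / 2
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    ys = [mgf_rayleigh(-1.0 / math.sin(t) ** 2, avg_snr) for t in xs]
    return (sum(ys) - 0.5 * (ys[0] + ys[-1])) * h / math.pi

def avg_bep_bpsk_closed(avg_snr):
    """Known closed form for BPSK over Rayleigh fading."""
    return 0.5 * (1.0 - math.sqrt(avg_snr / (1.0 + avg_snr)))

for snr in (1.0, 10.0, 100.0):
    assert abs(avg_bep_bpsk_mgf(snr) - avg_bep_bpsk_closed(snr)) < 1e-6
```

The same single-fold integral applies to other fading families once their SNR MGF is substituted, which is the computational advantage the abstract refers to.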
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate JPEG-formatted color images may allow for more highly compressed images while maintaining equivalent quality at a smaller file size or bitrate. For RGB, an image is decomposed into three color bands: red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt into a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3 while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.
All-Optical Clock Recovery from NRZ-DPSK Signals at Flexible Bit Rates
Institute of Scientific and Technical Information of China (English)
YU Yu; ZHANG Xin-Liang; DONG Jian-Ji; HUANG De-Xiu
2008-01-01
We propose and demonstrate all-optical clock recovery (CR) from nonreturn-to-zero differential phase-shift-keying (NRZ-DPSK) signals at different bit rates, theoretically and experimentally. By pre-processing with a single optical filter, the clock component can be enhanced significantly, and the clock signal can thus be extracted from the preprocessed signal by cascading a CR unit with a semiconductor optical amplifier based fibre ring laser. Compared with previous preprocessing schemes, the single filter is simple and suitable for different bit rates. The clock signals are obtained with an extinction ratio over 10 dB and rms timing jitter of 0.86 and 0.9 at 10 and 20 Gb/s, respectively. The output performance as a function of the bandwidth and the detuning of the filter is analysed. By simply using a filter with a larger bandwidth, operation at much higher bit rates can be achieved easily.
Error Rate Reduction of Super-Resolution Near-Field Structure Disc
Kim, Jooho; Bae, Jaecheol; Hwang, Inoh; Lee, Jinkyung; Park, Hyunsoo; Chung, Chongsam; Kim, Hyunki; Park, Insik; Tominaga, Junji
2007-06-01
We report the error rate improvement of super-resolution near-field structure (super-RENS) write-once read-many (WORM) and read-only-memory (ROM) discs in a blue laser optical system [laser wavelength (λ), 405 nm; numerical aperture (NA), 0.85]. We prepared samples of higher-carrier-level WORM discs and wider-pit-width ROM discs. Using controlled equalization (EQ) characteristics, an adaptive write strategy, and an advanced adaptive partial response maximum likelihood (PRML) technique, we obtained a bit error rate (bER) on the order of 10^-4. This result shows the high feasibility of super-RENS technology for practical use.
Very Low Bit-Rate Video Coding Using Motion-Compensated 3-D Wavelet Transform
Institute of Scientific and Technical Information of China (English)
Anonymous
1999-01-01
A new motion-compensated 3-D wavelet transform (MC-3DWT) video coding scheme is presented in this paper. The new coding scheme has a good performance in average PSNR, compression ratio and visual quality of reconstructions compared with the existing 3-D wavelet transform (3DWT) coding methods and the motion-compensated 2-D wavelet transform (MC-WT) coding method. The new MC-3DWT coding scheme is suitable for very low bit-rate video coding.
Pei, Soo-Chang; Guo, Jing-Ming
2006-06-01
In this paper, a high-capacity data hiding method is proposed for embedding a large amount of information into halftone images. The embedded watermark can be distributed into several error-diffused images with the proposed minimal-error bit-searching technique (MEBS). The method can also be generalized to a self-decoding mode with dot diffusion or color halftone images. In the experiments, embedded capacities from 33% up to 50% and good quality results are achieved. Furthermore, the proposed MEBS method is also extended to robust watermarking against degradation from printing-and-scanning and several kinds of distortions. Finally, a least-mean-square-based halftoning is developed to produce an edge-enhanced halftone image, and this technique also cooperates with MEBS for all the applications described above, including high-capacity data hiding with secret sharing or self-decoding mode, as well as robust watermarking. The results are much sharper than those of the error diffusion or dot diffusion methods.
Yilmaz, Ferkan
2014-04-01
The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed-form - by means of using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2] and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
Low Bit-Rate Image Compression using Adaptive Down-Sampling technique
Directory of Open Access Journals (Sweden)
V.Swathi
2011-09-01
Full Text Available In this paper, we use a practical approach of uniform down-sampling in image space, while making the sampling adaptive by spatially varying, directional low-pass pre-filtering. The resulting down-sampled pre-filtered image remains a conventional square sample grid and can thus be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then up-converts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass pre-filtering. The proposed compression approach of collaborative adaptive down-sampling and up-conversion (CADU) outperforms JPEG 2000 in PSNR measure at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that over-sampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.
Monitoring Error Rates In Illumina Sequencing
Manley, Leigh J.; Ma, Duanduan; Levine, Stuart S.
2016-01-01
Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted. PMID:27672352
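The PPR statistic itself is simple to state: for each sequencing cycle n, it is the percentage of reads whose first n bases match the reference exactly. A minimal sketch (a simplified model assuming reads already aligned to a known reference; the actual tool works on raw sequencer output):

```python
def percent_perfect_reads(reads, reference):
    """Percent Perfect Reads per cycle: for each cycle n, the percentage of
    reads whose first n bases all match the reference exactly."""
    n_cycles = len(reference)
    ppr = []
    for n in range(1, n_cycles + 1):
        perfect = sum(r[:n] == reference[:n] for r in reads)
        ppr.append(100.0 * perfect / len(reads))
    return ppr

# Four toy 4-cycle reads against reference ACGT: one read diverges at
# cycle 3, another at cycle 4, so PPR decreases monotonically.
reads = ["ACGT", "ACGA", "ACTT", "ACGT"]
assert percent_perfect_reads(reads, "ACGT") == [100.0, 100.0, 75.0, 50.0]
```

Because a read that goes wrong at cycle n stays imperfect for all later cycles, PPR is non-increasing, which makes sudden drops easy to attribute to specific cycles when troubleshooting a run.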
Power consumption analysis of constant bit rate data transmission over 3G mobile wireless networks
DEFF Research Database (Denmark)
Wang, Le; Ukhanova, Ann; Belyaev, Evgeny
2011-01-01
This paper presents the analysis of the power consumption of data transmission with constant bit rate over 3G mobile wireless networks. Our work includes the description of the transition state machine in 3G networks, followed by the detailed energy consumption analysis and measurement results of...... the radio link power consumption. Based on these description and analysis, we propose power consumption model. The power model was evaluated on the smartphone Nokia N900, which follows a 3GPP Release 5 and 6 supporting HSDPA/HSPA data bearers. Further we propose method of parameters selection for 3GPP...
Bit Rate Maximising Per-Tone Equalisation with Adaptive Implementation for DMT-Based Systems
Directory of Open Access Journals (Sweden)
Suchada Sitjongsataporn
2009-01-01
Full Text Available We present a bit rate maximising per-tone equalisation (BM-PTEQ) cost function that is based on an exact subchannel SNR as a function of the per-tone equaliser in discrete multitone (DMT) systems. We then introduce the proposed BM-PTEQ criterion, whose derivation is shown to inherit the methodology of the existing bit rate maximising time-domain equalisation (BM-TEQ). By solving the nonlinear BM-PTEQ cost function, an adaptive BM-PTEQ approach based on a recursive Levenberg-Marquardt (RLM) algorithm is presented with the adaptive inverse square-root (iQR) algorithm for DMT-based systems. Simulation results confirm that the performance of the proposed adaptive iQR RLM-based BM-PTEQ converges close to the performance of the proposed BM-PTEQ. Moreover, the performance of both of these proposed BM-PTEQ algorithms is improved compared with the BM-TEQ.
Efficient Region-of-Interest Scalable Video Coding with Adaptive Bit-Rate Control
Directory of Open Access Journals (Sweden)
Dan Grois
2013-01-01
Full Text Available This work relates to region-of-interest (ROI) coding, a desirable feature in future applications based on scalable video coding, which is an extension of the H.264/MPEG-4 AVC standard. Due to dramatic technological progress, there is a plurality of heterogeneous devices that can be used for viewing a variety of video content. Devices such as smartphones and tablets are mostly resource-limited, which makes it difficult to display high-quality content. Usually, the displayed video content contains one or more ROI(s), which should be adaptively selected from the pre-encoded scalable video bitstream. Thus, an efficient scalable ROI video coding scheme is proposed in this work, enabling the extraction of the desired regions-of-interest and the adaptive setting of the desired ROI location, size, and resolution. In addition, an adaptive bit-rate control is provided for region-of-interest scalable video coding. The performance of the presented techniques is demonstrated and compared with the joint scalable video model reference software (JSVM 9.19), showing significant bit-rate savings as a tradeoff for relatively low PSNR degradation.
Logical error rate in the Pauli twirling approximation.
Katabarwa, Amara; Geller, Michael R
2015-09-30
Estimates of the performance of error correction protocols are necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes.
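The Pauli twirl that defines the PTA can be sketched for a single qubit: twirling a channel with Kraus operators {A_k} over the Pauli basis yields the Pauli channel with probabilities p_sigma = (1/4) * sum_k |Tr(sigma† A_k)|². This is the generic textbook construction, shown here on amplitude damping, not the paper's 9-qubit simulation:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_twirl(kraus_ops):
    """Pauli-twirl a single-qubit channel with the given Kraus operators:
    p_sigma = (1/4) * sum_k |Tr(sigma^dag A_k)|^2 for sigma in {I, X, Y, Z}."""
    return [sum(abs(np.trace(P.conj().T @ A)) ** 2 for A in kraus_ops) / 4
            for P in (I, X, Y, Z)]

# Amplitude damping (energy loss) with decay probability gamma.
gamma = 0.1
A0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
A1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

pI, pX, pY, pZ = pauli_twirl([A0, A1])
assert abs(pI + pX + pY + pZ - 1.0) < 1e-12  # twirled channel is a Pauli channel
assert abs(pX - gamma / 4) < 1e-12 and abs(pY - gamma / 4) < 1e-12
```

The resulting Pauli channel (here p_X = p_Y = gamma/4, p_Z = (1 - sqrt(1 - gamma))²/4) is efficiently simulable by stochastic Pauli insertion, which is what makes the PTA tractable for circuit-level studies.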
Power consumption analysis of constant bit rate video transmission over 3G networks
DEFF Research Database (Denmark)
Ukhanova, Ann; Belyaev, Evgeny; Wang, Le;
2012-01-01
This paper presents an analysis of the power consumption of video data transmission with constant bit rate over 3G mobile wireless networks. The work includes the description of the radio resource control transition state machine in 3G networks, followed by a detailed power consumption analysis...... and measurements of the radio link power consumption. Based on this description and analysis, we propose our power consumption model. The power model was evaluated on a smartphone Nokia N900, which follows 3GPP Release 5 and 6 supporting HSDPA/HSUPA data bearers. We also propose a method for parameter selection...... for the 3GPP transition state machine that allows to decrease power consumption on a mobile device taking signaling traffic, buffer size and latency restrictions into account. Furthermore, we discuss the gain in power consumption vs. PSNR for transmitted video and show the possibility of performing power...
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
Directory of Open Access Journals (Sweden)
Peter McInerney
2014-01-01
Full Text Available As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.
McInerney, Peter; Adams, Paul; Hadi, Masood Z
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572
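Why polymerase fidelity matters for cloning can be shown with a back-of-the-envelope calculation: the expected number of mutations in a cloned product scales as error rate × target length × number of template doublings. The numeric error rates and cycle count below are assumed, illustrative values, not figures from the study:

```python
def expected_errors_per_clone(error_rate, target_bp, doublings):
    """Expected number of mutations in one cloned PCR product.
    error_rate: errors per base per template doubling (assumed value)."""
    return error_rate * target_bp * doublings

# Illustrative (assumed) figures: a low-fidelity enzyme at ~1e-4 vs a
# proofreading enzyme at ~1e-6 errors/base/doubling, 1 kb target, 20 doublings.
low_fidelity = expected_errors_per_clone(1e-4, 1000, 20)   # ~2 mutations expected
proofreading = expected_errors_per_clone(1e-6, 1000, 20)   # ~0.02 mutations expected
assert low_fidelity > proofreading
```

Even modest differences in per-base error rate therefore translate into a large fraction of clones carrying at least one mutation, which is why direct, like-for-like fidelity comparisons are valuable.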
Scalable In-Band Optical Notch-Filter Labeling for Ultrahigh Bit Rate Optical Packet Switching
DEFF Research Database (Denmark)
Medhin, Ashenafi Kiros; Galili, Michael; Oxenløwe, Leif Katsuo
2014-01-01
We propose a scalable in-band optical notch-filter labeling scheme for optical packet switching of high-bit-rate data packets. A detailed characterization of the notch-filter labeling scheme and its effect on the quality of the data packet is carried out in simulation and verified by experimental...... demonstrations. The scheme is able to generate more than 91 different labels that can be applied to 640-Gb/s optical time division multiplexed packets causing an eye opening penalty of 1.2 dB. Experimental demonstration shows that up to 256 packets can be uniquely labeled by employing up to eight notch filters...... with only 0.9-dB power penalty to achieve a BER of 1E-9. Using the proposed labeling scheme, optical packet switching of 640 Gb/s data packets is experimentally demonstrated, in which two data packets are labeled by making no spectral hole and one spectral hole, respectively, using a notch filter, and are switched using a LiNbO3
DEFF Research Database (Denmark)
Vaa, Michael; Mikkelsen, Benny; Jepsen, Kim Stokholm;
1996-01-01
A novel bit-rate flexible and very power efficient all-optical demultiplexer using differential optical control of a monolithically integrated Michelson interferometer with MQW SOAs is demonstrated at 40 to 10 Gbit/s. Gain switched DFB lasers provide ultra stable data and control signals....
DEFF Research Database (Denmark)
Diez, S.; Mecozzi, A.; Mørk, Jesper
1999-01-01
We investigate the saturation properties of four-wave mixing of short optical pulses in a semiconductor optical amplifier. By varying the gain of the optical amplifier, we find a strong dependence of both conversion efficiency and signal-to-background ratio on pulse width and bit rate...
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2011-06-01
In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael
2010-01-01
We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.
Adaptive Power and Bit Allocation in Multicarrier Systems
Institute of Scientific and Technical Information of China (English)
HUO Yong-qing; PENG Qi-cong; SHAO Huai-zong
2007-01-01
We present two adaptive power and bit allocation algorithms for multicarrier systems in a frequency-selective fading environment. One algorithm allocates bits based on maximizing the channel capacity; the other allocates bits based on minimizing the bit-error-rate (BER). Both algorithms allocate power based on minimizing the BER. Results show that the proposed algorithms are more effective than Fischer's algorithm at low average signal-to-noise ratio (SNR). This indicates that our algorithms can achieve high spectral efficiency and high communication reliability in bad channel states. Results also show the bit and power allocation of each algorithm and the effect of the number of subcarriers on BER performance.
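A common way to realize adaptive bit allocation of this kind is greedy (Hughes-Hartogs style) loading: repeatedly grant one extra bit to the subcarrier where it costs the least additional power. The sketch below is a generic illustration under an assumed incremental-power model (extra power proportional to 2^b × noise/gain), not the authors' algorithms:

```python
import math

def greedy_bit_loading(channel_gains, noise_power, total_power, max_bits=8):
    """Greedy bit allocation: repeatedly add one bit to the subcarrier where it
    costs the least extra power, until the power budget is exhausted."""
    n = len(channel_gains)
    bits = [0] * n
    used = 0.0
    while True:
        # Incremental power to go from b to b+1 bits on each subcarrier
        # (assumed model: proportional to (2^(b+1) - 2^b) * noise / gain).
        costs = [
            (2 ** (bits[i] + 1) - 2 ** bits[i]) * noise_power / channel_gains[i]
            if bits[i] < max_bits else math.inf
            for i in range(n)
        ]
        best = min(range(n), key=lambda i: costs[i])
        if used + costs[best] > total_power:
            break
        used += costs[best]
        bits[best] += 1
    return bits

bits = greedy_bit_loading([1.0, 0.5, 0.1, 0.01], noise_power=0.1, total_power=10.0)
assert bits[0] >= bits[1] >= bits[2] >= bits[3]  # strong subcarriers get more bits
```

The greedy rule naturally concentrates bits on strong subcarriers and starves deeply faded ones, which is the qualitative behavior the abstract's BER results reflect.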
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
Institute of Scientific and Technical Information of China (English)
贾徽徽; 王潮; 顾健; 陆臻
2016-01-01
The error bits that arise in existing side-channel attacks on ECC are difficult to avoid and cannot be corrected quickly. In this paper, a new search algorithm, the Grover quantum meet-in-the-middle search algorithm, is proposed by combining the Grover quantum search algorithm with the meet-in-the-middle attack, and it is applied to side-channel attacks on ECC. The algorithm can recover a key of size N containing M error bits with a quadratic speedup over the classical search complexity of O(N^(M+1)). Analysis shows that the error bits arising in the ECC attack are corrected with a success rate of 1, and that the algorithm effectively reduces the computational complexity.
Error Growth Rate in the MM5 Model
Ivanov, S.; Palamarchuk, J.
2006-12-01
The goal of this work is to estimate model error growth rates in simulations of the atmospheric circulation by the MM5 model, all the way from the short range to the medium range and beyond. The major topics addressed are: (i) searching for the optimal set of parameterization schemes; (ii) evaluating the spatial structure and scales of the model error for various atmospheric fields; (iii) determining the geographical regions where model errors are largest; (iv) defining particular atmospheric patterns contributing to fast and significant model error growth. Results are presented for the geopotential, temperature, relative humidity and horizontal wind component fields on standard surfaces over the Atlantic-European region during winter 2002. Various combinations of parameterization schemes for cumulus, PBL, moisture and radiation are used to identify which one yields the smallest difference between the model state and the analysis. The comparison of the model fields is carried out against the ERA-40 reanalysis of the ECMWF. Results show that the rate at which the model error grows, as well as its magnitude, varies depending on the forecast range, atmospheric variable and level. The typical spatial scale and structure of the model error also depend on the particular atmospheric variable. The distribution of the model error over the domain can be separated into two parts: steady and transient. The steady part is associated with a few high mountain regions, including Greenland, where the model error is larger. The transient part mainly moves along with areas of high gradients in the atmospheric flow. Acknowledgement: This study has been supported by NATO Science for Peace grant #981044. The MM5 modelling system used in this study has been provided by UCAR. ERA-40 re-analysis data have been obtained from the ECMWF data server.
Critique of recent models for human error rate assessment
Energy Technology Data Exchange (ETDEWEB)
Apostolakis, G.E.; Bier, V.M.; Mosleh, A.
1988-01-01
This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent.
Evaluation of soft errors rate in a commercial memory EEPROM
International Nuclear Information System (INIS)
Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. The soft error susceptibility differs between memory technologies, hence experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented together with the results for the SER in a commercial electrically erasable and programmable read-only memory (EEPROM). (author)
Pegueroles, Josep R.; Alins, Juan J.; de la Cruz, Luis J.; Mata, Jorge
2001-07-01
MPEG family codecs generate variable-bit-rate (VBR) compressed video with significant multiple-time-scale bit rate variability. Smoothing techniques remove the periodic fluctuations generated by the codification modes. However, global efficiency in network resource allocation remains low due to scene-time-scale variability. RCBR techniques provide suitable means of achieving higher efficiency. Among all RCBR techniques described in the literature, the 2RCBR mechanism seems especially suitable for video-on-demand. The method takes advantage of the knowledge of the stored video to calculate the renegotiation intervals, and of the client buffer memory to perform work-ahead buffering techniques. 2RCBR achieves 100% bandwidth global efficiency with only two renegotiation levels. The algorithm is based on the study of the second derivative of the cumulative video sequence to find sharp-sloped inflection points that indicate changes in scene complexity. Due to its nature, 2RCBR is well suited to delivering MPEG-2 scalable sequences over the network, because it can assure a constant bit rate to the base MPEG-2 layer and use the higher-rate intervals to deliver the enhanced MPEG-2 layer. However, slight changes in the algorithm parameters must be introduced to attain optimal behavior. This is verified by means of simulations on MPEG-2 video patterns.
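The inflection-point idea can be sketched with a discrete second difference: since the second difference of the cumulative bit curve equals the change in per-frame rate, large values mark scene-complexity changes and hence candidate renegotiation instants. The trace and threshold below are invented for illustration, not taken from the paper:

```python
def renegotiation_points(frame_bits, threshold):
    """Candidate renegotiation instants: indices where the discrete second
    difference of cumsum(frame_bits) (i.e. the per-frame rate change) is large."""
    cum = []
    total = 0
    for b in frame_bits:
        total += b
        cum.append(total)
    points = []
    for i in range(1, len(cum) - 1):
        second_diff = cum[i + 1] - 2 * cum[i] + cum[i - 1]  # = rate change at i
        if abs(second_diff) >= threshold:
            points.append(i)
    return points

# Two scenes: low-complexity frames (~100 bits) then high (~500 bits);
# the single sharp inflection sits at the scene boundary.
trace = [100] * 5 + [500] * 5
assert renegotiation_points(trace, threshold=200) == [4]
```

With only two rate levels, each detected boundary simply toggles between the low and high renegotiated rates, which is the essence of the 2RCBR scheme.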
Moretti, M.; Janssen, G.J.M.
2000-01-01
The transmission modulation system minimizes the wasted 'out of band' power. The digital data (1) to be transmitted is fed via a pulse response filter (2) to a mixer (4) where it modulates a carrier wave (4). The digital data is also fed via a delay circuit (5) and identical filter (6) to a second m
The 95% confidence intervals of error rates and discriminant coefficients
Directory of Open Access Journals (Sweden)
Shuichi Shinmura
2015-02-01
Full Text Available Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop diagnostic logic to distinguish normal from abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research effort was inferior to the decision tree logic developed by a medical doctor. After this experience, we discriminated many datasets and found four problems with discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (C.I.) of error rates and discriminant coefficients.
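A minimal sketch of how repeated k-fold partitions yield a 95% C.I. for an error rate: collect the per-fold error rates over many random partitions and take empirical percentiles (an empirical-percentile variant for illustration; the paper's exact procedure may differ):

```python
import random

def kfold_error_ci(outcomes, k=10, reps=100, seed=0):
    """Approximate 95% C.I. of an error rate by repeated k-fold partitioning:
    collect per-fold error rates and take the 2.5% / 97.5% empirical percentiles.
    outcomes: list of 0/1 flags (1 = misclassified case)."""
    rng = random.Random(seed)
    fold_rates = []
    for _ in range(reps):
        data = outcomes[:]
        rng.shuffle(data)
        size = len(data) // k
        for f in range(k):
            fold = data[f * size:(f + 1) * size]
            fold_rates.append(sum(fold) / len(fold))
    fold_rates.sort()
    lo = fold_rates[int(0.025 * len(fold_rates))]
    hi = fold_rates[int(0.975 * len(fold_rates)) - 1]
    return lo, hi

# 1 = misclassified, 0 = correct; true error rate 0.2 on 500 cases.
outcomes = [1] * 100 + [0] * 400
lo, hi = kfold_error_ci(outcomes)
assert lo <= 0.2 <= hi
```

The width of the interval reflects the fold-to-fold variability of the error rate, which is exactly the missing "standard error" the abstract points out.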
Ma, Jing; Li, Kangning; Tan, Liying; Yu, Siyuan; Cao, Yubin
2016-02-01
The error rate performances and outage probabilities of free-space optical (FSO) communications with spatial diversity are studied for Gamma-Gamma turbulent environments. Equal gain combining (EGC) and selection combining (SC) diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate (BER) expression and outage probability are derived for a direct-detection EGC multiple-aperture receiver system. BER performances and outage probabilities are analyzed and compared for different numbers of sub-apertures, each having aperture area A, with EGC and SC techniques. BER performances and outage probabilities of a single monolithic aperture and a multiple-aperture receiver system with the same total aperture area are compared under thermal-noise-limited and background-noise-limited conditions. It is shown that a multiple-aperture receiver system can greatly improve communication performance, and these analytical tools are useful in providing highly accurate error rate estimation for FSO communication systems.
International Nuclear Information System (INIS)
We have designed and simulated a circuit for the experimental determination of the rate of dynamic switching errors in high-temperature superconductor RSFQ circuits. The proposal is that a series-connected pair of Josephson junctions is read out by SFQ pulses circulating in a ring-shaped Josephson transmission line at high frequency. Suitable bias currents determine the switching thresholds of the junction pair. By measuring the voltage across the transmission line, it is proposed that the occurrence of a switching error can be detected. The bit error rate can be determined from the mean time before false switching together with the SFQ circulation frequency. The circuit design allows measurements over a wide temperature range. (author)
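The conversion described in the abstract, from mean time before false switching to a bit error rate, is a one-liner: with circulation frequency f, the ring exercises the junction pair f times per second, so BER ≈ 1/(f·T). The numerical values below are illustrative assumptions, not figures from the paper.

```python
def bit_error_rate(mean_time_to_error_s: float, circulation_freq_hz: float) -> float:
    """One error occurs per (f * T) circulated SFQ pulses, so BER = 1 / (f * T)."""
    return 1.0 / (circulation_freq_hz * mean_time_to_error_s)

# Hypothetical measurement: one false switch every 2 s at 25 GHz circulation
ber = bit_error_rate(mean_time_to_error_s=2.0, circulation_freq_hz=25e9)  # -> 2e-11
```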
Celandroni, Nedo; Ferro, Erina; Mihal, Vlado; Potortì, Francesco
1992-01-01
This report describes the FODA system working at variable coding and bit rates (FODA/IBEA-TDMA). FODA/IBEA is the natural evolution of the FODA-TDMA satellite access scheme, which worked at a 2 Mbit/s fixed rate with data either 1/2-coded or uncoded. FODA-TDMA was used in the European SATINE-II experiment [8]. We remind here that the term FODA/IBEA system comprises the FODA/IBEA-TDMA (1) satellite access scheme and the hardware prototype realised by Marconi R.C. (U.K.). Both of them come fro...
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differe...
The nearest neighbor and the Bayes error rates.
Loizou, G; Maybank, S J
1987-02-01
The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and dE*(λ) are equal. PMID:21869395
All-optical wavelength conversion at bit rates above 10 Gb/s using semiconductor optical amplifiers
DEFF Research Database (Denmark)
Jørgensen, Carsten; Danielsen, Søren Lykke; Stubkjær, Kristian;
1997-01-01
This work assesses the prospects for high-speed all-optical wavelength conversion using the simple optical interaction with the gain in semiconductor optical amplifiers (SOAs) via the interband carrier recombination. Operation and design guidelines for conversion speeds above 10 Gb/s are described...... and the various tradeoffs are discussed. Experiments at bit rates up to 40 Gb/s are presented for both cross-gain modulation (XGM) and cross-phase modulation (XPM) in SOAs demonstrating the high-speed capability of these techniques...
Gao, Ya; Sun, Junqiang; Sima, Chaotan
2016-10-01
We propose an all-optical approach for simultaneous high-bit-rate return-to-zero (RZ) to non-return-to-zero (NRZ) format conversion and LP01 to LP11 mode conversion using a weakly tilted apodized few-mode fiber Bragg grating (TA-FM-FBG) with a specific linear spectral response. The grating apodization profile is designed using an efficient inverse scattering algorithm, and the maximum refractive index modulation is adjusted based on the grating tilt angle, according to coupled-mode theory. The temporal performance and operation bandwidth of the converter are discussed. The approach provides a potentially favorable device for interconnecting various communication systems.
Yilmaz, Ferkan
2012-07-01
Analysis of the average binary error probabilities (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels has been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single- and multiple-link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution covering a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with selected numerical examples that validate the correctness of the authors' newly derived results. © 1972-2012 IEEE.
Pulse shaping for all-optical signal processing of ultra-high bit rate serial data signals
DEFF Research Database (Denmark)
Palushani, Evarist
The following thesis concerns pulse shaping and optical waveform manipulation for all-optical signal processing of ultra-high bit rate serial data signals, including generation of optical pulses in the femtosecond regime, serial-to-parallel conversion and terabaud coherent optical time division......) between dispersed OTDM data and linearly chirped pump pulses. This resulted in spectral compression, enabling the OTDM tributaries to be converted directly onto a dense wavelength division multiplexing (DWDM) grid. The serial-to-parallel conversion was successfully demonstrated for up to 640-GBd OTDM...... record-high serial data rates on a single-wavelength channel. The experimental results demonstrate 5.1- and 10.2-Tbit/s OTDM data signals achieved by 16-ary quadrature amplitude modulation (16-QAM), polarization multiplexing and symbol rates as high as 640 GBd and 1.28 TBd. These signals were transmitted...
Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access
Zafar, Ammar
2012-12-29
In this paper, we present an optimal resource allocation (ORA) scheme for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints are considered on the system: both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms schemes with the direct link only and with uniform power allocation (UPA) in terms of minimizing the SER for all three constraint cases. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).
Directory of Open Access Journals (Sweden)
Balakrishna Konda
2012-11-01
Full Text Available A traditional serial-serial multiplier addresses high data sampling rates: it processes the entire partial-product matrix in n data-sampling cycles for an n×n multiplication, instead of the 2n cycles of conventional multipliers. The partial products are formed from two serial inputs, one starting from the LSB and the other from the MSB. Using this feed sequence and accumulation technique, only n cycles are needed to complete the partial products. A high bit-sampling rate is achieved by replacing the conventional full adders and 5:3 counters with an asynchronous 1's counter, whose critical path is limited to an AND gate and D flip-flops. Accumulation is an integral part of the serial multiplier design; the 1's counter counts the number of ones each counter produces at the end of the nth iteration. The implemented multipliers consist of a serial-serial data accumulator module and a carry-save adder that occupies less silicon area than a full carry-save adder. In this paper we implement 8-bit 2's-complement multiplication using the Baugh-Wooley algorithm, and the architecture for 8×8 serial-serial unsigned multiplication.
Two-Bit Bit Flipping Decoding of LDPC Codes
Nguyen, Dung Viet; Marcellin, Michael W
2011-01-01
In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows a potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity.
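The per-bit "strength" idea can be illustrated with a toy serial decoder. This is a simplified variant written for illustration only, not the algorithms of the paper: the most-suspect bit is flipped and marked strong, and a strong bit is first demoted to weak instead of being flipped back, which damps oscillation.

```python
def flip_decode(H, r, max_iter=20):
    """Toy serial bit-flipping decoder with one extra 'strength' bit per
    variable node (illustrative only; not the exact two-bit algorithms of
    the paper). H is a parity-check matrix as a list of 0/1 rows."""
    n, m = len(r), len(H)
    bits, strong = list(r), [False] * n
    for _ in range(max_iter):
        syndrome = [sum(H[c][v] * bits[v] for v in range(n)) % 2 for c in range(m)]
        if not any(syndrome):
            return bits                      # all parity checks satisfied
        # count unsatisfied checks touching each variable node
        unsat = [sum(syndrome[c] for c in range(m) if H[c][v]) for v in range(n)]
        worst = max(range(n), key=lambda v: unsat[v])
        if strong[worst]:
            strong[worst] = False            # demote instead of flipping back
        else:
            bits[worst] ^= 1                 # flip and mark strong
            strong[worst] = True
    return bits

# (7,4) Hamming parity-check matrix and a word with a single bit error
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 0, 0, 0, 0, 1]
decoded = flip_decode(H, received)  # -> the all-zero codeword
```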
A Video Watermarking DRM Method Based on H.264 Compressed Domain with Low Bit-Rate Increase
Institute of Scientific and Technical Information of China (English)
MA Zhaofeng; HUANG Jianqing; JIANG Ming; NIU Xinxin
2016-01-01
As video copyright protection becomes more and more important, it is necessary to provide efficient H.264 compressed-domain watermarking for video digital rights management (DRM). A new watermarking method based on the H.264 compressed domain is proposed for video DRM, in which the embedding and extracting procedures are performed using the syntactic elements of the compressed bit stream. In this way, complete decoding is unnecessary in both embedding and extracting processes. Based on an analysis of time and space, appropriate sub-blocks are selected for embedding watermarks, increasing watermark robustness while reducing the degradation of visual quality. In order to avoid bit-rate increase and strengthen the security of the proposed video watermarking scheme, only a set of nonzero quantized coefficients in different parts of the macroblocks is chosen for inserting the watermark. The experimental results show that the proposed scheme achieves excellent robustness against common attacks, and that it is secure and efficient for video content DRM protection.
Bit-padding information guided channel hopping
Yang, Yuli
2011-02-01
In the context of multiple-input multiple-output (MIMO) communications, we propose a bit-padding information guided channel hopping (BP-IGCH) scheme which, building on the IGCH concept, breaks the limitation that the number of transmit antennas has to be a power of two. The proposed scheme prescribes different bit-lengths to be mapped onto the indices of the transmit antennas and then uses a padding technique to avoid error propagation. Numerical results and comparisons, on both the capacity and the bit error rate performances, are provided and show the advantage of the proposed scheme. The BP-IGCH scheme not only offers lower complexity to realize the design flexibility, but also achieves better performance. © 2011 IEEE.
Roncin, Vincent; Gay, Mathilde; Bramerie, Laurent; Simon, Jean-Claude
2014-01-01
This paper presents a theoretical and experimental investigation of the optical signal regeneration properties of a non-linear optical loop mirror using a semiconductor optical amplifier as the active element (SOA-NOLM). While this device has been extensively studied for optical time division demultiplexing (OTDM) and wavelength conversion applications, our proposed approach, based on a reflective configuration, has not yet been investigated, particularly in the light of signal regeneration. The impact of different parameters, such as the SOA position in the interferometer and the SOA input optical powers, on the shape of the transfer function is numerically studied to assess the regenerative capabilities of the device. Regenerative performance in association with a dual stage of SOAs, forming a 3R regenerator which preserves the data polarity and the wavelength, is experimentally assessed. Thanks to this complete regenerative function, a 100,000 km error-free transmission has been experimentally achieved at 10 Gb/s in a reci...
Gilchrist, N. H. C.
A draft of a new recommendation on low bit-rate digital audio coding for broadcasting is in preparation within CCIR Study Group 10. As part of this work, subjective tests are being conducted to determine the preferred coding systems to be used in the various applications, and at which bit rates they should be used. The BBC has been contributing to the work by conducting preliminary listening tests to select critical program material, and by preparing recordings using this material for use by the CCIR's testing centers.
Forensic watermarking and bit-rate conversion of partially encrypted AAC bitstreams
Lemma, Aweke; Katzenbeisser, Stefan; Celik, Mehmet U.; Kirbiz, S.
2008-02-01
Electronic Music Distribution (EMD) is undergoing two fundamental shifts. Delivery over wired broadband networks to personal computers is being replaced by delivery over heterogeneous wired and wireless networks, e.g. 3G and Wi-Fi, to a range of devices such as mobile phones, game consoles and in-car players. Moreover, restrictive DRM models bound to a limited set of devices are being replaced by flexible standards-based DRM schemes and, increasingly, forensic tracking technologies based on watermarking. Success of these EMD services will partially depend on scalable, low-complexity and bandwidth-efficient content protection systems. In this context, we propose a new partial encryption scheme for Advanced Audio Coding (AAC) compressed audio which is particularly suitable for emerging EMD applications. The scheme encrypts only the scale-factor information in the AAC bitstream with an additive one-time-pad. This allows intermediate network nodes to transcode the bitstream to lower data rates without accessing the decryption keys, by increasing the scale-factor values and re-quantizing the corresponding spectral coefficients. Furthermore, the decryption key for each user is customized such that the decryption process imprints the audio with a unique forensic tracking watermark. This constitutes a secure, low-complexity watermark embedding process at the destination node, i.e. the player. As opposed to server-side embedding methods, the proposed scheme lowers the computational burden on servers and allows for network-level bandwidth saving measures such as multi-casting and caching.
Inter-bit prediction based on maximum likelihood estimate for distributed video coding
Klepko, Robert; Wang, Demin; Huchet, Grégory
2010-01-01
Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low-complexity encoders supported by high-complexity decoders. A typical real-world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
Measuring of block error rates in high-speed digital networks
Petr Ivaniga; Ludovit Mikus
2006-01-01
Error characteristics are a decisive factor in defining transmission quality in digital networks. The ITU-T G.826 and G.828 recommendations identify error parameters for high-speed digital networks in relation to the G.821 recommendation. The paper describes the relations between individual error parameters and the error rate, assuming that these are time-invariant.
Measuring of Block Error Rates in High-Speed Digital Networks
Directory of Open Access Journals (Sweden)
Petr Ivaniga
2006-01-01
Full Text Available Error characteristics are a decisive factor in defining transmission quality in digital networks. The ITU-T G.826 and G.828 recommendations identify error parameters for high-speed digital networks in relation to the G.821 recommendation. The paper describes the relations between individual error parameters and the error rate, assuming that these are time-invariant.
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
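The 16-bit CRC mentioned above can be sketched as follows. This is the CRC-16/CCITT-FALSE variant (polynomial 0x1021, initial value 0xFFFF, no bit reflection) commonly associated with CCSDS frame error detection; it is written bitwise for clarity rather than speed.

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial 0x1021 and initial value 0xFFFF
    (the CRC-16/CCITT-FALSE variant used for frame error detection)."""
    crc = init
    for byte in data:
        crc ^= byte << 8                      # fold the next byte into the top of the register
        for _ in range(8):
            if crc & 0x8000:                  # MSB set: shift and apply the polynomial
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

assert crc16_ccitt(b"123456789") == 0x29B1    # standard check value for this variant
```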
Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation
Swift, G.
2002-01-01
JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.
Yang, Aiying; Li, Xiangming; Jiang, Tao
2012-04-23
A combination of overlapping pulse position modulation and pulse width modulation at the transmitter, and a grouped bit-flipping algorithm for low-density parity-check decoding at the receiver, are proposed in this paper for a visible-light LED indoor communication system. The results demonstrate that, with the same photodetector, the bit rate can be increased and the performance of the communication system improved by the proposed scheme. Compared with the standard bit-flipping algorithm, the grouped bit-flipping algorithm achieves more than 2.0 dB coding gain at a bit error rate of 10^-5. By optimizing the encoding of the overlapping pulse position modulation and pulse width modulation symbols, the performance can be further improved. It is reasonably expected that the bit rate can be upgraded to 400 Mbit/s with a single available LED; transmission rates beyond 1 Gbit/s are thus foreseen with RGB LEDs.
A FAST BIT-LOADING ALGORITHM FOR HIGH SPEED POWER LINE COMMUNICATIONS
Institute of Scientific and Technical Information of China (English)
Zhang Shengqing; Zhao Li; Zou Cairong
2012-01-01
Adaptive bit-loading is a key technology in high-speed power line communications using Orthogonal Frequency Division Multiplexing (OFDM) modulation. Considering the transmit power spectrum limits that apply in practice, this paper explores adaptive bit-loading algorithms that maximize the number of transmitted bits while keeping the transmit power spectral density and bit error rate below their upper limits. Starting from the characteristics of the power line channel, an optimal bit-loading algorithm is first obtained, and an improved algorithm is then provided to reduce the computational complexity. Based on this analysis and simulation, a non-iterative bit allocation algorithm is proposed; simulations show that the new algorithm greatly reduces the computational complexity while producing bit allocations close to optimal.
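A minimal non-iterative allocation in the spirit described above: each subcarrier independently receives floor(log2(1 + SNR/Γ)) bits, with the SNR gap Γ standing in for the bit-error-rate constraint. The gap value, per-constellation cap, and SNR profile below are illustrative assumptions, not values from the paper.

```python
import math

def bit_loading(snr, gap_db=9.8, max_bits=10):
    """Non-iterative per-subcarrier bit allocation for OFDM: each subcarrier
    gets floor(log2(1 + SNR/Gamma)) bits, where the SNR gap Gamma enforces
    the target bit error rate. gap_db and max_bits are illustrative values."""
    gap = 10 ** (gap_db / 10)
    return [min(max_bits, int(math.log2(1 + s / gap))) if s > 0 else 0 for s in snr]

# Hypothetical per-subcarrier SNRs (dB) along a decaying power line channel
snr_linear = [10 ** (db / 10) for db in (30, 24, 18, 12, 6, 0)]
bits = bit_loading(snr_linear)  # -> [6, 4, 2, 1, 0, 0]
```

Subcarriers whose SNR falls below the gap carry no bits, matching the intuition that deeply notched power line frequencies are skipped.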
International Nuclear Information System (INIS)
Data indicates that about one half of all errors are skill based. Yet, most of the emphasis is focused on correcting rule and knowledge based errors leading to more programs, supervision, and training. None of this corrective action applies to the 'mental lapse' error. Skill based errors are usually committed in performing a routine and familiar task. Workers went to the wrong unit or component, or wrong something. Too often some of these errors result in reactor scrams, turbine trips, or other unwanted actuation. The workers do not need more programs, supervision, or training. They need to know when they are vulnerable and they need to know how to think. Self check can prevent errors, but only if it is practiced intellectually, and with commitment. Skill based errors are usually the result of using habits and senses instead of using our intellect. Even human factors can play a role in the cause of an error on a routine task. Personal injury also, is usually an error. Sometimes they are called accidents, but most accidents are the result of inappropriate actions. Whether we can explain it or not, cause and effect were there. A proper attitude toward risk, and a proper attitude toward danger is requisite to avoiding injury. Many personal injuries can be avoided just by attitude. Errors, based on personal experience and interviews, examines the reasons for the 'mental lapse' errors, and why some of us become injured. The paper offers corrective action without more programs, supervision, and training. It does ask you to think differently. (author)
Roy, Urmimala; Register, Leonard F; Banerjee, Sanjay K
2016-01-01
Spin-transfer-torque random access memory (STT-RAM) is a promising candidate for the next generation of random-access memory due to improved scalability, read-write speeds and endurance. However, the write pulse duration must be long enough to ensure a low write error rate (WER), the probability that a bit will remain unswitched after the write pulse is turned off, in the presence of stochastic thermal effects. WERs on the scale of 10$^{-9}$ or lower are desired. Within a macrospin approximation, WERs can be calculated analytically using the Fokker-Planck method to this point and beyond. However, dynamic micromagnetic effects within the bit can alter the dynamics and lead to faster switching. Such micromagnetic effects can be addressed via numerical solution of the stochastic Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. However, determining WERs approaching 10$^{-9}$ would require well over 10$^{9}$ such independent simulations, which is infeasible. In this work, we explore calculation of WER using "rare event en...
Takahashi, Koji; Matsui, Hideki; Nagashima, Tomotaka; Konishi, Tsuyoshi
2013-11-15
We demonstrate a resolution upgrade toward 6-bit optical quantization using power-to-wavelength conversion without increasing system parallelism. Expansion of the full-scale input range is employed in conjunction with reduction of the quantization step size, while keeping a sampling-rate-transparent characteristic beyond several hundred GS/s. The effective number of bits is estimated to be 5.74 bit, and the integral nonlinearity error and differential nonlinearity error are each estimated to be less than 1 least significant bit. PMID:24322152
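The INL/DNL figures quoted above (under 1 LSB) are computed from quantizer code transition points roughly as follows; the transition values in this sketch are made up for illustration.

```python
def dnl_inl(transitions):
    """Differential and integral nonlinearity, in LSB, from measured code
    transition points of a quantizer. DNL compares each step width to the
    ideal LSB; INL is the deviation from the endpoint-fit straight line."""
    lsb = (transitions[-1] - transitions[0]) / (len(transitions) - 1)
    dnl = [(transitions[i + 1] - transitions[i]) / lsb - 1
           for i in range(len(transitions) - 1)]
    inl = [(t - (transitions[0] + i * lsb)) / lsb
           for i, t in enumerate(transitions)]
    return dnl, inl

# Hypothetical transition points of a small quantizer (ideal spacing = 1.0)
dnl, inl = dnl_inl([0.0, 1.1, 1.9, 3.0, 4.0])
```

A quantizer meeting the "< 1 LSB" criterion in the abstract would show every DNL and INL entry with magnitude below 1.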
Chuang, Isaac L.; Yamamoto, Yoshihisa
1996-01-01
Decoherence and loss will limit the practicality of quantum cryptography and computing unless successful error correction techniques are developed. To this end, we have discovered a new scheme for perfectly detecting and rejecting the error caused by loss (amplitude damping to a reservoir at T=0), based on using a dual-rail representation of a quantum bit. This is possible because (1) balanced loss does not perform a ``which-path'' measurement in an interferometer, and (2) balanced quantum nondemolition measurement of the ``total'' photon number can be used to detect loss-induced quantum jumps without disturbing the quantum coherence essential to the quantum bit. Our results are immediately applicable to optical quantum computers using single photonics devices.
Error Rates in Users of Automatic Face Recognition Software.
Directory of Open Access Journals (Sweden)
David White
Full Text Available In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.
Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles.
Traverse, Charles C; Ochman, Howard
2016-03-22
Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10^-5 per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10^-5 per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10^-5 per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella.
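At these rates the per-transcript effect is easy to quantify: for a hypothetical 1 kb message at the E. coli rate reported in the abstract, a few percent of transcripts carry at least one error, which is why a single error rate this "small" can still shape the protein pool.

```python
rate = 4.63e-5          # transcription errors per nucleotide (E. coli, from the abstract)
length = 1000           # nucleotides in a hypothetical 1 kb transcript
p_error_free = (1 - rate) ** length       # probability a transcript has no errors
p_at_least_one = 1 - p_error_free         # probability of at least one error
```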
Steven D. Levitt
1995-01-01
A strong, negative empirical correlation exists between arrest rates and reported crime rates. While this relationship has often been interpreted as support for the deterrence hypothesis, it is equally consistent with incapacitation effects, and/or a spurious correlation that would be induced by measurement error in reported crime rates. This paper attempts to discriminate between deterrence, incapacitation, and measurement error as explanations for the empirical relationship between arrest r...
Error Resilient Video Compression Using Behavior Models
Directory of Open Access Journals (Sweden)
Jacco R. Taal
2004-03-01
Full Text Available Wireless and Internet video applications are inherently subject to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
Switching field distribution of exchange coupled ferri-/ferromagnetic composite bit patterned media
Oezelt, Harald; Fischbacher, Johann; Matthes, Patrick; Kirk, Eugenie; Wohlhüter, Phillip; Heyderman, Laura Jane; Albrecht, Manfred; Schrefl, Thomas
2016-01-01
We investigate the switching field distribution and the resulting bit error rate of exchange coupled ferri-/ferromagnetic bilayer island arrays by micromagnetic simulations. Using islands with varying microstructure and anisotropic properties, the intrinsic switching field distribution is computed. The dipolar contribution to the switching field distribution is obtained separately by using a model of a hexagonal island array resembling $1.4\,\mathrm{Tb/in}^2$ bit patterned media. Both contributions are computed for different thicknesses of the soft exchange coupled ferrimagnet and also for ferromagnetic single-phase FePt islands. A bit patterned medium with a bilayer structure of FeGd($5\,\mathrm{nm}$)/FePt($5\,\mathrm{nm}$) shows a bit error rate of $10^{-4}$ with a write field of $1.2\,\mathrm{T}$.
Analysis and Methodology Study of Bit Error Performance of FSO System
Institute of Scientific and Technical Information of China (English)
贾科军; 赵延刚; 陈辉; 薛建彬; 王惠琴
2012-01-01
Bit error rate (BER) is an important evaluation index of free-space optical (FSO) communication, and obtaining accurate BER statistics is therefore essential. We study a technique for analyzing the BER of FSO systems based on Monte Carlo simulation using Matlab. The FSO system and the principle of Monte Carlo simulation are introduced. The method of collecting BER statistics is investigated, and the generation of the information source, the channel model, and the calculation of the signal-to-noise ratio (SNR) parameter for the simulation are given in detail. Moreover, part of the core Matlab program is presented. Modeling and simulation based on low-density parity-check (LDPC) codes and pulse position modulation (PPM) are implemented, with the setting of each simulation parameter described in detail. The analysis results under different weather and signal-to-noise-ratio conditions indicate that this method is accurate and practical.
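As a rough illustration of the Monte Carlo approach described above, the sketch below estimates BER for simple on-off keying over an AWGN channel by direct error counting. The paper's actual system uses LDPC coding and PPM; the modulation, detection threshold, and SNR definition here are simplifying assumptions.

```python
import numpy as np

def monte_carlo_ber(snr_db, n_bits=100_000, seed=1):
    """Estimate BER of on-off keying over AWGN by direct error counting."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)       # random information source
    signal = bits.astype(float)             # OOK: bit 0 -> 0.0, bit 1 -> 1.0
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1.0 / (2 * snr))        # noise std for this SNR convention
    received = signal + rng.normal(0.0, sigma, n_bits)
    decided = (received > 0.5).astype(int)  # threshold detector at mid-amplitude
    return np.count_nonzero(decided != bits) / n_bits

ber = monte_carlo_ber(10.0)
```

Counting enough bits matters: a reliable estimate needs on the order of 100/BER transmitted bits, which is why simulated BER curves become noisy at high SNR.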
Adjustable Nyquist-rate System for Single-Bit Sigma-Delta ADC with Alternative FIR Architecture
Frick, Vincent; Dadouche, Foudil; Berviller, Hervé
2016-09-01
This paper presents a new smart and compact system dedicated to controlling the output sampling frequency of an analogue-to-digital converter (ADC) based on a single-bit sigma-delta (ΣΔ) modulator. This system dramatically improves the spectral analysis capabilities of power network analysers (power meters) by adjusting the ADC's sampling frequency to the input signal's fundamental frequency with an accuracy of a few parts per million. The trade-off between straightforwardness and performance that motivated the choice of the ADC's architecture is discussed first, along with design considerations for an ultra-steep direct-form FIR filter optimised in terms of size and operating speed. Thanks to its compact description in standard VHDL, the architecture of the proposed system is particularly suitable for application-specific integrated circuit (ASIC) implementation in low-power, low-cost power meter applications. Field programmable gate array (FPGA) prototyping and experimental results validate the adjustable sampling frequency concept. They also show that the system can perform better, in terms of implementation and power capabilities, than dedicated IP resources.
Bits of String and Bits of Branes
Bergman, Oren
1996-01-01
String-bit models are both an efficient way of organizing string perturbation theory, and a possible non-perturbative composite description of string theory. This is a summary of ideas and results of string-bit and superstring-bit models, as presented in the Strings '96 conference.
Zollanvari, Amin; Genton, Marc G
2013-08-01
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
Zollanvari, Amin
2013-05-24
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
On zero-rate error exponent for BSC with noisy feedback
Burnashev, Marat V
2008-01-01
A binary symmetric channel is used for information transmission. There is also another noisy binary symmetric channel (the feedback channel), and the transmitter observes, without delay, all outputs of the forward channel via that feedback channel. The transmission of a nonexponential number of messages (i.e., a transmission rate of zero) is considered. The achievable decoding error exponent for such a combination of channels is investigated. It is shown that if the crossover probability of the feedback channel is less than a certain positive value, then the achievable error exponent is better than the corresponding error exponent of the no-feedback channel. The transmission method described and the corresponding lower bound on the error exponent can be strengthened, and also extended to positive transmission rates.
Theoretical Limits on Errors and Acquisition Rates in Localizing Switchable Fluorophores
Small, Alexander R
2008-01-01
A variety of recent imaging techniques are able to beat the diffraction limit in fluorescence microscopy by activating and localizing subsets of the fluorescent molecules in the specimen, and repeating this process until all of the molecules have been imaged. In these techniques there is a tradeoff between speed (activating more molecules per imaging cycle) and error rates (activating more molecules risks producing overlapping images that hide information on molecular positions), and so intelligent image-processing approaches are needed to identify and reject overlapping images. We introduce here a formalism for defining error rates, derive a general relationship between error rates, image acquisition rates, and the performance characteristics of the image processing algorithms, and show that there is a minimum acquisition time irrespective of algorithm performance. We also consider algorithms that can infer molecular positions from images of overlapping blurs, and derive the dependence of the minimum acquis...
Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors
Energy Technology Data Exchange (ETDEWEB)
Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)
2011-02-15
Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.
The Effect of Government Size on the Steady-State Unemployment Rate: An Error Correction Model
Burton A. Abrams; Siyan Wang
2007-01-01
The relationship between government size and the unemployment rate is investigated using an error-correction model that describes both the short-run dynamics and long-run determination of the unemployment rate. Using data from twenty OECD countries from 1970 to 1999 and after correcting for simultaneity bias, we find that government size, measured as total government outlays as a percentage of GDP, plays a significant role in affecting the steady-state unemployment rate. Importantly, when gov...
A novel multitemporal insar model for joint estimation of deformation rates and orbital errors
Zhang, Lei
2014-06-01
Orbital errors, typically characterized as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors already exist, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon the wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. Demo code for the proposed model is also provided for reference. © 2013 IEEE.
Quantifying the Impact of Single Bit Flips on Floating Point Arithmetic
Energy Technology Data Exchange (ETDEWEB)
Elliott, James J [ORNL; Mueller, Frank [North Carolina State University; Stoyanov, Miroslav K [ORNL; Webster, Clayton G [ORNL
2013-08-01
In high-end computing, the collective surface area, smaller fabrication sizes, and increasing density of components have led to an increase in the number of observed bit flips. If mechanisms are not in place to detect them, such flips produce silent errors, i.e., the code returns a result that deviates from the desired solution by more than the allowed tolerance, and the discrepancy cannot be distinguished from the standard numerical error associated with the algorithm. These phenomena are believed to occur more frequently in DRAM, but logic gates, arithmetic units, and other circuits are also susceptible to bit flips. Previous work has focused on algorithmic techniques for detecting and correcting bit flips in specific data structures; however, these techniques suffer from a lack of generality and often cannot be implemented in heterogeneous computing environments. Our work takes a novel approach to this problem. We focus on quantifying the impact of a single bit flip on specific floating-point operations. We analyze the error induced by flipping specific bits in the most widely used IEEE floating-point representation in an architecture-agnostic manner, i.e., without requiring proprietary information such as bit flip rates or vendor-specific circuit designs. We initially study dot products of vectors and demonstrate that not all bit flips create a large error and, more importantly, that the expected value of the relative magnitude of the error is very sensitive to the bit pattern of the binary representation of the exponent, which strongly depends on scaling. Our results are derived analytically and then verified experimentally with Monte Carlo sampling of random vectors. Furthermore, we consider the natural resilience properties of solvers based on fixed-point iteration and demonstrate how the resilience of the Jacobi method for linear equations can be significantly improved by rescaling the associated matrix.
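The sensitivity to bit position is easy to reproduce at small scale: the helper below (an independent illustration, not the authors' code) flips one chosen bit of an IEEE-754 double, showing that a mantissa-LSB flip perturbs the value by one part in 2^52 while flipping even the lowest exponent bit halves it.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = mantissa LSB, 52-62 = exponent, 63 = sign)
    in the IEEE-754 binary64 representation of x."""
    (u,) = struct.unpack('<Q', struct.pack('<d', x))
    (y,) = struct.unpack('<d', struct.pack('<Q', u ^ (1 << bit)))
    return y

# Flipping the lowest mantissa bit of 1.0 changes it by only 2**-52 ...
tiny_change = flip_bit(1.0, 0) - 1.0
# ... while flipping the lowest exponent bit turns 1.0 into 0.5,
# and flipping bit 63 flips the sign.
halved = flip_bit(1.0, 52)
```

Flipping the top exponent bit of a normal number produces an infinity or NaN, which is why exponent-field flips dominate the expected relative error in the paper's dot-product analysis.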
A Video Compression Scheme with Low Complexity and Low Bit Rate
Institute of Scientific and Technical Information of China (English)
胡建平; 谢正光
2014-01-01
Taking the low-complexity, low-bit-rate application of H.264 video coding as the research object, we conduct a thorough study, theoretical analysis, and experimental simulation of key components of the encoder, such as the DCT transform and quantisation module, the predictive coding module, and de-noising and de-blocking, and propose an implementation scheme. Its main elements are all-zero DCT coefficient block (AZB) detection to reduce coding complexity, early termination of motion estimation and mode selection, and adaptive de-noising and de-blocking pre-processing to improve image quality. Simulation results demonstrate that this scheme achieves the goal of high-quality video compression with low bit rate and low complexity.
Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies
Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.
2010-01-01
We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.
Zhongming, Wang; Zhibin, Yao; Hongxia, Guo; Min, Lu
2011-05-01
SRAM-based FPGAs are very susceptible to radiation-induced single-event upsets (SEUs) in space applications. The failure mechanism in an FPGA's configuration memory differs from those in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet this requirement. In this paper, we propose a new methodology to analyze the soft errors in SRAM-based FPGAs. The method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed-and-routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to obtain the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach.
A low-power 10-bit 250-KSPS cyclic ADC with offset and mismatch correction*
Institute of Scientific and Technical Information of China (English)
Zhao Hongliang; Zhao Yiqiang; Geng Junfeng; Li Peng; Zhang Zhisheng
2011-01-01
A low-power 10-bit 250-kilosample-per-second (KSPS) cyclic analog-to-digital converter (ADC) is presented. The ADC's offset errors are successfully cancelled out through the proper choice of a capacitor switching sequence. The improved redundant signed digit algorithm used in the ADC can tolerate high levels of comparator offset error and switched-capacitor mismatch error. With this structure, the converter has the advantages of a simple circuit configuration, small chip area, and low power dissipation. The cyclic ADC, manufactured in the Chartered 0.35 μm 2P4M process, shows a 58.5 dB signal-to-noise-and-distortion ratio and a 9.4-bit effective number of bits at a 250 KSPS sample rate. It dissipates 0.72 mW from a 3.3 V power supply and occupies 0.42 × 0.68 mm².
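A behavioral sketch of the redundant-signed-digit idea: each cycle resolves one digit in {-1, 0, +1} using comparator thresholds at ±Vref/4, so comparator offsets well short of Vref/4 do not corrupt the conversion. Parameter names and the threshold choice are generic textbook RSD conventions, not taken from this paper's circuit.

```python
def cyclic_rsd_adc(vin, vref=1.0, n_bits=10):
    """Behavioral model of a cyclic ADC using the redundant signed digit
    (RSD) algorithm: one signed digit per cycle, residue doubled each time."""
    v = vin
    code = 0
    for _ in range(n_bits):
        if v > vref / 4:        # thresholds at +/- Vref/4 leave a wide
            d = 1               # margin for comparator offset errors
        elif v < -vref / 4:
            d = -1
        else:
            d = 0
        v = 2 * v - d * vref    # residue amplification (multiply-by-two step)
        code = 2 * code + d     # accumulate signed digits
    return code

# Reconstructing vin as code * vref / 2**n_bits recovers the input
# to within one LSB for vin in (-vref, vref).
```

Because the residue stays bounded by Vref even when a comparator decision is made with a modest offset, the digit redundancy absorbs the error, which is the property the abstract attributes to the improved RSD algorithm.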
Bányai, László; Patthy, László
2016-08-01
A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.
Tissue pattern recognition error rates and tumor heterogeneity in gastric cancer.
Potts, Steven J; Huff, Sarah E; Lange, Holger; Zakharov, Vladislav; Eberhard, David A; Krueger, Joseph S; Hicks, David G; Young, George David; Johnson, Trevor; Whitney-Miller, Christa L
2013-01-01
The anatomic pathology discipline is slowly moving toward a digital workflow, where pathologists will evaluate whole-slide images on a computer monitor rather than glass slides through a microscope. One of the driving factors in this workflow is computer-assisted scoring, which depends on appropriate selection of regions of interest. With advances in tissue pattern recognition techniques, a more precise region of the tissue can be evaluated, no longer bound by the pathologist's patience in manually outlining target tissue areas. Pathologists use entire tissues from which to determine a score in a region of interest when making manual immunohistochemistry assessments. Tissue pattern recognition theoretically offers this same advantage; however, error rates exist in any tissue pattern recognition program, and these error rates contribute to errors in the overall score. To provide a real-world example of tissue pattern recognition, 11 HER2-stained upper gastrointestinal malignancies with high heterogeneity were evaluated. HER2 scoring of gastric cancer was chosen due to its increasing importance in gastrointestinal disease. A method is introduced for quantifying the error rates of tissue pattern recognition. The trade-off between fully sampling tumor with a given tissue pattern recognition error rate versus randomly sampling a limited number of fields of view with higher target accuracy was modeled with a Monte Carlo simulation. Under most scenarios, stereological methods of sampling limited fields of view outperformed whole-slide tissue pattern recognition approaches for accurate immunohistochemistry analysis. The importance of educating pathologists in the use of statistical sampling is discussed, along with the emerging role of hybrid whole-tissue imaging and stereological approaches.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method for evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533
Study on Cell Error Rate of a Satellite ATM System Based on CDMA
Institute of Scientific and Technical Information of China (English)
赵彤宇; 张乃通
2003-01-01
In this paper, the cell error rate (CER) of a CDMA-based satellite ATM system is analyzed. Two fading models, the partial fading model and the total fading model, are presented according to multipath propagation fading and the shadowing effect. Based on the total shadowing model, the relation between CER and the number of subscribers at various elevations under 2D-RAKE receiving and non-diversity receiving is obtained. The impact of pseudo-noise (PN) code length on the cell error rate is also considered. It is found that maximum-likelihood combination of the multipath signals does not improve system performance when multiple access interference (MAI) is small; on the contrary, the performance may even be worse.
Smadi, Mahmoud A.
2012-12-06
In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system with imperfect channel phase recovery is considered. The results presented demonstrate the system performance under realistic Nakagami-m fading and an additive white Gaussian noise channel. The accuracy of the obtained results is verified by running the simulation with a 95% confidence interval. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Our results are therefore expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
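The confidence-interval check described above amounts to attaching a normal-approximation interval to the simulated error count, with a half-width that shrinks like 1/sqrt(N). In the generic sketch below, a Bernoulli error model stands in for the full Nakagami-m link simulation.

```python
import math
import random

def simulate_ber_with_ci(p_true, n_runs=200_000, seed=7):
    """Estimate an error rate by Monte Carlo and attach a 95%
    normal-approximation confidence interval."""
    rng = random.Random(seed)
    errors = sum(rng.random() < p_true for _ in range(n_runs))
    p_hat = errors / n_runs
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_runs)  # 95% half-width
    return p_hat, (p_hat - half, p_hat + half)

p_hat, (lo, hi) = simulate_ber_with_ci(1e-2)
```

Quadrupling `n_runs` halves the interval width, which is the practical rule for deciding when a simulated error rate is trustworthy.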
Chen, Jian; Dutton, Zachary; Lazarus, Richard; Guha, Saikat
2011-01-01
The quantum states of two laser pulses---coherent states---are never mutually orthogonal, making perfect discrimination impossible. Even so, coherent states can achieve the ultimate quantum limit for capacity of a classical channel, the Holevo capacity. Attaining this requires the receiver to make joint-detection measurements on long codeword blocks, optical implementations of which remain unknown. We report the first experimental demonstration of a joint-detection receiver, demodulating quaternary pulse-position-modulation (PPM) codewords at a word error rate of up to 40% (2.2 dB) below that attained with direct-detection, the largest error-rate improvement over the standard quantum limit reported to date. This is accomplished with a conditional nulling receiver, which uses optimized-amplitude coherent pulse nulling, single photon detection and quantum feedforward. We further show how this translates into coding complexity improvements for practical PPM systems, such as in deep-space communication. We antici...
LaPorte, Gerald M; Stephens, Joseph C; Beuchel, Amanda K
2010-01-01
The examination of printing defects, or imperfections, found on printed or copied documents has been recognized as a generally accepted approach for linking questioned documents to a common source. This research paper will highlight the results from two mutually exclusive studies. The first involved the examination and characterization of printing defects found in a controlled production run of 500,000 envelopes bearing text and images. It was concluded that printing defects are random occurrences and that morphological differences can be used to identify variations within the same production batch. The second part incorporated a blind study to assess the error rate of associating randomly selected envelopes from different retail locations to a known source. The examination was based on the comparison of printing defects in the security patterns found in some envelopes. The results demonstrated that it is possible to associate envelopes to a common origin with a 0% error rate.
A minimum-error, energy-constrained neural code is an instantaneous-rate code.
Johnson, Erik C; Jones, Douglas L; Ratnam, Rama
2016-04-01
Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al. Frontiers in Computational Neuroscience, 9, 61 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals.
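A toy version of the dynamic-threshold idea can be written as an integrate-and-fire loop: the accumulator integrates the signal, a spike fires on a threshold crossing, and each spike raises the threshold, which then relaxes back, spacing spikes out and thereby capping the energy spent. The parameter names and the exact threshold dynamics here are illustrative assumptions, not the model of Jones et al.

```python
def dynamic_threshold_encode(signal, dt, theta0, jump, tau):
    """Integrate-and-fire encoder with a dynamic threshold: fire when the
    accumulated signal crosses theta; each spike raises theta by `jump`,
    and theta relaxes back to theta0 with time constant tau."""
    theta, acc, spike_times = theta0, 0.0, []
    for k, s in enumerate(signal):
        acc += s * dt                         # integrate the input
        theta += (theta0 - theta) * dt / tau  # threshold relaxes toward theta0
        if acc >= theta:
            spike_times.append(k * dt)
            acc -= theta                      # reset by subtraction
            theta += jump                     # energy penalty: raise threshold
    return spike_times

# With jump = 0 this reduces to a plain rate code whose spike rate is
# proportional to the signal amplitude, as in an instantaneous rate coder.
```

Setting `jump > 0` trades reconstruction fidelity for a lower average spike rate, which is the energy/error trade-off the abstract describes.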
Jeffrey H. Bergstrand; Egger, Peter
2011-01-01
Bilateral investment treaties (BITs) have proliferated over the past 50 years such that the number of pairs of countries with BITs is roughly as large as the number of country-pairs that belong to bilateral or regional preferential trade agreements (PTAs). The purpose of this study is to provide the first systematic empirical analysis of the economic determinants of BITs and of the likelihood of BITs between pairs of countries using a qualitative choice model, and in a manner consistent with ...
Schreiber, Jacob; Wescoe, Zachary L; Abu-Shumays, Robin; Vivian, John T; Baatar, Baldandorj; Karplus, Kevin; Akeson, Mark
2013-11-19
Cytosine, 5-methylcytosine, and 5-hydroxymethylcytosine were identified during translocation of single DNA template strands through a modified Mycobacterium smegmatis porin A (M2MspA) nanopore under control of phi29 DNA polymerase. This identification was based on three consecutive ionic current states that correspond to passage of modified or unmodified CG dinucleotides and their immediate neighbors through the nanopore limiting aperture. To establish quality scores for these calls, we examined ~3,300 translocation events for 48 distinct DNA constructs. Each experiment analyzed a mixture of cytosine-, 5-methylcytosine-, and 5-hydroxymethylcytosine-bearing DNA strands that contained a marker that independently established the correct cytosine methylation status at the target CG of each molecule tested. To calculate error rates for these calls, we established decision boundaries using a variety of machine-learning methods. These error rates depended upon the identity of the bases immediately 5' and 3' of the targeted CG dinucleotide, and ranged from 1.7% to 12.2% for a single-pass read. We estimate that Q40 values (0.01% error rates) for methylation status calls could be achieved by reading single molecules 5-19 times depending upon sequence context. PMID:24167260
Learning High-Dimensional Markov Forest Distributions: Analysis of Error Rates
Tan, Vincent Y F; Willsky, Alan S
2010-01-01
The problem of learning forest-structured discrete graphical models from i.i.d. samples is considered. An algorithm based on pruning of the Chow-Liu tree through adaptive thresholding is proposed. It is shown that this algorithm is both structurally consistent and risk consistent and the error probability of structure learning decays faster than any polynomial in the number of samples under fixed model size. For the high-dimensional scenario where the size of the model d and the number of edges k scale with the number of samples n, sufficient conditions on (n,d,k) are given for the algorithm to satisfy structural and risk consistencies. In addition, the extremal structures for learning are identified; we prove that the independent (resp. tree) model is the hardest (resp. easiest) to learn using the proposed algorithm in terms of error rates for structure learning.
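The pruned Chow-Liu construction at the heart of this algorithm is straightforward to sketch: estimate pairwise empirical mutual information, run a maximum-weight spanning tree, and drop edges whose weight falls below a threshold. The fixed threshold below stands in for the paper's adaptive one.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) of two discrete sample vectors."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_forest(data, threshold=0.0):
    """Maximum-weight spanning forest over empirical mutual information
    (Kruskal with union-find); edges at or below `threshold` are pruned."""
    d = data.shape[1]
    parent = list(range(d))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    forest = []
    for w, i, j in edges:
        if w <= threshold:
            break                          # remaining edges are weaker still
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            forest.append((i, j))
    return forest
```

On data where variable 1 copies variable 0 and variable 2 is independent noise, the pruned forest keeps the single strong edge and leaves variable 2 isolated, illustrating why an independent model needs the most samples to learn (no edge may be kept spuriously).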
Institute of Scientific and Technical Information of China (English)
SUN Liuquan; ZHENG Zhongguo
1999-01-01
A central limit theorem for the integrated square error (ISE) of kernel hazard rate estimators is obtained based on left-truncated and right-censored data. An asymptotic representation of the mean integrated square error (MISE) for the kernel hazard rate estimators is also presented.
Forward error correction based on algebraic-geometric theory
Alzubi, Jafar A.; Chen, Thomas M.
2014-01-01
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results for the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise
Souri, Hamza
2015-06-01
Laplacian noise has received much attention in recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions of the conditional and the average probability of error are obtained in terms of the Fox H function. Simplifications for some special cases of fading are presented, and the resulting formulas often reduce to well-known elementary functions. Finally, the mathematical formalism is validated using selected analytical numerical results as well as Monte Carlo simulation results.
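A quick Monte Carlo cross-check of the minimum-distance detector described above can be sketched as follows. This is not the paper's Fox H-function derivation; it assumes independent Laplacian noise on each quadrature component and no fading, and the function name is illustrative.

```python
import numpy as np

def mpsk_ser_min_distance(M, noise_scale, n_symbols=100_000, seed=1):
    """Monte Carlo symbol error rate of M-PSK with a minimum-distance
    (nearest-neighbour) detector under additive Laplacian noise applied
    independently to the in-phase and quadrature components."""
    rng = np.random.default_rng(seed)
    constellation = np.exp(2j * np.pi * np.arange(M) / M)  # unit-energy M-PSK
    tx = rng.integers(0, M, n_symbols)
    noise = (rng.laplace(0.0, noise_scale, n_symbols)
             + 1j * rng.laplace(0.0, noise_scale, n_symbols))
    rx = constellation[tx] + noise
    # minimum Euclidean distance decision over the M constellation points
    dec = np.argmin(np.abs(rx[:, None] - constellation[None, :]), axis=1)
    return np.mean(dec != tx)
```

Such a simulation is the natural sanity check for the closed-form SER expressions in the paper.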
Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise
DEFF Research Database (Denmark)
Christensen, Lars P.B.
2005-01-01
Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined more closely, as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection...... can be achieved by a Multiple-Input Multiple-Output (MIMO) whitening filter followed by a traditional BCJR algorithm. The Gauss-Markov noise model provides a reasonable approximation for co-channel interference, making it an interesting single-user detector for many multiuser communication systems......
Error rate performance of FH/DPSK system in EMP environments
International Nuclear Information System (INIS)
In this paper, the effect of nuclear EMP interference on FH/DPSK system performance has been analyzed. The EMP-induced interferer at the receiver is modeled as an exponentially damped sinusoid in time. The error-rate equation of the received FH/DPSK signal has been derived and evaluated in terms of M (the ary number), CIR (carrier power to initial interference power ratio), and α (damping factor). The numerical results are given in graphs to discuss the EMP-induced interference effect on the FH/DPSK system performance. (Author)
Study of the Switching Errors in an RSFQ Switch by Using a Computerized Test Setup
International Nuclear Information System (INIS)
The problem of fluctuation-induced digital errors in rapid single flux quantum (RSFQ) circuits has been a very important issue. In this work, we calculated the bit error rate of an RSFQ switch used in a superconductive arithmetic logic unit (ALU). An RSFQ switch should have a very low error rate at the optimal bias; theoretical estimates of the RSFQ error rate are on the order of 10^-50 per bit operation. In this experiment, we prepared two identical circuits placed in parallel. Each circuit was composed of 10 Josephson transmission lines (JTLs) connected in series, with an RSFQ switch placed in the middle of the 10 JTLs. We used a splitter to feed the same input signal to both circuits. The outputs of the two circuits were compared with an RSFQ exclusive OR (XOR) to measure the bit error rate of the RSFQ switch. By using a computerized bit-error-rate test setup, we measured a bit error rate of 2.18 × 10^-12 when the bias to the RSFQ switch was 0.398 mA, which is quite far from the optimum bias of 0.6 mA.
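The XOR-based comparison used in this experiment boils down to counting disagreements between the two circuit outputs. A minimal sketch (the function names and the ten-errors rule of thumb are illustrative, not from the paper):

```python
import numpy as np

def xor_bit_error_rate(stream_a, stream_b):
    """Estimate a BER by XOR-ing the output of the device under test with a
    reference copy driven by the same input, as in the two-circuit setup:
    the XOR output is 1 exactly where the two streams disagree."""
    a = np.asarray(stream_a, dtype=np.uint8)
    b = np.asarray(stream_b, dtype=np.uint8)
    errors = np.bitwise_xor(a, b)
    return int(errors.sum()), float(errors.mean())

def bits_needed(target_ber, min_errors=10):
    """Rule of thumb: observe about `min_errors` errors for a stable estimate,
    so the test must run for roughly min_errors / target_ber bit operations.
    At a BER near 10^-12 this is why a computerized long-run setup is needed."""
    return int(min_errors / target_ber)
```

This makes concrete why measuring a 10^-12-level error rate requires an automated test running over trillions of bit operations.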
The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.
Fadaee, Shannon B; Migliaccio, Americo A
2016-04-01
The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation. PMID:26715411
Modified Golden Codes for Improved Error Rates Through Low Complex Sphere Decoder
Directory of Open Access Journals (Sweden)
K. Thilagam
2013-05-01
Full Text Available In recent years, golden codes have been shown to exhibit superior performance in wireless MIMO (Multiple Input Multiple Output) scenarios compared with other codes. However, a serious limitation is their high decoding complexity. This paper attempts to resolve this challenge through a suitable modification of the golden code such that a less complex sphere decoder can be used without much compromise in error rates. A minimum polynomial equation is introduced to obtain a reduced golden ratio (RGR) number for the golden code, which demands only a low-complexity decoding procedure. One attractive approach used in this paper is that the effective channel matrix is exploited to perform single symbol-wise decoding, instead of decoding grouped symbols, using a sphere decoder with a tree-search algorithm. It has been observed that a low decoding complexity of O(q^1.5) is obtained, against O(q^2.5) for the conventional method. Simulation analysis shows that, in addition to the reduced decoding complexity, improved error rates are also obtained.
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The partly linear regression model is useful in practice, but little has been done in the literature to adapt it to real data that are dependent and conditionally heteroscedastic. In this paper, the estimators of the regression components are constructed via local polynomial fitting and their large sample properties are explored. Under certain mild regularities, conditions are obtained to ensure that the estimators of the nonparametric component and its derivatives are consistent up to the convergence rates which are optimal in the i.i.d. case, and that the estimator of the parametric component is root-n consistent with the same rate as for a parametric model. The technique adopted in the proof differs from that used in the reference by Hamilton and Truong for i.i.d. samples, and corrects the errors therein.
Celik, Cihangir
Advances in microelectronics result in sub-micrometer electronic technologies, as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles every two years. The most widely available memory architectures today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano
Stinger Enhanced Drill Bits For EGS
Energy Technology Data Exchange (ETDEWEB)
Durrand, Christopher J. [Novatek International, Inc., Provo, UT (United States); Skeem, Marcus R. [Novatek International, Inc., Provo, UT (United States); Crockett, Ron B. [Novatek International, Inc., Provo, UT (United States); Hall, David R. [Novatek International, Inc., Provo, UT (United States)
2013-04-29
The project objectives were to design, engineer, test, and commercialize a drill bit suitable for drilling in the hard rock and high temperature environments (10,000 meters) likely to be encountered in drilling enhanced geothermal wells. The goal is to provide a drill bit that can triple the penetration rate relative to conventional drilling. Novatek has sought to leverage its polycrystalline diamond technology and a new conical cutter shape, known as the Stinger®, for this purpose. Novatek has developed a fixed-bladed bit, known as the JackBit®, populated with both shear cutters and Stingers, that is currently being tested by major drilling companies for geothermal and oil and gas applications. The JackBit concept comprises a fixed-bladed bit with a center indenter, referred to as the Jack. The JackBit has been extensively tested in the lab and in the field, and has been transferred to a major bit manufacturer and oil service company. Except for the attached published reports, all other information is confidential.
A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems
Directory of Open Access Journals (Sweden)
Dong Shi-Wei
2007-01-01
Full Text Available A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power bit distribution, the proposed algorithm employs a multistage bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the resulting bit distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
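For contrast with the multistage scheme above, the conventional greedy bit-loading baseline it improves upon can be sketched as follows. The sketch assumes the standard gap approximation, P(b) = Γ(2^b − 1)/g, for the power needed to carry b bits on a tone with SNR gain g; the function name and defaults are illustrative.

```python
import heapq

def greedy_bit_loading(gains, target_bits, gamma=1.0, max_bits_per_tone=15):
    """Greedy (Hughes-Hartogs style) discrete bit loading: repeatedly add one
    bit to the subchannel with the smallest incremental power cost.
    Going from b to b+1 bits on a tone with gain g costs
    dP = gamma * (2**(b+1) - 1)/g - gamma * (2**b - 1)/g = gamma * 2**b / g."""
    bits = [0] * len(gains)
    # cost of the first bit (b = 0 -> 1) on each tone
    heap = [(gamma / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    total = 0
    while total < target_bits and heap:
        cost, i = heapq.heappop(heap)
        bits[i] += 1
        total += 1
        if bits[i] < max_bits_per_tone:
            # push the cost of the next bit on this tone
            heapq.heappush(heap, (gamma * 2 ** bits[i] / gains[i], i))
    return bits
```

Each iteration loads exactly one bit, which is precisely the per-bit granularity that makes the greedy baseline slower than the multistage scheme proposed in the paper.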
Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah
2016-07-01
Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality, and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignments, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates be considered prior to building sequencing libraries, that reporting genotyping error rates become standard practice, and that effects of genotyping errors on inference be evaluated in restriction-enzyme-based SNP studies.
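The core of the dyad-based estimate, counting Mendelian incompatibilities between mother and offspring genotypes, can be sketched as below. This is a simplified illustration: the 0/1/2 allele-count coding and the incompatibility rule shown are assumptions, and the paper's estimator additionally applies coverage and quality filters.

```python
import numpy as np

def mendelian_incompatibility_rate(mother, offspring):
    """Fraction of jointly called biallelic SNP loci at which a known
    mother-offspring dyad is Mendelian-incompatible. With genotypes coded as
    alt-allele counts 0/1/2 (and -1 for missing), the dyad is incompatible
    when one individual is homozygous for one allele and the other homozygous
    for the other (|m - o| == 2), since the offspring must inherit one
    maternal allele. Incompatibilities flag genotyping errors."""
    m = np.asarray(mother)
    o = np.asarray(offspring)
    called = (m >= 0) & (o >= 0)          # drop loci missing in either sample
    incompatible = np.abs(m[called] - o[called]) == 2
    return float(incompatible.mean())
```

Note that heterozygous errors at compatible loci go undetected by this rule, so the raw incompatibility rate understates the true per-genotype error rate; converting one to the other requires a model like the one used in the paper.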
Ahmed, Qasim Zeeshan
2015-02-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum; therefore, evolutionary algorithms such as particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is within 2 dB of the ML detector. Significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much less than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with the number of relays.
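A generic particle swarm optimizer of the kind exploited here can be sketched as follows. The quadratic test objective stands in for the non-linear SER surface, and all parameter values are illustrative defaults rather than the paper's settings.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocity updates blend
    inertia (w), cognitive pull (c1), and social pull (c2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()       # global best position
    g_f = pbest_f.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if f.min() < g_f:
            g_f = f.min()
            g = x[np.argmin(f)].copy()
    return g, g_f
```

In the detector-design setting, `objective` would be a Monte Carlo or analytical estimate of the SER as a function of the detector coefficients; a multi-modal surface is exactly where the population-based search pays off over gradient descent.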
Changes realized from extended bit-depth and metal artifact reduction in CT
Energy Technology Data Exchange (ETDEWEB)
Glide-Hurst, C.; Chen, D.; Zhong, H.; Chetty, I. J. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 (United States)
2013-06-15
Purpose: High-Z material in computed tomography (CT) yields metal artifacts that degrade image quality and may cause substantial errors in dose calculation. This study couples a metal artifact reduction (MAR) algorithm with enhanced 16-bit depth (vs standard 12-bit) to quantify potential gains in image quality and dosimetry. Methods: Extended CT to electron density (CT-ED) curves were derived from a tissue characterization phantom with titanium and stainless steel inserts scanned at 90-140 kVp for 12- and 16-bit reconstructions. MAR was applied to sinogram data (Brilliance BigBore CT scanner, Philips Healthcare, v.3.5). Monte Carlo simulation (MC-SIM) was performed on a simulated double hip prostheses case (Cerrobend rods embedded in a pelvic phantom) using BEAMnrc/Dosxyz (4 000 000 000 histories, 6X, 10 × 10 cm^2 beam traversing the Cerrobend rod). A phantom study was also conducted using a stainless steel rod embedded in solid water, and dosimetric verification was performed with Gafchromic film analysis (absolute difference and gamma analysis, 2% dose and 2 mm distance to agreement) for plans calculated with the Anisotropic Analytic Algorithm (AAA, Eclipse v11.0) to elucidate changes between 12- and 16-bit data. Three patients (bony metastases to the femur and humerus, and a prostate cancer case) with metal implants were reconstructed using both bit depths, with dose calculated using AAA and the derived CT-ED curves. Planar dose distributions were assessed via matrix analyses and using gamma criteria of 2%/2 mm. Results: For 12-bit images, CT numbers for titanium and stainless steel saturated at 3071 Hounsfield units (HU), whereas for 16-bit depth, mean CT numbers were much larger (e.g., titanium and stainless steel yielded HU of 8066.5 ± 56.6 and 13588.5 ± 198.8 for 16-bit uncorrected scans at 120 kVp, respectively). MC-SIM was well matched between 12- and 16-bit images except downstream of the Cerrobend rod, where the 16-bit dose was ≈6
International Nuclear Information System (INIS)
The security plans of nuclear plants generally require that all personnel who are to have unescorted access to protected areas or vital islands be screened for emotional instability. Screening typically consists of first administering the MMPI and then conducting a clinical interview. Interview-by-exception protocols provide for interviewing only those employees who show some indication of psychopathology in their MMPI results. A problem arises when the indications are not readily apparent: false negatives are likely to occur, resulting in employees being erroneously granted unescorted access. The present paper describes the development of a predictive equation which permits accurate identification, via analysis of MMPI results, of those employees who are most in need of being interviewed. The predictive equation also permits knowing the probable maximum false-negative error rate when a given percentage of employees is interviewed.
Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function
Chen, Xiaogang; Gu, Jian; Yang, Hongkui
2007-01-01
The word error rate (WER) of soft-decision-decoded binary block codes rarely has a closed form. Bounding techniques are widely used to evaluate the performance of maximum-likelihood decoding algorithms, but the existing bounds are not tight enough, especially at low signal-to-noise ratios, and become looser when a suboptimum decoding algorithm is used. This paper proposes a new concept, named the square radius probability density function (SR-PDF) of the decision region, to evaluate the WER. Based on the SR-PDF, the WER of binary block codes can be calculated precisely for ML and suboptimum decoders. Furthermore, for a long binary block code, the SR-PDF can be approximated by a Gamma distribution with only two parameters that can be measured easily. Using this property, two closed-form approximate expressions are proposed which are very close to the simulated WER of the codes of interest.
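The two Gamma parameters can be recovered from measured squared-radius samples by moment matching, as sketched below. How the "easily measured" parameters are actually fitted is an assumption here; the paper may use a different estimator.

```python
import numpy as np

def fit_gamma_moments(samples):
    """Method-of-moments fit of a two-parameter Gamma distribution.
    For Gamma(k, theta): mean = k*theta and variance = k*theta**2, so
    k = mean**2 / var and theta = var / mean. Applied to squared-radius
    samples of a decision region, this gives the SR-PDF approximation."""
    m = np.mean(samples)
    v = np.var(samples)
    return m * m / v, v / m   # (shape k, scale theta)
```

With the fitted (k, theta) in hand, the WER integral over the approximate SR-PDF reduces to closed-form Gamma-function expressions, which is what makes the approximation useful.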
Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying
Fareed, Muhammad Mehboob
2014-06-01
In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2010-10-01
In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that the performance simulation results coincide with our analytical results. ©2010 IEEE.
Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates
Energy Technology Data Exchange (ETDEWEB)
Zamanali, J.H. (Baltimore Gas and Electric, Lusby, MD (United States)); Hubbard, F.R. (FRH Inc., Baltimore, MD (United States)); Mosleh, A. (Univ. of Maryland, College Park (United States)); Waller, M.A. (Delta Prime, Inc., Glen Burnie, MD (United States))
1992-01-01
The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition.
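The SLIM-style calculation underlying this methodology can be sketched as follows. The linear relation log10(HER) = a·SLI + b, calibrated through two anchor tasks of known error rate, is the standard SLIM form; the function names and example numbers are illustrative, not the paper's plant-specific values.

```python
import math

def slim_her(ratings, weights, a, b):
    """Success Likelihood Index (SLI) as a weighted sum of PSF ratings
    (weights sum to 1), converted to a human error rate via the calibrated
    log-linear relation log10(HER) = a*SLI + b."""
    sli = sum(w * r for w, r in zip(weights, ratings))
    return 10 ** (a * sli + b)

def calibrate(sli1, her1, sli2, her2):
    """Solve log10(HER) = a*SLI + b through two anchor tasks whose HERs
    are known (e.g. from generic data or plant experience)."""
    a = (math.log10(her1) - math.log10(her2)) / (sli1 - sli2)
    b = math.log10(her1) - a * sli1
    return a, b
```

The evolutionary improvement the abstract describes amounts to making the ratings (and hence the SLI) dynamic functions of procedural difficulty, configuration, and available time, rather than static expert judgments.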
Numerical optimization of writer and media for bit patterned magnetic recording
Kovacs, A.; Oezelt, H.; Schabes, M. E.; Schrefl, T.
2016-07-01
In this work, we present a micromagnetic study of the performance potential of bit-patterned (BP) magnetic recording media via joint optimization of the design of the media and of the magnetic write heads. Because the design space is large and complex, we developed a novel computational framework suitable for parallel implementation on compute clusters. Our technique combines advanced global optimization algorithms and finite-element micromagnetic solvers. Targeting data bit densities of 4 Tb/in^2, we optimize designs for centered, staggered, and shingled BP writing. The magnetization dynamics of the switching of the exchange-coupled composite BP islands of the media is treated micromagnetically. Our simulation framework takes into account not only the dynamics of on-track errors but also the thermally induced adjacent-track erasure. With co-optimized write heads, the results show superior performance of shingled BP magnetic recording, where we identify two particular designs achieving write bit-error rates of 1.5 × 10^-8 and 8.4 × 10^-8, respectively. A detailed description of the key design features of these designs is provided and contrasted with centered and staggered BP designs, which yielded write bit-error rates of only 2.8 × 10^-3 (centered design) and 1.7 × 10^-2 (staggered design) even under optimized conditions.
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Directory of Open Access Journals (Sweden)
Mohammad Rakibul Islam
2011-10-01
Full Text Available Cooperative communication in a wireless sensor network (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed where a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different target probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the target p_b is smaller. Also, the lower encoding rate for the LDPC code offers better error characteristics.
Institute of Scientific and Technical Information of China (English)
LU; Zudi
2001-01-01
[1] Engle, R. F., Granger, C. W. J., Rice, J. et al., Semiparametric estimates of the relation between weather and electricity sales, Journal of the American Statistical Association, 1986, 81: 310.
[2] Heckman, N. E., Spline smoothing in partly linear models, Journal of the Royal Statistical Society, Ser. B, 1986, 48: 244.
[3] Rice, J., Convergence rates for partially splined models, Statistics & Probability Letters, 1986, 4: 203.
[4] Chen, H., Convergence rates for parametric components in a partly linear model, Annals of Statistics, 1988, 16: 136.
[5] Robinson, P. M., Root-n-consistent semiparametric regression, Econometrica, 1988, 56: 931.
[6] Speckman, P., Kernel smoothing in partial linear models, Journal of the Royal Statistical Society, Ser. B, 1988, 50: 413.
[7] Cuzick, J., Semiparametric additive regression, Journal of the Royal Statistical Society, Ser. B, 1992, 54: 831.
[8] Cuzick, J., Efficient estimates in semiparametric additive regression models with unknown error distribution, Annals of Statistics, 1992, 20: 1129.
[9] Chen, H., Shiau, J. H., A two-stage spline smoothing method for partially linear models, Journal of Statistical Planning & Inference, 1991, 27: 187.
[10] Chen, H., Shiau, J. H., Data-driven efficient estimators for a partially linear model, Annals of Statistics, 1994, 22: 211.
[11] Schick, A., Root-n consistent estimation in partly linear regression models, Statistics & Probability Letters, 1996, 28: 353.
[12] Hamilton, S. A., Truong, Y. K., Local linear estimation in partly linear model, Journal of Multivariate Analysis, 1997, 60: 1.
[13] Mills, T. C., The Econometric Modeling of Financial Time Series, Cambridge: Cambridge University Press, 1993, 137.
[14] Engle, R. F., Autoregressive conditional heteroscedasticity with estimates of United Kingdom inflation, Econometrica, 1982, 50: 987.
[15] Bera, A. K., Higgins, M. L., A survey of ARCH models: properties of estimation and testing, Journal of Economic
Directory of Open Access Journals (Sweden)
Fatemeh Vizeshfar
2015-06-01
Full Text Available Medication errors have serious consequences for patients, their families, and caregivers. Reducing such errors by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and medical wards. This cross-sectional analytic study was done on 101 registered nurses who had the duty of drug administration in medical pediatric and adult wards. Data were collected by a questionnaire including demographic information, self-reported errors, etiology of medication error, and researcher observations. The results showed that nurses' error rates were 51.6% in pediatric wards and 47.4% in adult wards. The most common errors in adult wards were administering drugs later or earlier than scheduled (48.6%); administering drugs without a prescription and administering wrong drugs were the most common medication errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain the reason for and type of the drug they were going to administer to patients. An independent t-test showed a significant difference in observed errors in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have reported medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.
Bias and spread in extreme value theory measurements of probability of error
Smith, J. G.
1972-01-01
Extreme value theory is examined to explain the cause of the bias and spread in performance of communications systems characterized by low bit rates and high data reliability requirements, for cases in which underlying noise is Gaussian or perturbed Gaussian. Experimental verification is presented and procedures that minimize these effects are suggested. Even under these conditions, however, extreme value theory test results are not particularly more significant than bit error rate tests.
Directory of Open Access Journals (Sweden)
Casey P Durand
Full Text Available INTRODUCTION: Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. METHODS: A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. RESULTS: In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. CONCLUSIONS: Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
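The Monte Carlo approach described above can be sketched in a few lines. The setup below (sample size, effect size, and a normal-approximation z-test in place of the exact t-test) is illustrative, not the study's exact protocol:

```python
import numpy as np

def interaction_power(n=200, beta_int=0.5, z_crit=1.96, n_sim=500, seed=0):
    """Monte Carlo rejection rate for a continuous-by-continuous interaction
    in y = x1 + x2 + beta_int*x1*x2 + noise, tested via OLS with a
    normal-approximation z-test (z_crit = 1.96 ~ alpha = 0.05).
    With beta_int = 0 this estimates the type 1 error rate."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x1, x2, e = rng.standard_normal((3, n))
        y = x1 + x2 + beta_int * x1 * x2 + e
        X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
        xtx_inv = np.linalg.inv(X.T @ X)
        b = xtx_inv @ (X.T @ y)
        resid = y - X @ b
        sigma2 = resid @ resid / (n - 4)          # residual variance
        se = np.sqrt(sigma2 * xtx_inv[3, 3])      # SE of interaction coef
        hits += abs(b[3] / se) > z_crit
    return hits / n_sim
```

Varying `beta_int`, `n` and `z_crit` over a grid reproduces the kind of simulation matrix the study describes.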
A Coded Bit-Loading Linear Precoded Discrete Multitone Solution for Power Line Communication
Muhammad, Fahad Syed; Hélard, Jean-François; Crussière, Matthieu
2008-01-01
Linear precoded discrete multitone modulation (LP-DMT) has already been proved advantageous with an adaptive resource allocation algorithm in a power line communication (PLC) context. In this paper, we investigate the bit and energy allocation algorithm of an adaptive LP-DMT system taking into account the channel coding scheme. A coded adaptive LP-DMT system is presented in the PLC context with a loading algorithm which accommodates the channel coding gains in bit and energy calculations. The performance of a concatenated channel coding scheme, consisting of an inner Wei's 4-dimensional 16-state trellis code and an outer Reed-Solomon code, in combination with the proposed algorithm is analyzed. Simulation results are presented for a fixed target bit error rate in a multicarrier scenario under a power spectral density constraint. Using a multipath model of the PLC channel, it is shown that the proposed coded adaptive LP-DMT system performs better than classical coded discrete multitone.
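The core idea of accommodating coding gains in loading can be sketched with a greedy Levin-Campello-style allocator, where the channel coding gain shrinks the SNR gap used in the incremental-energy cost. The 9.8 dB uncoded gap and the cost model below are illustrative assumptions, not the paper's exact algorithm:

```python
import heapq

def greedy_bit_loading(snr_gains, total_bits, coding_gain_db=0.0, gap_db=9.8):
    """Greedy bit loading sketch: adding the (b+1)-th bit on subcarrier i
    costs incremental energy gap * 2**b / g_i, where g_i is the subcarrier
    SNR gain and the SNR gap is reduced by the channel coding gain."""
    gap = 10 ** ((gap_db - coding_gain_db) / 10)
    bits = [0] * len(snr_gains)
    # heap of (incremental energy for the next bit, subcarrier index)
    heap = [(gap / g, i) for i, g in enumerate(snr_gains)]
    heapq.heapify(heap)
    energy = 0.0
    for _ in range(total_bits):
        cost, i = heapq.heappop(heap)
        energy += cost
        bits[i] += 1
        heapq.heappush(heap, (gap * 2 ** bits[i] / snr_gains[i], i))
    return bits, energy
```

A 3 dB coding gain directly reduces the total energy needed for the same target bit count, which is the effect the abstract describes.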
Burton A. Abrams; Siyan Wang
2006-01-01
In this paper, we investigate the relationship between government size and the unemployment rate using a structural error correction model that describes both the short-run dynamics and long-run determination of the unemployment rate. Using data from twenty OECD countries from 1970 to 1999, we find that government size, measured as total government outlays as a percentage of GDP, plays a significant role in affecting the steady-state unemployment rate. We disaggregate government outlays and f...
Directory of Open Access Journals (Sweden)
VINOTH BABU K.
2016-04-01
Full Text Available Multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) are the key techniques for future wireless communication systems. Previous research in these areas mainly concentrated on spectral efficiency improvement, and very limited work has been done on energy-efficient transmission. In addition to spectral efficiency, energy efficiency has become an important research topic because of the slow progress of battery technology. Since most user equipment (UE) relies on batteries, the energy required to transmit the target bits should be minimized to avoid quick battery drain. The frequency-selective fading nature of the wireless channel reduces the spectral and energy efficiency of OFDM-based systems. Dynamic bit loading (DBL) is one suitable solution to improve the spectral and energy efficiency of an OFDM system in a frequency-selective fading environment. The simple dynamic bit loading (SDBL) algorithm is identified as offering better energy efficiency with less system complexity, and it is well suited to fixed-data-rate voice/video applications. When the number of target bits is much larger than the number of available subcarriers, the conventional single-input single-output (SISO) SDBL scheme suffers a high bit error rate (BER) and needs large transmit energy. To improve the bit error performance, we combine space-frequency block codes (SFBC) with SDBL, where the adaptations are done in both the frequency and spatial domains. To further improve the quality of service (QoS), an optimal transmit antenna selection (OTAS) scheme is also combined with the SFBC-SDBL scheme. The simulation results prove that the proposed schemes offer better QoS than the conventional SISO-SDBL scheme.
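The SFBC component combined with SDBL is typically an Alamouti-style code applied across pairs of adjacent subcarriers. A minimal sketch of that mapping, assuming the standard Alamouti pattern (not necessarily the paper's exact scheme):

```python
import numpy as np

def sfbc_alamouti_encode(symbols):
    """Alamouti-style space-frequency block coding sketch: each symbol pair
    (s1, s2) is mapped onto two antennas over two adjacent subcarriers.
    Antenna 1 sends [s1, s2]; antenna 2 sends [-conj(s2), conj(s1)]."""
    s = np.asarray(symbols, dtype=complex).reshape(-1, 2)
    ant1 = s.reshape(-1)
    ant2 = np.column_stack([-np.conj(s[:, 1]), np.conj(s[:, 0])]).reshape(-1)
    return ant1, ant2
```

The orthogonality of the two antenna streams is what yields the spatial diversity gain that lowers BER relative to the SISO scheme.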
Directory of Open Access Journals (Sweden)
P. N. V. M SASTRY
2015-01-01
Full Text Available The aim is the HDL design and implementation of an exabit-rate multichannel 64:1 LVDS data serializer and deserializer ASIC array card for ultra-high-speed wireless communication products such as network-on-chip routers, data bus communication interfaces, cloud computing networks, and zettabit Ethernet operating at zettabit-per-second transfer speeds. The serializer array converts a 64-bit parallel data array into serial form on the transmitter side, transmission is carried over a high-speed wireless serial communication link, and the deserializer array ASIC converts the same serial data back into a parallel data array on the receiver side without noise. Jitter tolerance, eye diagrams and bit error rate are measured with an analyzer. This LVDS data serializer/deserializer is mainly used in high-speed bus communication protocol transceivers and FPGA interface add-on cards. The design is implemented in Verilog HDL/VHDL, with programming and debugging done on a recent FPGA board.
Step angles to reduce the north-finding error caused by rate random walk with fiber optic gyroscope.
Wang, Qin; Xie, Jun; Yang, Chuanchuan; He, Changhong; Wang, Xinyue; Wang, Ziyu
2015-10-20
We study the relationship between step angles and the accuracy of north finding with fiber optic gyroscopes. A north-finding method with optimized step angles is proposed to reduce the errors caused by rate random walk (RRW). Based on this method, the errors caused by both angle random walk and RRW are reduced by increasing the number of positions. When the number of positions is even, we propose a north-finding method with symmetric step angles that reduces the error caused by RRW and is not affected by the azimuth angle. Experimental results show that, compared with the traditional north-finding method, the proposed methods with optimized step angles and symmetric step angles reduce the north-finding errors by 67.5% and 62.5%, respectively. The method with symmetric step angles is not affected by the azimuth angle and offers consistently high accuracy for any azimuth angle.
Serialized quantum error correction protocol for high-bandwidth quantum repeaters
Glaudell, A. N.; Waks, E.; Taylor, J. M.
2016-09-01
Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have losses low enough to be correctable using a quantum error correcting code (QECC). Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. Here we show that long-range entangled bit generation with rates approaching 10^8 entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various QECCs. In particular, at a total distance of 500 km with fiber loss rates of 0.3 dB/km, logical gate failure probabilities of 10^-5, photon creation and measurement error rates of 10^-5, and a gate speed of 80 ps, we find maximum single repeater chain entangled bit rates of 51 Hz at a 20 m node spacing and 190 000 Hz at a 43 m node spacing for the [[3,1,2
Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L.; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L.; Guerin, Bastien
2016-01-01
Purpose A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. Methods The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors (“worst-case SAR”) is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Results Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled “worst-case SAR” in the presence of errors of this magnitude at minor cost of the excitation profile quality. Conclusion Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. PMID:26147916
Soury, Hamza
2014-06-01
This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise, using a minimum distance detector. A generic closed-form expression for the conditional and the average probability of error is obtained and simplified in terms of the Fox H function. Further simplifications to well-known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with numerical examples obtained by computer-based simulations. © 2014 IEEE.
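A quick way to sanity-check such closed-form expressions is Monte Carlo simulation. The sketch below simulates MPSK with additive Laplacian noise components and Euclidean minimum-distance detection; the noise parameterization is an assumption, and no fading is modeled:

```python
import numpy as np

def mpsk_ser_laplace(M=8, snr_db=15, n_sym=200_000, seed=1):
    """Monte Carlo symbol error rate for unit-energy MPSK with i.i.d.
    Laplacian noise on each quadrature component and minimum-distance
    (nearest-phase) detection."""
    rng = np.random.default_rng(seed)
    k = rng.integers(0, M, n_sym)
    s = np.exp(2j * np.pi * k / M)            # unit-energy MPSK symbols
    n0 = 10 ** (-snr_db / 10)                 # total noise power
    b = np.sqrt(n0 / 4)                       # Laplace scale: var 2b^2 per dim
    noise = rng.laplace(0, b, n_sym) + 1j * rng.laplace(0, b, n_sym)
    r = s + noise
    # minimum-distance detection for PSK = nearest constellation phase
    k_hat = np.round(np.angle(r) * M / (2 * np.pi)) % M
    return np.mean(k_hat != k)
```

Plotting the output against SNR gives the curves to compare with the paper's Fox H function expression.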
Yao, Zhuxi; Yuan, Yi; Buchanan, Tony W; Zhang, Kan; Zhang, Liang; Wu, Jianhui
2016-01-01
High-stress jobs require both appropriate physiological regulation and behavioral adjustment to meet the demands of emergencies. Here, we investigated the relationship between the autonomic stress response and behavioral adjustment after errors in special police cadets. Sixty-eight healthy male special police cadets were randomly assigned to perform a first-time walk on an aerial rope bridge to induce stress responses or a walk on a cushion on the ground serving as a control condition. Subsequently, the participants completed a Go/No-go task to assess behavioral adjustment after false alarm responses. Heart rate measurements and subjective reports confirmed that stress responses were successfully elicited by the aerial rope bridge task in the stress group. In addition, greater heart rate increases during the rope bridge task were positively correlated with post-error slowing and had a trend of negative correlation with post-error miss rate increase in the subsequent Go/No-go task. These results suggested that stronger autonomic stress responses are related to better post-error adjustment under acute stress in this highly selected population and demonstrate that, under certain conditions, individuals with high-stress jobs might show cognitive benefits from a stronger physiological stress response. PMID:27428280
Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models
DEFF Research Database (Denmark)
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl;
We propose a new estimator, the thresholded scaled Lasso, in high dimensional threshold regressions. First, we establish an upper bound on the sup-norm estimation error of the scaled Lasso estimator of Lee et al. (2012). This is a non-trivial task as the literature on highdimensional models has...... and private) and GDP growth....
Groen, Yvonne; Mulder, Lambertus J. M.; Wijers, Albertus A.; Minderaa, Ruud B.; Althaus, Monika
2009-01-01
Attention Deficit Hyperactivity Disorder (ADHD) is a developmental disorder that has previously been related to a decreased sensitivity to errors and feedback. Supplementary to the traditional performance measures, this study uses autonomic measures to study this decreased sensitivity in ADHD and th
Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl
2007-01-01
The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback, using a cross- layer approach, over the satellite link is also simulated. The ne...
Drill bits technology - introduction of the new kymera hybrid bit
Nguyen, Don Tuan
2012-01-01
The early concepts of hybrid bits date back to the 1930’s but have only been a viable drilling tool with recent polycrystalline diamond compact technology. Improvements in drilling performance around the world continue to focus on stability and efficiency in key applications. This thesis briefly describes a new generation of hybrid bits that are based on PDC bit design combined with roller cones. Bit related failure is a common problem in today’s drilling environment, leading to inefficien...
Demonstration of a Bit-Flip Correction for Enhanced Sensitivity Measurements
Cohen, L; Istrati, D; Retzker, A; Eisenberg, H S
2016-01-01
The sensitivity of classical and quantum sensing is impaired in a noisy environment. Thus, one of the main challenges facing sensing protocols is to reduce the noise while preserving the signal. State of the art quantum sensing protocols that rely on dynamical decoupling achieve this goal under the restriction of long noise correlation times. We implement a proof of principle experiment of a protocol to recover sensitivity by using an error correction for photonic systems that does not have this restriction. The protocol uses a protected entangled qubit to correct a bit-flip error. Our results show a recovery of about 87% of the sensitivity, independent of the noise rate.
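The bit-flip correction can be illustrated classically with the 3-bit repetition code, where a single flipped bit is corrected by majority vote. This is a toy analogue only; the experiment above uses a protected entangled qubit, which a classical sketch cannot capture:

```python
def correct_bit_flip(triplet):
    """Decode the 3-bit repetition (bit-flip) code by majority vote:
    a logical bit encoded as (b, b, b) survives any single flip."""
    return 1 if sum(triplet) >= 2 else 0
```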
Williams, J M
1999-01-01
In the particle in the box problem, the particle is not in both boxes at the same time as some would have you believe. It is a set definition situation with the two boxes being part of a set that also contains a particle. Set and subset differences are explored. Atomic electron orbitals can be mimicked by roulette wheel probability; thus ELECTRONIC ROULETTE. 0 and 00 serve as boundary limits and are on opposite sides of the central core - a point that quantum physics ignores. Considering a stray marble on the floor as part of the roulette wheel menage is taking assumptions a bit too far. Likewise, the attraction between a positive and negative charge at distance does not make the negative charge part of the positive charge's orbital system. This, of course, is contrary to the stance of current quantum physics methodology that carries this orbital association a bit too far.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Institute of Scientific and Technical Information of China (English)
Yang Yukun; Han Tao
1995-01-01
The geologic conditions of Shengli Oilfield (SLOF) are complicated and the range of rock drillability is wide. For more than 20 years, the Shengli Drilling Technology Research Institute, in view of the formation conditions of SLOF, has made great efforts and obtained many achievements in design, manufacturing technology and field service. Up to now, the institute has developed dozens of kinds of diamond bits applicable for drilling and coring in formations from extremely soft to hard.
Adaptive Error Resilience for Video Streaming
Directory of Open Access Journals (Sweden)
Lakshmi R. Siruvuri
2009-01-01
Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprised of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
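The feedback-driven adaptation of Reed-Solomon parity can be sketched as a policy that picks the smallest parity level whose correction capability covers the loss rate the client reports. The parity levels, the 50% safety margin, and the RS(255, k) framing below are illustrative assumptions, not the paper's exact parameters:

```python
def choose_rs_parity(loss_rate, levels=(2, 4, 8, 16), n=255):
    """Pick the smallest Reed-Solomon parity count 2t for an RS(n, n-2t)
    code such that t symbols of correction cover the reported symbol loss
    rate with a 50% margin. Falls back to the strongest level if none fit."""
    for parity in levels:
        t = parity // 2                 # RS corrects t = parity/2 symbols
        if t >= 1.5 * loss_rate * n:
            return parity
    return levels[-1]
```

The transmitter would re-run this selection each time the client's feedback updates the loss estimate, trading bandwidth for protection only when needed.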
Parallel Bit Interleaved Coded Modulation
Ingber, Amir
2010-01-01
A new variant of bit interleaved coded modulation (BICM) is proposed. In the new scheme, called Parallel BICM, L identical binary codes are used in parallel using a mapper, a newly proposed finite-length interleaver and a binary dither signal. As opposed to previous approaches, the scheme does not rely on any assumptions of an ideal, infinite-length interleaver. Over a memoryless channel, the new scheme is proven to be equivalent to a binary memoryless channel. Therefore the scheme enables one to easily design coded modulation schemes using a simple binary code that was designed for that binary channel. The overall performance of the coded modulation scheme is analytically evaluated based on the performance of the binary code over the binary channel. The new scheme is analyzed from an information theoretic viewpoint, where the capacity, error exponent and channel dispersion are considered. The capacity of the scheme is identical to the BICM capacity. The error exponent of the scheme is numerically compared to...
The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded
Hansen, Merete Kjær; Kulahci, Murat
2014-01-01
The Comet assay is a sensitive technique for detection of DNA strand breaks. The experimental designs of in vivo Comet assay studies are often hierarchically structured, which should be reflected in the statistical analysis. However, the hierarchical structure sometimes seems to be disregarded, and this has considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the facto...
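The effect of disregarding the hierarchy can be demonstrated with a small simulation: cell-level scores share a random animal effect, but a naive cell-level z-test treats them as independent, inflating the type I error well above the nominal 0.05. The parameter values below are illustrative, not the study's design:

```python
import numpy as np

def naive_type1_error(n_animals=6, cells_per_animal=50, animal_sd=1.0,
                      n_sim=400, seed=2):
    """Two groups of animals with identical true means; cell-level scores
    include a shared per-animal random effect. Analysing the cells with a
    z-test that ignores the animal level returns the observed type I error
    rate at nominal alpha = 0.05."""
    rng = np.random.default_rng(seed)
    n = n_animals * cells_per_animal
    rejections = 0
    for _ in range(n_sim):
        group = []
        for _g in range(2):
            animal_eff = rng.normal(0, animal_sd, n_animals)
            y = np.repeat(animal_eff, cells_per_animal) + rng.normal(0, 1, n)
            group.append(y)
        a, b = group
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)   # naive SE
        rejections += abs(a.mean() - b.mean()) / se > 1.96
    return rejections / n_sim
```

With a per-animal effect as large as the residual noise, the naive analysis rejects far more than 5% of null datasets, which is the inflation the abstract warns about.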
Cameron, Kenneth L.; Peck, Karen Y.; Owens, Brett D; Svoboda, Steven J.; DiStefano, Lindsay J.; Stephen W Marshall; de la Motte, Sarah; Beutler, Anthony I.; Padua, Darin A.
2014-01-01
Objectives: Lower-extremity stress fracture injuries are a major cause of morbidity in physically active populations. The ability to efficiently screen for modifiable risk factors associated with injury is critical in developing and implementing effective injury prevention programs. The purpose of this study was to determine if baseline Landing Error Scoring System (LESS) scores were associated with the incidence rate of lower-extremity stress fracture during four years of follow-up. Methods:...
Sharp threshold detection based on sup-norm error rates in high-dimensional models
DEFF Research Database (Denmark)
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl;
2016-01-01
We propose a new estimator, the thresholded scaled Lasso, in high dimensional threshold regressions. First, we establish an upper bound on the ℓ∞ estimation error of the scaled Lasso estimator of Lee et al. (2015). This is a non-trivial task as the literature on high-dimensional models has focused...... selection via thresholding. Our simulations show that thresholding the scaled Lasso yields substantial improvements in terms of variable selection. Finally, we use our estimator to shed further empirical light on the long running debate on the relationship between the level of debt (public and private...
Improved Energy Efficiency for Optical Transport Networks by Elastic Forward Error Correction
DEFF Research Database (Denmark)
Rasmussen, Anders; Yankov, Metodi Plamenov; Berger, Michael Stübert;
2014-01-01
the balance between effective data rate and FEC coding gain without any disruption to the live traffic. As a consequence, these automatic adjustments can be performed very often based on the current traffic demand and bit error rate performance of the links through the network. The FEC scheme itself...
Institute of Scientific and Technical Information of China (English)
李松斌; 黄永峰; 卢记仓
2013-01-01
Quantization Index Modulation (QIM) steganography, which embeds secret information during vector quantization, can hide information in low-bit-rate speech codecs with high imperceptibility. This paper aims to detect this type of steganography. Starting from speech generation and compression coding theory, the paper first analyzes in depth the feature degradation that QIM steganography may cause in a compressed audio stream, and finds that it disturbs the phoneme sequence in the stream, inevitably changing the imbalance and correlation characteristics of the phoneme distribution. Based on this discovery, the paper adopts phoneme distribution characteristics as the key to detecting QIM steganography. To obtain quantitative features of the phoneme distribution, the paper designs a phoneme vector space model and a phoneme state transition model to quantify the imbalance and correlation characteristics, respectively. By combining these quantitative vector features with a supervised learning classifier, a support vector machine (SVM), the paper builds a high-performance detector for QIM steganography in low-bit-rate speech codecs. Experiments on two typical low-bit-rate speech coding standards, G.729 and G.723.1, show that the proposed method far outperforms existing detection methods and achieves fast and accurate detection of QIM steganography.
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
Evaluation of Bit Preservation Strategies
DEFF Research Database (Denmark)
Zierau, Eld; Kejser, Ulla Bøgvad; Kulovits, Hannes
2010-01-01
This article describes a methodology which supports evaluation of bit preservation strategies for different digital materials. This includes evaluation of alternative bit preservation solutions. The methodology presented uses the preservation planning tool Plato for evaluations, and a BR......-ReMS prototype to calculate measures for how well bit preservation requirements are met. Planning storage of data as part of preservation planning involves classification of data with regard to requirements on confidentiality, bit safety, availability and costs. Choice of storage with such parameters is quite...... complex since e.g. more copies of data means better bit safety, but higher cost and bigger risk of breaking confidentiality. Based on a case of a bit repository offering varied bit preservation solutions, the article will present results of using the methodology to make plans and choices of alternatives...
Tyson, Jon
2009-01-01
We compare several instances of pure-state Belavkin weighted square-root measurements from the standpoint of minimum-error discrimination of quantum states. The quadratically weighted measurement is proven superior to the so-called "pretty good measurement" (PGM) in a number of respects: (1) Holevo's quadratic weighting unconditionally outperforms the PGM in the case of two-state ensembles, with equality only in trivial cases. (2) A converse of a theorem of Holevo is proven, showing that a weighted measurement is asymptotically optimal only if it is quadratically weighted. Counterexamples for three states are constructed. The cube-weighted measurement of Ballester, Wehner, and Winter is also considered. Sufficient optimality conditions for various weights are compared.
Duyck, Dieter; Capirone, Daniele; Moeneclaey, Marc
2012-01-01
Joint network-channel codes (JNCC) can improve the performance of communication in wireless networks, by combining, at the physical layer, the channel codes and the network code as an overall error-correcting code. JNCC is increasingly proposed as an alternative to a standard layered construction, such as the OSI-model. The main performance metrics for JNCCs are scalability to larger networks and error rate. The diversity order is one of the most important parameters determining the error rate. The literature on JNCC is growing, but a rigorous diversity analysis is lacking, mainly because of the many degrees of freedom in wireless networks, which makes it very hard to prove general statements on the diversity order. In this paper, we consider a network with slowly varying fading point-to-point links, where all sources also act as relay and additional non-source relays may be present. We propose a general structure for JNCCs to be applied in such network. In the relay phase, each relay transmits a linear trans...
Carrier Synchronization for 3-and 4-bit-per-Symbol Optical Transmission
Ip, Ezra; Kahn, Joseph M.
2005-12-01
We investigate carrier synchronization for coherent detection of optical signals encoding 3 and 4 bits/symbol. We consider the effects of laser phase noise and of additive white Gaussian noise (AWGN), which can arise from local oscillator (LO) shot noise or LO-spontaneous beat noise. We identify 8- and 16-ary quadrature amplitude modulation (QAM) schemes that perform well when the receiver phase-locked loop (PLL) tracks the instantaneous signal phase with moderate phase error. We propose implementations of 8- and 16-QAM transmitters using Mach-Zehnder (MZ) modulators. We outline a numerical method for computing the bit error rate (BER) of 8- and 16-QAM in the presence of AWGN and phase error. It is found that these schemes can tolerate phase-error standard deviations of 2.48° and 1.24°, respectively, for a power penalty of 0.5 dB at a BER of 10^-9. We propose a suitable PLL design and analyze its performance, taking account of laser phase noise, AWGN, and propagation delay within the PLL. Our analysis shows that the phase error depends on the constellation penalty, which is the mean power of constellation symbols times the mean inverse power. We establish a procedure for finding the optimal PLL natural frequency, and determine tolerable laser linewidths and PLL propagation delays. For zero propagation delay, 8- and 16-QAM can tolerate linewidth-to-bit-rate ratios of 1.8 × 10^-5 and 1.4 × 10^-6, respectively, assuming a total penalty of 1.0 dB.
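The constellation penalty defined above (mean symbol power times mean inverse symbol power) is easy to compute directly. For square 16-QAM on the ±1, ±3 grid it works out to 17/9 ≈ 1.89, while any constant-modulus PSK constellation gives exactly 1:

```python
import numpy as np

def constellation_penalty(points):
    """Constellation penalty: mean symbol power times mean inverse power.
    Equals 1 for constant-modulus constellations (e.g. PSK) and grows
    with the spread of symbol magnitudes."""
    p = np.abs(np.asarray(points, dtype=complex)) ** 2
    return p.mean() * (1.0 / p).mean()

# square 16-QAM on the +/-1, +/-3 grid (a standard layout; the paper's
# 8- and 16-QAM constellations may differ)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = np.array([x + 1j * y for x in levels for y in levels])
```

Because the penalty multiplies the effective phase-error variance, constellations with smaller penalty tolerate proportionally more laser linewidth for the same BER.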
Joint adaptive modulation and diversity combining with feedback error compensation
Choi, Seyeong
2009-11-01
This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.
Improving Residual Error Rate of CAN Protocol
Institute of Scientific and Technical Information of China (English)
杨福宇
2011-01-01
Little has been published on the undetected frame error rate of the CAN protocol. Prior results were based on software fault injection; although the simulation effort was exhausting, it covered only a very small sample of the possible error cases, so the conclusions drawn from it carry limited weight. This paper gives a method for constructing undetectable error frames. Based on this method, the lower bound obtained for the undetected error rate is several orders of magnitude higher than that claimed in the Bosch CAN specification 2.0. This has an impact on users: because CAN is so widely deployed, the problem urgently needs to be fixed. The paper provides a software patch that radically eliminates the disturbance of the stuffing rule on the CRC check.
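The two mechanisms whose interaction the paper exploits can be sketched directly: in CAN, the CRC-15 is computed over the unstuffed bit stream, while stuff bits are inserted afterwards into the transmitted frame. This is a toy model for illustration, not the patch the authors propose:

```python
def can_crc15(bits):
    """CAN CRC-15 over the unstuffed bit stream (generator 0x4599,
    i.e. x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1)."""
    crc = 0
    for b in bits:
        crc_next = b ^ (crc >> 14)
        crc = (crc << 1) & 0x7FFF
        if crc_next:
            crc ^= 0x4599
    return crc

def stuff(bits):
    """Insert a complementary stuff bit after every run of five equal bits."""
    out, run, last = [], 0, None
    for b in bits:
        out.append(b)
        run = run + 1 if b == last else 1
        last = b
        if run == 5:
            out.append(1 - b)
            last, run = 1 - b, 1
    return out
```

Because channel errors hit the stuffed stream, an error touching a stuff bit can shift the destuffed frame, which is how errors can slip past a CRC that would otherwise catch them.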
Energy Technology Data Exchange (ETDEWEB)
Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues considerably reduce the effective transmission data rate (throughput) in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and improves throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated above by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average bit-error probability and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading channels and compared to those of the conventional Advanced Encryption Standard (AES).
16-Bit DAC Design, Simulation and Layout
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The high-speed, high-precision 16-bit DAC will be applied in the DSP (Digital Signal Processing) based CSR pulsed power supply control system. In this application the DAC must operate at a data conversion rate of 1 μs per sample, with 16-bit resolution and a 10 V output voltage.
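The stated specifications pin down the ideal resolution and update rate; a small sketch, assuming an ideal, unsigned-code, perfectly linear transfer function:

```python
FULL_SCALE = 10.0   # volts (from the abstract)
NBITS = 16

lsb = FULL_SCALE / (1 << NBITS)   # ideal step size: about 152.6 microvolts

def ideal_dac(code):
    """Ideal (assumed unsigned, perfectly linear) code-to-voltage transfer function."""
    assert 0 <= code < (1 << NBITS)
    return code * lsb

# one conversion every 1 us corresponds to a 1 MS/s update rate
update_rate_hz = 1.0 / 1e-6
```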
On the feedback error compensation for adaptive modulation and coding scheme
Choi, Seyeong
2011-11-25
In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.
Analysis of error performance on Turbo coded FDPIM
Institute of Scientific and Technical Information of China (English)
ZHU Yin-bing; WANG Hong-Xing; ZHANG Tie-Ying
2008-01-01
Due to the variable symbol length of digital pulse interval modulation (DPIM), it is difficult to analyze the error performance of Turbo coded DPIM. To solve this problem, a fixed-length digital pulse interval modulation (FDPIM) method is proposed. The FDPIM modulation structure is introduced. The packet error rates of uncoded FDPIM are analyzed and compared with those of DPIM. Bit error rates of Turbo coded FDPIM are simulated based on three analytical models under a weak-turbulence channel. The results show that the packet error rate of uncoded FDPIM is inferior to that of uncoded DPIM. However, FDPIM is easy to implement and, because of its fixed length, easy to combine with Turbo codes for soft decision. Moreover, introducing the Turbo code into this modulation can decrease the average power by about 10 dBm, which means that it can improve the error performance of the system effectively.
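The fixed- versus variable-length distinction can be seen directly in the symbol construction. The FDPIM frame below is an assumed PPM-like stand-in for illustration, not necessarily the authors' exact frame format:

```python
def dpim_symbol(k, guard=1):
    """DPIM: one pulse, `guard` guard slot(s), then k empty slots.
    The symbol length varies with the data value k."""
    return [1] + [0] * (guard + k)

def fdpim_symbol(k, nslots=16):
    """A fixed-length stand-in: one pulse at slot k of an nslots-slot frame,
    so every symbol has the same length, which is what a block code
    such as a Turbo code needs for clean soft-decision decoding."""
    frame = [0] * nslots
    frame[k] = 1
    return frame
```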
Lau, KN
1999-01-01
We have evaluated the information-theoretic performance of variable-rate adaptive channel coding for Rayleigh fading channels. The channel states are detected at the receiver and fed back to the transmitter over a noiseless feedback link. Based on the channel state information, the transmitter can adjust the channel coding scheme accordingly. A coherent channel and arbitrary channel symbols with a fixed average transmitted power constraint are assumed. The channel capacity and the err...
Giga-bit optical data transmission module for Beam Instrumentation
Roedne, L T; Cenkeramaddi, L R; Jiao, L
Particle accelerators require electronic instrumentation for diagnostics, assessment and monitoring during operation of the transferred and circulating beams. A sensor located near the beam provides an electrical signal related to the observable quantity of interest. The front-end electronics provides analog-to-digital conversion of the quantity being observed, and the generated data are transferred to the external digital back-end for processing, display to the operators, and logging. This research project investigates the feasibility of radiation-tolerant gigabit data transmission over optical fibre for beam instrumentation applications, starting from an assessment of state-of-the-art technology, identification of challenges, and the proposal of a system-level solution, to be validated with a PCB design in an experimental setup. The targets are a radiation tolerance of 10 kGy (Si) Total Ionizing Dose (TID) over 10 years of operation and a Bit Error Rate (BER) of 10^-6 or better. The findings and results of th...
Institute of Scientific and Technical Information of China (English)
郝万明; 杨守义
2014-01-01
For orthogonal frequency division multiple access (OFDMA)-based cognitive radio systems, the transmission rate of every cognitive user must in practice be an integer, yet previous rate-rounding algorithms considered only a single cognitive user. To address this situation, a new rate-rounding algorithm is proposed in this paper, modified from the previous algorithm. Each subcarrier rate is adjusted at most once, which ensures fairness between cognitive users during rate rounding and also improves the total bit rate. Simulation results show that the proposed algorithm effectively improves fairness among cognitive users.
Directory of Open Access Journals (Sweden)
Demirhan Erdal
2015-01-01
Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and a parsimonious error-correction model are applied to determine the long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of a GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of the variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
A NEW LABELING SEARCH ALGORITHM FOR BIT-INTERLEAVED CODED MODULATION WITH ITERATIVE DECODING
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
Bit-Interleaved Coded Modulation with Iterative Decoding (BICM-ID) is a bandwidth-efficient transmission scheme in which the bit error rate is reduced through iterative information exchange between the inner demapper and the outer decoder. The choice of symbol mapping is the crucial design parameter. This paper shows that the Harmonic Mean of the Minimum Squared Euclidean (HMMSE) distance is the best criterion for mapping design. Based on the HMMSE distance criterion, a new search algorithm for finding optimized labeling maps for BICM-ID systems is proposed. Numerical results and performance comparisons show that the new labeling search method has low complexity and outperforms labeling schemes based on other design criteria in BICM-ID systems; it is therefore an optimized labeling method.
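The HMMSE criterion can be computed directly for a candidate labeling. The sketch below scores 8-PSK labelings under the common ideal-feedback assumption, where the competing symbol for bit b is the one whose label differs only in that bit; the exact normalization is an assumption for illustration:

```python
import numpy as np

def hmmse(labels, m=3):
    """Harmonic mean of the squared Euclidean distances between symbols whose
    labels differ in exactly one bit (the ideal-feedback BICM-ID criterion)."""
    M = 1 << m
    pts = np.exp(2j * np.pi * np.arange(M) / M)      # unit-energy 8-PSK
    pos = {lab: pts[i] for i, lab in enumerate(labels)}
    inv_sum = 0.0
    for lab in labels:
        for b in range(m):
            partner = lab ^ (1 << b)                 # label with bit b flipped
            inv_sum += 1.0 / abs(pos[lab] - pos[partner]) ** 2
    return (inv_sum / (m * M)) ** -1

natural = list(range(8))            # label k sits at angle 2*pi*k/8
gray = [0, 1, 3, 2, 6, 7, 5, 4]     # Gray labeling around the circle
```

With ideal feedback, the natural labeling scores higher than Gray here, consistent with the known observation that Gray mapping offers little iterative gain in BICM-ID.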
Analysis of bit-rock interaction during stick-slip vibrations using PDC cutting force model
Energy Technology Data Exchange (ETDEWEB)
Patil, P.A.; Teodoriu, C. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE
2013-08-01
Drillstring vibration is one of the factors limiting drilling performance, and it also causes premature failure of drillstring components. A polycrystalline diamond compact (PDC) bit enhances overall drilling performance, giving the best rate of penetration at the lowest cost per foot, but PDC bits are more susceptible to the stick-slip phenomenon, which results in large fluctuations of bit rotational speed. Based on a torsional drillstring model developed in Matlab/Simulink for analyzing the parametric influence of drilling parameters and drillstring properties on stick-slip vibrations, the relations between weight on bit, torque on bit, bit speed, rate of penetration and friction coefficient have been analyzed. When drilling with PDC bits, the bit-rock interaction is characterized by cutting forces and frictional forces; the torque on bit and the weight on bit each have a cutting component and a frictional component when resolved in the horizontal and vertical directions. The paper assumes that the bit undergoes stick-slip vibrations while analyzing the bit-rock interaction of the PDC bit. A Matlab/Simulink bit-rock interaction model has been developed which gives the average cutting torque, T_c, and friction torque, T_f, on the cutters, as well as the corresponding average weight transferred by the cutting face, W_c, and by the wear flat face, W_f, of the cutters due to friction.
Error Locked Encoder and Decoder for Nanomemory Application
Directory of Open Access Journals (Sweden)
Y. Sharath
2014-03-01
Full Text Available Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rates in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to designing fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^-18 upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^11 bit/cm² with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.
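The paper's fault-secure detectors use EG-LDPC codes; as a much simpler stand-in (an assumption for illustration, not the authors' code), a systematic Hamming(7,4) single-error-correcting code shows the encode/syndrome-decode structure that this kind of memory protection builds on:

```python
import numpy as np

# Systematic Hamming(7,4): c = [d | p], G = [I4 | A], H = [A^T | I3], H c = 0 (mod 2)
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def encode(d):
    """4 data bits -> 7-bit codeword."""
    return d @ G % 2

def correct(r):
    """Syndrome-decode a received 7-bit word, fixing up to one flipped bit."""
    s = H @ r % 2
    if s.any():
        err = int(np.where((H.T == s).all(axis=1))[0][0])  # column matching syndrome
        r = r.copy()
        r[err] ^= 1
    return r[:4]   # systematic code: the first four bits are the data
```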
Directory of Open Access Journals (Sweden)
Johanna I Westbrook
2012-01-01
Full Text Available BACKGROUND: Considerable investments are being made in commercial electronic prescribing systems (e-prescribing) in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error. METHODS AND RESULTS: We conducted a before-and-after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. We calculated prescribing error rates per admission and per 100 patient days; rates of serious errors (on a 5-point severity scale, with those ≥3 categorised as serious) by hospital and study period; and rates and categories of postintervention "system-related" errors (where system functionality or design contributed to the error). Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (reductions of 66.1% [95% CI 53.9%-78.3%], 57.5% [33.8%-81.2%], and 60.5% [48.5%-72.4%], respectively). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23-7.28) to 2.12 (95% CI 1.71-2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30-3.93) to 1.46 (95% CI 1.20-1.73; p<0.0001). This
Test results judgment method based on BIT faults
Institute of Scientific and Technical Information of China (English)
Wang Gang; Qiu Jing; Liu Guanjun; Lyu Kehong
2015-01-01
Built-in test (BIT) is responsible for equipment fault detection, so the correctness of test data directly influences diagnosis results. Equipment suffers all kinds of environmental stresses, such as temperature, vibration, and electromagnetic stress. As an embedded testing facility, BIT also suffers these stresses, and the resulting interference and faults affect the test course, producing unreliable results. It is therefore necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would improve BIT reliability, but existing anti-jamming research mainly addresses safeguard design and signal processing. This paper focuses on monitoring test results and judging BIT equipment (BITE) failures, and a series of improved approaches is proposed. First, the stress influences on components are illustrated and their effects on diagnosis results are summarized. Second, a composite BIT program with information integration is proposed, and a stress monitoring program is given. Third, based on a detailed analysis of system faults and the forms of BIT results, a test sequence control method is proposed; it assists BITE failure judgment and reduces error probability. Finally, validation cases prove that these approaches enhance credibility.
Assessment of error rates in acoustic monitoring with the R package monitoR
Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese
2016-01-01
Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were for song event detection.
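Spectrogram cross-correlation with a score cutoff, as used above, can be sketched in a few lines. This is a toy version for illustration, not monitoR's actual implementation:

```python
import numpy as np

def xcorr_scores(spec, template):
    """Slide a template over a spectrogram (time axis = columns) and return a
    normalized cross-correlation score for every time offset."""
    nt = template.shape[1]
    t = template - template.mean()
    scores = []
    for i in range(spec.shape[1] - nt + 1):
        w = spec[:, i:i + nt]
        w = w - w.mean()
        denom = np.sqrt((w * w).sum() * (t * t).sum())
        scores.append(float((w * t).sum() / denom) if denom > 0 else 0.0)
    return np.array(scores)

def detect(scores, cutoff):
    """Offsets whose score reaches the chosen score cutoff count as detections."""
    return np.flatnonzero(scores >= cutoff)
```

Raising the cutoff trades false positives for false negatives, which is exactly the trade-off the survey evaluation above measures.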
Positional Information, in bits
Dubuis, Julien; Bialek, William; Wieschaus, Eric; Gregor, Thomas
2010-03-01
Pattern formation in early embryonic development provides an important testing ground for ideas about the structure and dynamics of genetic regulatory networks. Spatial variations in the concentration of particular transcription factors act as "morphogens," driving more complex patterns of gene expression that in turn define cell fates, which must be appropriate to the physical location of the cells in the embryo. Thus, in these networks, the regulation of gene expression serves to transmit and process "positional information." Here, using the early Drosophila embryo as a model system, we measure the amount of positional information carried by a group of four genes (the gap genes Hunchback, Krüppel, Giant and Knirps) that respond directly to the primary maternal morphogen gradients. We find that the information carried by individual gap genes is much larger than one bit, so that their spatial patterns provide much more than the location of an "expression boundary." Preliminary data indicate that, taken together, these genes provide enough information to specify the location of every row of cells along the embryo's anterior-posterior axis.
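Positional information of this kind is a mutual information, estimable from paired (position, expression) samples. A minimal plug-in estimator by binning is sketched below; the authors' estimator is more careful about sampling bias, so this is an assumed simplification:

```python
import numpy as np

def mutual_information_bits(x, g, bins=16):
    """Plug-in estimate of the mutual information I(x; g) in bits,
    obtained by binning paired samples into a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, g, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    pg = pxy.sum(axis=0, keepdims=True)   # marginal of g
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ pg)[nz])).sum())
```

A deterministic relation saturates the estimate near log2(bins), while independent samples score near zero (up to the well-known positive bias of plug-in estimators).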
Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams
Directory of Open Access Journals (Sweden)
Zeng Bing
2006-01-01
Full Text Available This paper deals with optimal packet-loss protection for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
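The parity-packet idea can be sketched with the simplest possible erasure code, a single XOR parity across equal-length packets. The paper's scalable parity packets are more elaborate; this only shows the recovery principle:

```python
def parity_packet(packets):
    """XOR all equal-length packets together into one parity packet."""
    out = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            out[i] ^= byte
    return bytes(out)

def recover(survivors, parity):
    """Rebuild the single missing packet: XOR of the survivors and the parity
    cancels everything except the lost packet."""
    return parity_packet(survivors + [parity])
```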
Reinforcement Learning in BitTorrent Systems
Izhak-Ratzin, Rafit; van der Schaar, Mihaela
2010-01-01
Recent research efforts have shown that the popular BitTorrent protocol does not provide fair resource reciprocation and may allow free-riding. In this paper, we propose a BitTorrent-like protocol that replaces the peer selection mechanisms in the regular BitTorrent protocol with a novel reinforcement learning (RL) based mechanism. Due to the inherent operation of P2P systems, which involves repeated interactions among peers over a long period of time, the peers can efficiently identify free-riders as well as desirable collaborators by learning the behavior of their associated peers. Thus, it can help peers improve their download rates and discourage free-riding, while improving fairness in the system. We model the peers' interactions in the BitTorrent-like network as a repeated interaction game, where we explicitly consider the strategic behavior of the peers. A peer, which applies the RL-based mechanism, uses a partial history of the observations on associated peers' statistical reciprocal behaviors to deter...
Simulation of DA DCT Using ECAT for Reducing the Truncation Errors
Directory of Open Access Journals (Sweden)
K. V. S. P. Pravallika
2013-03-01
Full Text Available Discrete cosine transform (DCT) is widely used in image and video compression applications. This paper mainly deals with the implementation of an image compression application based on Distributed Arithmetic (DA) DCT using an Error-Compensated Adder Tree (ECAT), and with simulating it to achieve a low error rate. The DA-based ECAT performs shifting and addition in parallel instead of using multipliers, which reduces complexity. The proposed architecture deals with 9-bit input and 12-bit output, where it meets the Peak Signal to Noise Ratio (PSNR) requirements. The advantages of ECAT-based DA-DCT are a low error rate and improved speed in image and video compression applications. The design is implemented in Verilog HDL, simulated in ModelSim XE III 6.4b, and synthesized using Xilinx ISE 10.1. The results obtained were evaluated with the help of MATLAB.
Bit Preservation: A Solved Problem?
Directory of Open Access Journals (Sweden)
David S. H. Rosenthal
2010-07-01
Full Text Available For years, discussions of digital preservation have routinely featured comments such as “bit preservation is a solved problem; the real issues are ...”. Indeed, current digital storage technologies are not just astoundingly cheap and capacious, they are astonishingly reliable. Unfortunately, these attributes drive a kind of “Parkinson’s Law” of storage, in which demands continually push beyond the capabilities of systems implementable at an affordable price. This paper is in four parts. Claims reviews a typical claim of storage system reliability, showing that it provides no useful information for bit preservation purposes. Theory proposes “bit half-life” as an initial, if inadequate, measure of bit preservation performance, expresses bit preservation requirements in terms of it, and shows that the requirements being placed on bit preservation systems are so onerous that the experiments required to prove that a solution exists are not feasible. Practice reviews recent research into how well actual storage systems preserve bits, showing that they fail to meet the requirements by many orders of magnitude. Policy suggests ways of dealing with this unfortunate situation.
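The "bit half-life" requirement can be made concrete: assuming independent exponential bit decay, the half-life needed to keep n bits intact for a given time with bounded loss probability follows directly. The numbers below are illustrative choices, not figures from the paper:

```python
import math

def required_bit_half_life(n_bits, years, p_loss):
    """Half-life h (in years) such that, with independent exponential bit decay,
    the probability of losing any of n_bits within `years` stays below p_loss."""
    # tolerable per-bit loss probability, computed stably for huge n_bits
    q = -math.expm1(math.log1p(-p_loss) / n_bits)
    # survival probability after `years` is 2**(-years/h); solve for h
    return years * math.log(2) / -math.log1p(-q)

# Illustrative: keep a petabyte (8e15 bits) for a century with at most a
# 50% chance of losing even a single bit.
h = required_bit_half_life(8e15, 100, 0.5)
```

Even this modest-sounding goal demands a bit half-life of nearly 10^18 years, far beyond anything an affordable experiment could verify, which is the paper's point about infeasible requirements.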
Unequal Error Protection for Compressed Video over Noisy Channels
Vosoughi, Arash
2015-01-01
The huge amount of data embodied in a video signal is by far the biggest burden on existing wireless communication systems. Adopting an efficient video transmission strategy is thus crucial in order to deliver video data at the lowest bit rate and the highest quality possible. Unequal error protection (UEP) is a powerful tool in this regard, whose ultimate goal is to wisely provide a stronger protection for the more important data, and a weaker protection for the less important data carried b...
The Error-Pattern-Correcting Turbo Equalizer
Alhussien, Hakim
2010-01-01
The error-pattern-correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low-Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low-Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern-correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor than the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.
AN ERROR-RESILIENT H.263+ CODING SCHEME FOR VIDEO TRANSMISSION OVER WIRELESS NETWORKS
Institute of Scientific and Technical Information of China (English)
Li Jian; Bie Hongxia
2006-01-01
Video transmission over wireless networks has received much attention recently because of the networks' restricted bandwidth and high bit-error rate. Based on H.263+, an error-resilient scheme that reverses part of the stream sequence of each Group Of Blocks (GOB) is presented to improve video robustness without additional bandwidth burden. Error patterns are employed to simulate Wideband Code Division Multiple Access (WCDMA) channels to evaluate error-resilience performance. Simulation results show that both the subjective and objective quality of the reconstructed images is improved remarkably: the mean Peak Signal to Noise Ratio (PSNR) is increased by 0.5 dB, and the highest increase is 2 dB.
Computing Bits of Algebraic Numbers
Datta, Samir
2011-01-01
We initiate the complexity-theoretic study of the problem of computing the bits of (real) algebraic numbers. This extends the work of Yap on computing the bits of transcendental numbers like π in Logspace. Our main result is that computing a bit of a fixed real algebraic number is in C=NC^1 ⊆ Logspace when the bit position has a verbose (unary) representation, and in the counting hierarchy when it has a succinct (binary) representation. Our tools are drawn from elementary analysis and numerical analysis, and include the Newton-Raphson method. The proof of our main result is entirely elementary, preferring the elementary Liouville theorem over the much deeper Roth theorem for algebraic numbers. We leave the possibility of proving non-trivial lower bounds for the problem of computing the bits of an algebraic number, given the bit position in binary, as our main open question. In this direction we show very limited progress by proving a lower bound for rationals.
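Newton-Raphson is exactly the kind of tool that makes bits of algebraic numbers computable; a minimal integer-arithmetic sketch (illustrative only, far from the paper's complexity-theoretic machinery) extracts the k-th fractional bit of the algebraic number sqrt(2):

```python
def newton_isqrt(n):
    """Integer square root via the Newton-Raphson iteration."""
    if n < 2:
        return n
    x = 1 << ((n.bit_length() + 1) // 2)   # initial overestimate of sqrt(n)
    while True:
        y = (x + n // x) // 2
        if y >= x:        # iteration stops decreasing: x == floor(sqrt(n))
            return x
        x = y

def sqrt2_bit(k):
    """k-th fractional bit of sqrt(2): the low bit of floor(sqrt(2) * 2**k),
    since sqrt(2) * 2**k = sqrt(2 * 4**k)."""
    return newton_isqrt(2 << (2 * k)) & 1
```

In binary, sqrt(2) = 1.0110101000001..., and the sketch reproduces those fractional bits.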
String bit models for superstring
Energy Technology Data Exchange (ETDEWEB)
Bergman, O.; Thorn, C.B.
1995-12-31
The authors extend the model of string as a polymer of string bits to the case of superstring. They concentrate mainly on type II-B superstring, with some discussion of the obstacles presented by non-II-B superstring, together with possible strategies for surmounting them. As with previous work on bosonic string, they work within the light-cone gauge. The bit model possesses a good deal less symmetry than the continuous string theory. For one thing, the bit model is formulated as a Galilei-invariant theory in (D − 2) + 1 dimensional space-time. This means that Poincare invariance is reduced to the Galilei subgroup in D − 2 space dimensions. Naturally, the supersymmetry present in the bit model is likewise dramatically reduced. Continuous string can arise in the bit models with the formation of infinitely long polymers of string bits. Under the right circumstances (at the critical dimension) these polymers can behave as string moving in D-dimensional space-time, enjoying the full N = 2 Poincare supersymmetric dynamics of type II-B superstring.
Borot de Battisti, M; Denis de Senneville, B; Maenhout, M; Hautvast, G; Binnekamp, D; Lagendijk, J J W; van Vulpen, M; Moerland, M A
2016-03-01
The development of magnetic resonance (MR) guided high dose rate (HDR) brachytherapy for prostate cancer has gained increasing interest for delivering a high tumor dose safely in a single fraction. To support needle placement in the limited workspace inside the closed-bore MRI, a single-needle MR-compatible robot is currently under development at the University Medical Center Utrecht (UMCU). This robotic device taps the needle in a divergent way from a single rotation point into the prostate. With this setup, it is warranted to deliver the irradiation dose by successive insertions of the needle. Although robot-assisted needle placement is expected to be more accurate than manual template-guided insertion, needle positioning errors may occur and are likely to modify the pre-planned dose distribution.In this paper, we propose a dose plan adaptation strategy for HDR prostate brachytherapy with feedback on the needle position: a dose plan is made at the beginning of the interventional procedure and updated after each needle insertion in order to compensate for possible needle positioning errors. The introduced procedure can be used with the single needle MR-compatible robot developed at the UMCU. The proposed feedback strategy was tested by simulating complete HDR procedures with and without feedback on eight patients with different numbers of needle insertions (varying from 4 to 12). In of the cases tested, the number of clinically acceptable plans obtained at the end of the procedure was larger with feedback compared to the situation without feedback. Furthermore, the computation time of the feedback between each insertion was below 100 s which makes it eligible for intra-operative use.
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and
Hash Based Least Significant Bit Technique For Video Steganography
Directory of Open Access Journals (Sweden)
Prof. Dr. P. R. Deshmukh ,
2014-01-01
Full Text Available The Hash Based Least Significant Bit Technique for Video Steganography deals with hiding a secret message or information within a video. Steganography is covered writing: it comprises processes that conceal information within other data and also conceal the fact that a secret message is being sent. Steganography is the art of secret communication, or the science of invisible communication. In this paper a hash-based least significant bit (LSB) technique for video steganography is proposed whose main goal is to embed secret information in a particular video file and then extract it using a stego key or password. LSB insertion is used for steganography, embedding data in the cover video by changing only the lowest bit, so the insertion is not visible. Data hiding is the process of embedding information in a video without changing its perceptual quality. The proposed method involves two measures, Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE), computed between the original video files and the steganographic video files over all video frames, where distortion is measured using PSNR. A hash function is used to select the positions at which the bits of the secret message are inserted into the LSB bits.
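The mechanics of hash-driven LSB embedding and the PSNR/MSE distortion check can be sketched in a few lines. This is an illustrative model only: the function names, the SHA-256 position hash, and the flat byte-array "frame" are assumptions for the sketch, not the paper's implementation.

```python
import hashlib
import math

def embed_lsb(frame, message_bits, key):
    """Embed message bits into the LSBs of frame bytes.

    A hash of the stego key and the bit index picks each embedding
    position (hypothetical stand-in for the paper's hash function)."""
    frame = bytearray(frame)
    used = set()
    for i, bit in enumerate(message_bits):
        # Derive a pseudo-random position from the stego key.
        h = hashlib.sha256(f"{key}:{i}".encode()).digest()
        pos = int.from_bytes(h[:4], "big") % len(frame)
        while pos in used:                      # avoid position collisions
            pos = (pos + 1) % len(frame)
        used.add(pos)
        frame[pos] = (frame[pos] & 0xFE) | bit  # overwrite only the LSB
    return bytes(frame)

def mse_psnr(orig, stego):
    """Distortion between original and stego frames (8-bit samples)."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, stego)) / len(orig)
    psnr = float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)
    return mse, psnr

cover = bytes(range(256)) * 4                   # toy 1024-byte "frame"
stego = embed_lsb(cover, [1, 0, 1, 1], key="pass")
mse, psnr = mse_psnr(cover, stego)
```

Since each embedded bit changes a byte by at most 1, the MSE stays tiny and the PSNR very high, which is exactly the "not visible" property the abstract claims for LSB insertion.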
Silicon chip based wavelength conversion of ultra-high repetition rate data signals
DEFF Research Database (Denmark)
Hu, Hao; Ji, Hua; Galili, Michael;
2011-01-01
We report on all-optical wavelength conversion of 160, 320 and 640 Gbit/s line-rate data signals using four-wave mixing in a 3.6 mm long silicon waveguide. Bit error rate measurements validate the performance within FEC limits.
Vo, Q. D.
1984-01-01
A program written to simulate Real Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for corresponding concatenated systems. A (6,6/30) FUM code combined with a 6-bit Reed-Solomon code was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to estimate the symbol-error probability of these codes.
Müller, Amanda
2015-01-01
This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206, 96 and 35 errors per 1000 words, respectively. The following section…
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PMID:24773354
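The inflation mechanism described above can be reproduced with a small Monte-Carlo sketch. The exponential data, sample size, and the |t| > 2.0 critical value (approximately alpha = .05 for these degrees of freedom) are illustrative choices for the sketch, not the authors' exact simulation design.

```python
import math
import random

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / math.sqrt(vx / len(x) + vy / len(y))

def trim(x, z=2.0):
    """Remove 'outliers' beyond z sample SDs of the sample mean."""
    m = sum(x) / len(x)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (len(x) - 1))
    return [v for v in x if abs(v - m) <= z * s]

rng = random.Random(7)
n, reps, crit = 30, 4000, 2.0      # |t| > 2.0 ~ alpha .05 at df ~ 58
rej_raw = rej_trim = 0
for _ in range(reps):
    a = [rng.expovariate(1.0) for _ in range(n)]   # same skewed law in
    b = [rng.expovariate(1.0) for _ in range(n)]   # both groups: H0 true
    rej_raw += abs(welch_t(a, b)) > crit
    rej_trim += abs(welch_t(trim(a), trim(b))) > crit
# rej_trim / reps vs rej_raw / reps compares Type I rates with and
# without Z-based outlier removal on nonnormal (skewed) data.
```

With skewed data, trimming at Z = 2 systematically cuts the long tail after looking at the sample, which is precisely the data-dependent step the article warns distorts the t test's Type I error rate.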
Zhu, Jin; Wang, Dayan; Xie, Wanqing
2015-02-20
Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems, which will decrease system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, the diversified wavefront deformation can be described by wavelet parameters: coefficient, dilation, and shift factors, where the coefficient factor represents the depth, dilation factor represents the area, and shift factor is for location. Based on this, the relationship between PER and wavelet parameters is analyzed from a theoretical viewpoint. Numerical results illustrate the validity of theoretical analysis: PER increases with the depth and area and decreases if location gets farther from the center of the optical antenna. In addition to describing diversified deformation, the advantage of the wavelet method over Zernike polynomials in computational complexity is shown via numerical example. This work provides a feasible method for the description along with influence analysis of diversified wavefront deformation from a practical viewpoint and will be helpful for designing optical systems.
Chuanshi Brand Tri-cone Roller Bit
Institute of Scientific and Technical Information of China (English)
Chen Xilong; Shen Zhenzhong; Yuan Xiaoyi
1997-01-01
Compared with other types of bits, the tri-cone roller bit has the advantages of excellent overall performance, low price and a wide usage range, and it is free of formation limits. The tri-cone roller bit accounts for 90% of the total bits in use. The Chengdu Mechanical Works, a major manufacturer of petroleum machinery and one of the four major tri-cone roller bit factories in China, has produced 120 types of bits in seven series and 19 sizes since 1967. The bits manufactured by the factory are not only sold to domestic oilfields but also exported to Japan, Thailand, Indonesia, the Philippines and the Middle East.
Directory of Open Access Journals (Sweden)
Ozlu Nagihan
2009-03-01
Full Text Available Abstract Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracy of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparison with validated test methods. Methods 112 selected clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan correctly identified all A. baumannii strains, while Vitek 2 failed to identify one strain, and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance, with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%), slightly (0.3%) above the acceptable limit, and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest clinical laboratories using the MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility
A 1.5-bit/stage Pipelined Analog-to-Digital Converter Design with Independency of Capacitor Mismatch
Institute of Scientific and Technical Information of China (English)
LI Dan; RONG Men-tian; MAO Jun-fa
2007-01-01
A new technique named the charge temporary storage technique (CTST) was presented to improve the linearity of a 1.5-bit/stage pipelined analog-to-digital converter (ADC). The residual voltage was obtained from the sampling capacitor, while the other capacitor served only as temporary charge storage. The nonlinearity produced by the mismatch of these capacitors was thus eliminated without adding extra capacitor error-averaging amplifiers. Simulation results confirmed the high linearity and low dissipation of pipelined ADCs implemented with CTST, making it a new method for implementing high-resolution, small-size ADCs.
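The 1.5-bit-per-stage conversion such stages implement can be modeled ideally as follows. This is a textbook sketch of the stage algorithm, not of the paper's CTST circuit:

```python
def pipeline_adc(vin, vref=1.0, stages=10):
    """Ideal 1.5-bit/stage pipeline ADC: each stage resolves a digit
    d in {-1, 0, 1} against thresholds at +/- vref/4, then passes the
    residue 2*v - d*vref to the next stage. The redundant 'half bit'
    is what lets digital correction absorb comparator offsets."""
    v, code = vin, 0
    for _ in range(stages):
        d = -1 if v < -vref / 4 else (1 if v > vref / 4 else 0)
        v = 2 * v - d * vref       # residue, amplified by 2
        code = 2 * code + d        # overlap-and-add digit combination
    return code                    # vin ~= code * vref / 2**stages

# Example: a 10-stage pipeline recovers the input to within one LSB.
estimate = pipeline_adc(0.3) / 2 ** 10
```

Because each stage's gain is exactly 2 here, the model is perfectly linear; capacitor mismatch in a real switched-capacitor stage perturbs that gain, which is the error source CTST targets.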
Eswaran, Krishnan; Ramchandran, Kannan
2008-01-01
A fundamental problem in dynamic frequency reuse is that the cognitive radio is ignorant of the amount of interference it inflicts on the primary license holder. A model for such a situation is proposed and analyzed. The primary sends packets across an erasure channel and employs simple ACK/NAK feedback (ARQs) to retransmit erased packets. Furthermore, its erasure probabilities are influenced by the cognitive radio's activity. While the cognitive radio does not know these interference characteristics, it can eavesdrop on the primary's ARQs. The model leads to strategies in which the cognitive radio adaptively adjusts its input based on the primary's ARQs thereby guaranteeing the primary exceeds a target packet rate. A relatively simple strategy whereby the cognitive radio transmits only when the primary's empirical packet rate exceeds a threshold is shown to have interesting universal properties in the sense that for unknown time-varying interference characteristics, the primary is guaranteed to meet its targ...
The Application Wavelet Transform Algorithm in Testing ADC Effective Number of Bits
Directory of Open Access Journals (Sweden)
Emad A. Awada
2013-10-01
Full Text Available In evaluating analog-to-digital converters, many parameters are checked for performance and error rate. One of these parameters is the device's Effective Number of Bits (ENOB). In classical testing of the Effective Number of Bits, testing is based on the signal-to-noise components ratio (SNR), whose coefficients are derived via the frequency domain (Fourier transform) of the ADC's output signal. Such a technique is extremely sensitive to noise and requires a large number of data samples, that is, a longer and more complex testing process as the device under test increases in resolution. Meanwhile, a new time-frequency domain approach (known as the wavelet transform) is proposed to measure and analyze the Effective Number of Bits parameter of analog-to-digital converters with less complexity and fewer data samples. In this work, the wavelet transform algorithm was used to estimate the worst-case Effective Number of Bits and to compare the new testing results with classical testing methods. The wavelet transform algorithm has shown improvement in the DSP testing process in terms of time and computational complexity, based on its special multi-resolution properties.
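The classical figure of merit the wavelet method competes with is ENOB derived from SINAD. A minimal sketch, assuming an ideal mid-rise quantizer and a coherently sampled full-scale sine (the tone frequency and sample count are illustrative parameters):

```python
import math

def enob_from_sinad(sinad_db):
    """Classical ENOB formula: an ideal B-bit converter driven by a
    full-scale sine gives SINAD = 6.02*B + 1.76 dB."""
    return (sinad_db - 1.76) / 6.02

def sine_test_enob(bits=8, samples=4096):
    """Quantize a full-scale sine with an ideal mid-rise ADC and
    measure SINAD in the time domain (signal power / error power)."""
    lsb = 2.0 / (1 << bits)                # full scale is [-1, 1)
    sig_p = err_p = 0.0
    for n in range(samples):
        # 127 cycles in 4096 samples -> coherent sampling, no leakage.
        x = math.sin(2 * math.pi * 127 * n / samples)
        q = (math.floor(x / lsb) + 0.5) * lsb   # mid-rise quantizer
        sig_p += x * x
        err_p += (q - x) ** 2
    sinad = 10 * math.log10(sig_p / err_p)
    return enob_from_sinad(sinad)
```

For an ideal quantizer the measured ENOB lands very close to the nominal bit count; real converters fall short, and the abstract's point is that the wavelet route reaches a comparable figure with fewer samples.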
Burgess, Ralph; Yang, Ziheng
2008-09-01
Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species. PMID:18603620
Errors of measurement by laser goniometer
Agapov, Mikhail Y.; Bournashev, Milhail N.
2000-11-01
The report is dedicated to research of systematic errors of angle measurement by a dynamic laser goniometer (DLG) on the basis of a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and for their algorithmic compensation. The OE was of the absolute photoelectric angle encoder type with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by applying a method of cross-calibration at mutual turns of the OE in relation to the DLG base and of the CU in relation to the OE rotor. A Fourier analysis of the observed data was then performed. The research of dynamic errors of angle measurement was made using the dependence of the measured angle, between a reference direction assigned by the interference null-indicator (NI) with an 8-faced optical polygon (OP) and the direction defined by means of the OE, on the angular rate of rotation. The obtained results allow algorithmic compensation of the systematic error and, in total, a considerable reduction of the total measurement error.
An analysis of the impact of data errors on backorder rates in the F404 engine system
Burson, Patrick A. R.
2003-01-01
Approved for public release; distribution is unlimited. In the management of the U.S. Naval inventory, data quality is of critical importance. Errors in major inventory databases contribute to increased operational costs, reduced revenue, and loss of confidence in the reliability of the supply system. Maintaining error-free databases is not a realistic objective. Data-quality efforts must be prioritized to ensure that limited resources are allocated to achieve the maximum benefit. Thi...
Yang, S. -R. Eric; Schliemann, John; MacDonald, A. H.
2002-01-01
Bilayer quantum Hall systems can form collective states in which electrons exhibit spontaneous interlayer phase coherence. We discuss the possibility of using bilayer quantum dot many-electron states with this property to create two-level systems that have potential advantages as quantum bits.
New low bit rate speech coding scheme based on compressed sensing
Institute of Scientific and Technical Information of China (English)
叶蕾; 杨震; 孙林慧
2011-01-01
Utilizing the sparsity of the high-frequency wavelet coefficients of speech signals and the theory of compressed sensing, a new low bit rate speech coding scheme based on compressed sensing is proposed. The reconstruction of the high-frequency wavelet coefficients is achieved by l1-norm optimization and by codebook-prediction reconstruction, respectively. The l1 reconstruction works well for large-amplitude coefficients and suits both speech and music, an advantage with which traditional linear predictive coding cannot compare. Codebook-prediction reconstruction estimates the locations of the sparse coefficients well and reduces the amount of computation by avoiding the basis pursuit or matching pursuit algorithms commonly used in compressed sensing reconstruction. Combining the two methods brings the advantages of both and further improves the quality of the reconstructed speech.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller, and that lower LDPC encoding rates offer better error characteristics. PMID:22163732
Parity Bit Replenishment for JPEG 2000-Based Video Streaming
Directory of Open Access Journals (Sweden)
François-Olivier Devaux
2009-01-01
Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful to adapt on the fly pre-encoded content to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance to the JPEG 2000 wavelet representation, a particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also spatial correlation among wavelet subbands coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that compared to pure INTRA-based conditional replenishment solutions, the addition of the parity bits option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.
GOP-Level Bit Allocation Using Reverse Dynamic Programming
Institute of Scientific and Technical Information of China (English)
LU Yang; XIE Jun; LI Hang; CUI Huijuan
2009-01-01
An efficient adaptive group of pictures (GOP)-level bit allocation algorithm was developed based on reverse dynamic programming (RDP). The algorithm gives the initial delay and sequence distortion curve with just one iteration. A simple GOP-level rate and distortion model was then developed for two-level constant-quality rate control. The initial delay values and the corresponding optimal GOP-level bit allocation scheme can be obtained for video streaming, along with the proper initial delay for various distortion tolerance levels. Simulations show that the algorithm provides an efficient solution for delay- and buffer-constrained GOP-level rate control for video streaming.
7-bit meta-transliterations for 8-bit romanizations
Lagally, Klaus
1997-01-01
We propose a general strategy for deriving 7-bit encodings for texts in languages which use an alphabetic non-Roman script, like Arabic, Persian, Sanskrit and many other Indic scripts, and for which there is some transliteration convention using Roman letters with additional diacritical marks. These schemes, which we will call 'meta-transliterations', are based on using single ASCII letters for representing Roman letters, and digraphs consisting of a suitable punctuation character and an ASCI...
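The digraph idea can be illustrated with a toy codec. The mapping table below is invented for the example (it is not Lagally's actual scheme); it only shows the principle of pairing a punctuation character with an ASCII letter to stand for one diacritic-bearing Roman letter:

```python
# Illustrative digraph table: punctuation char + ASCII letter encodes
# one Roman letter with a diacritic (hypothetical mappings).
TO_7BIT = {"ā": "=a", "ī": "=i", "ū": "=u",
           "ṣ": ".s", "ṭ": ".t", "ḥ": ".h", "ś": ";s"}
FROM_7BIT = {v: k for k, v in TO_7BIT.items()}

def encode(text):
    """Replace non-ASCII transliteration letters by 7-bit digraphs."""
    return "".join(TO_7BIT.get(ch, ch) for ch in text)

def decode(text):
    """Invert the digraphs; all other characters pass through."""
    out, i = [], 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in FROM_7BIT:
            out.append(FROM_7BIT[pair])
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)
```

A real meta-transliteration must also handle punctuation characters that occur literally (escaping), which this toy codec omits.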
Beamforming under Quantization Errors in Wireless Binaural Hearing Aids
Directory of Open Access Journals (Sweden)
Kees Janse
2008-09-01
Full Text Available Improving the intelligibility of speech in different environments is one of the main objectives of hearing aid signal processing algorithms. Hearing aids typically employ beamforming techniques using multiple microphones for this task. In this paper, we discuss a binaural beamforming scheme that uses signals from the hearing aids worn on both the left and right ears. Specifically, we analyze the effect of a low bit rate wireless communication link between the left and right hearing aids on the performance of the beamformer. The scheme is comprised of a generalized sidelobe canceller (GSC) that has two inputs, observations from one ear and quantized observations from the other ear, and whose output is an estimate of the desired signal. We analyze the performance of this scheme in the presence of a localized interferer as a function of the communication bit rate, using the resultant mean-squared error as the signal distortion measure.
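The cost of the low bit rate link can be sketched with a uniform quantizer model: the mean-squared quantization error falls as the step size squared over 12, roughly 6 dB per added bit. The mid-rise quantizer and uniform test signal below are modeling assumptions for the sketch, not the paper's GSC setup:

```python
import math
import random

def quantize(x, bits, full_scale=1.0):
    """Uniform mid-rise quantizer with step = 2*full_scale / 2**bits,
    a simple model for the contralateral signal on the wireless link."""
    step = 2 * full_scale / (1 << bits)
    return (math.floor(x / step) + 0.5) * step

rng = random.Random(0)
mses = {}
for bits in (2, 4, 8):
    xs = [rng.uniform(-1.0, 1.0) for _ in range(20000)]
    mses[bits] = sum((quantize(x, bits) - x) ** 2 for x in xs) / len(xs)
    # Theory: MSE ~ step**2 / 12, i.e. ~6 dB less noise per extra bit.
```

This is the distortion the GSC sees on one of its two inputs, which is why the beamformer's output MSE improves monotonically with the communication bit rate.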
Performance Analysis of MC-CDMA in the Presence of Carrier Phase Errors
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
This paper presents the effect of carrier phase errors on MC-CDMA performance in downlink mobile communications. The Signal-to-Noise Ratio (SNR) and Bit-Error-Rate (BER) are analyzed taking into account the effect of carrier phase errors. It is shown that the MC-CDMA system is very sensitive to a carrier frequency offset: the system performance rapidly degrades and strongly depends on the number of carriers. For a maximal load, the degradation caused by carrier phase jitter is independent of the number of carriers.
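For intuition on why phase errors are damaging, consider the single-carrier coherent BPSK case: a static phase error theta shrinks the useful signal amplitude by cos(theta), so the BER becomes Q(sqrt(2*SNR)*cos(theta)). This is a one-carrier illustration of the mechanism, not the paper's multicarrier MC-CDMA analysis:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(snr_db, phase_err_rad=0.0):
    """BER of coherent BPSK when the recovered carrier is offset by a
    static phase error: the effective amplitude shrinks by cos(theta)."""
    snr = 10 ** (snr_db / 10)
    return qfunc(math.sqrt(2 * snr) * math.cos(phase_err_rad))
```

At 10 dB SNR the BER is a few parts per million with perfect recovery, grows as the phase error increases, and collapses to 0.5 (coin-flipping) at a 90-degree error, where the correlator output carries no signal at all.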
Tao Lyu; Suying Yao; Kaiming Nie; Jiangtao Xu
2014-01-01
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into coarse phase and fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on the s...
Directory of Open Access Journals (Sweden)
Sharmila Vaz
Full Text Available The Social Skills Rating System (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the use of multiple informants (e.g. teacher and parent reports, not just the student), as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
A Holistic Approach to Bit Preservation
DEFF Research Database (Denmark)
Zierau, Eld Maj-Britt Olmütz
2011-01-01
This thesis presents three main results for a holistic approach to bit preservation, where the ultimate goal is to find the optimal bit preservation strategy for specific digital material that must be digitally preserved. Digital material consists of sequences of bits, where a bit is a binary digit...... preservation strategy. This can be aspects of how the permanent access to the digital material must be ensured. It can also be aspects of how the material must be treated as part of using it. This includes aspects related to how the digital material to be bit preserved is represented, as well as requirements...... for confidentiality, availability, costs, additional to the requirements of ensuring bit safety. A few examples are: • The way that digital material is represented in files and structures has an influence on whether it is possible to interpret and use the bits at a later stage. Consequentially, the way bits represent...
Flexible Bit Preservation on a National Basis
DEFF Research Database (Denmark)
Jurik, Bolette; Nielsen, Anders Bo; Zierau, Eld
2012-01-01
In this paper we present the results from The Danish National Bit Repository project. The project aim was establishment of a system that can offer flexible and sustainable bit preservation solutions to Danish cultural heritage institutions. Here the bit preservation solutions must include support...... of bit safety as well as other requirements like e.g. confidentiality and availability. The Danish National Bit Repository is motivated by the need to investigate and handle bit preservation for digital cultural heritage. Digital preservation relies on the integrity of the bits which digital material...... consists of, and it is with this focus that the project was initiated. This paper summarizes the requirements for a general system to offer bit preservation to cultural heritage institutions. On this basis the paper describes the resulting flexible system which can support such requirements. The paper...
A holistic approach to bit preservation
DEFF Research Database (Denmark)
Zierau, Eld
2012-01-01
Purpose: The purpose of this paper is to point out the importance of taking a holistic approach to bit preservation when setting out to find an optimal bit preservation solution for specific digital materials. In the last decade there has been an increasing awareness that bit preservation, which...... is to keep bits intact and readable, is far more complex than first anticipated, even in this narrow definition. This paper takes a more holistic approach to bit preservation, and looks at how an optimal bit preservation strategy can be found, when requirements like confidentiality, availability and costs...... are taken into account. Design/methodology/approach: The paper describes the various findings from previous research which have led to the holistic approach to bit preservation. This paper also includes an introduction to digital preservation with a focus on the role of bit preservation, which sets...
Li, Ping
2009-01-01
This paper establishes the theoretical framework of b-bit minwise hashing. The original minwise hashing method has become a standard technique for estimating set similarity (e.g., resemblance) with applications in information retrieval, data management, social networks and computational advertising. By only storing the lowest b bits of each (minwise) hashed value (e.g., b=1 or 2), one can gain substantial advantages in terms of computational efficiency and storage space. We prove the basic theoretical results and provide an unbiased estimator of the resemblance for any b. We demonstrate that, even in the least favorable scenario, using b=1 may reduce the storage space at least by a factor of 21.3 (or 10.7) compared to using b=64 (or b=32), if one is interested in resemblance > 0.5.
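The storage trick is easy to sketch. The following is a hypothetical Python illustration: the linear hash parameters and the simplified collision correction are assumptions for the sketch, not the paper's exact unbiased estimator.

```python
import random

def minhash_signature(s, num_perm, b, seed=0):
    """Minwise-hash a set and keep only the lowest b bits of each value."""
    rng = random.Random(seed)
    p = (1 << 61) - 1  # a Mersenne prime for the (assumed) linear hashes
    params = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_perm)]
    mask = (1 << b) - 1
    return [min((a * hash(x) + c) % p for x in s) & mask for a, c in params]

def estimate_resemblance(sig1, sig2, b):
    """Simplified estimator: correct the raw match rate for the ~2^-b
    probability that unrelated b-bit values collide by chance."""
    matches = sum(x == y for x, y in zip(sig1, sig2)) / len(sig1)
    c = 1.0 / (1 << b)
    return (matches - c) / (1 - c)

# Two sets with true resemblance |A ∩ B| / |A ∪ B| = 50/150 = 1/3
sig_a = minhash_signature(set(range(100)), 512, b=8)
sig_b = minhash_signature(set(range(50, 150)), 512, b=8)
r_hat = estimate_resemblance(sig_a, sig_b, b=8)
```

With 512 permutations the estimate lands near the true resemblance of 1/3; shrinking b trades a little variance for large storage savings, which is the paper's central point.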
Zhou, Qing F.; Mow, Wai Ho; Zhang, Shengli; Toumpakaris, Dimitris
2012-01-01
Motivated by applications such as battery-operated wireless sensor networks (WSN), we propose an easy-to-implement energy-efficient two-way relaying scheme. In particular, we address the challenge of improving the standard two-way selective decode-and-forward protocol (TW-SDF) in terms of block-error-rate (BLER) with minor additional complexity and energy consumption. By following the principle of soft relaying, our solution is the two-way one-bit soft forwarding (TW-1bSF) protocol in which t...
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a 3rd error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
A brief review on quantum bit commitment
Almeida, Álvaro J.; Loura, Ricardo; Paunković, Nikola; Silva, Nuno A.; Muga, Nelson J.; Mateus, Paulo; André, Paulo S.; Pinto, Armando N.
2014-08-01
In classical cryptography, the bit commitment scheme is one of the most important primitives. We review the state of the art of bit commitment protocols, emphasizing its main achievements and applications. Next, we present a practical quantum bit commitment scheme, whose security relies on current technological limitations, such as the lack of long-term stable quantum memories. We demonstrate the feasibility of our practical quantum bit commitment protocol and that it can be securely implemented with nowadays technology.
Perceptual importance analysis for H.264/AVC bit allocation
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
The existing H.264/AVC rate control schemes rarely include the perceptual considerations. As a result, the improvements in visual quality are hardly comparable to those in peak signal-to-noise ratio (PSNR). In this paper, we propose a perceptual importance analysis scheme to accurately abstract the spatial and temporal perceptual characteristics of video contents. Then we perform bit allocation at macroblock (MB) level by adopting a perceptual mode decision scheme, which adaptively updates the Lagrangian multiplier for mode decision according to the perceptual importance of each MB. Simulation results show that the proposed scheme can efficiently reduce bit rates without visual quality degradation.
Development of a jet-assisted polycrystalline diamond drill bit
Energy Technology Data Exchange (ETDEWEB)
Pixton, D.S.; Hall, D.R.; Summers, D.A.; Gertsch, R.E.
1997-12-31
A preliminary investigation has been conducted to evaluate the technical feasibility and potential economic benefits of a new type of drill bit. This bit transmits both rotary and percussive drilling forces to the rock face, and augments this cutting action with high-pressure mud jets. Both the percussive drilling forces and the mud jets are generated down-hole by a mud-actuated hammer. Initial laboratory studies show that increases in the rate of penetration on the order of a factor of two over unaugmented rotary and/or percussive drilling rates are possible with jet assistance.
Dispersion Tolerance of 40 Gbaud Multilevel Modulation Formats with up to 3 bits per Symbol
DEFF Research Database (Denmark)
Jensen, Jesper Bevensee; Tokle, Torger; Geng, Yan;
2006-01-01
We present numerical and experimental investigations of dispersion tolerance for multilevel phase- and amplitude modulation with up to 3 bits per symbol at a symbol rate of 40 Gbaud.
Institute of Scientific and Technical Information of China (English)
王立夫; 孙凤娟
2012-01-01
A test platform combining HF communication with ionospheric oblique sounding is introduced, with which ionospheric channel sounding and communication are carried out synchronously using the same hardware equipment. In this way, the problems of equipment mismatch and the lack of real-time channel parameters are solved. Based on the experimental data measured by this platform, the communication bit error ratio (BER) and the channel characteristic parameters, including signal-to-noise ratio (SNR), fading depth, fading rate, multipath spread, the signal strength of each mode, group distance, the phase of the major mode, Doppler shift and Doppler spread, are extracted. The impact of the channel characteristic parameters on the communication BER is statistically analyzed, and significant conclusions are proposed at the end of the paper.
Directory of Open Access Journals (Sweden)
P.Rajeepriyanka
2014-08-01
Full Text Available A UART (Universal Asynchronous Receiver and Transmitter) is a device allowing the reception and transmission of information in a serial and asynchronous way. This project focuses on the implementation of a UART with a status register using multi-bit flip-flops and compares it with a UART with a status register using single-bit flip-flops. During the reception of data, the status register indicates parity error, framing error, overrun error and break error. The multi-bit flip-flop is indicated in this status register. In modern very large scale integrated circuits, power reduction and area reduction have become vital design goals for sophisticated design applications. So in this project the power consumed and area occupied by multi-bit flip-flops and single-bit flip-flops are compared. The underlying idea behind the multi-bit flip-flop method is to reduce the total inverter count by sharing the inverters among the flip-flops. Eliminating redundant inverters when merging single-bit flip-flops into multi-bit flip-flops reduces wire length, which in turn reduces power consumption and area.
EXACT ERROR PROBABILITY OF ORTHOGONAL SPACE-TIME BLOCK CODES OVER FLAT FADING CHANNELS
Institute of Scientific and Technical Information of China (English)
Xu Feng; Yue Dianwu
2007-01-01
Space-time block coding is a recently proposed modulation scheme for transmit antenna diversity to combat the effects of wireless fading channels. Using the equivalent Single-Input Single-Output (SISO) model, this paper presents closed-form expressions for the exact Symbol Error Rate (SER) and Bit Error Rate (BER) of Orthogonal Space-Time Block Codes (OSTBCs) with M-ary Phase-Shift Keying (MPSK) and M-ary Quadrature Amplitude Modulation (MQAM) over flat uncorrelated Nakagami-m and Ricean fading channels.
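The flavor of such closed-form error-rate results can be checked numerically in the simplest special case: coherent BPSK over flat Rayleigh fading, whose exact average BER is Pb = (1/2)(1 - sqrt(g/(1+g))) at average SNR g. The sketch below is an illustration of that classical formula, not the paper's OSTBC derivation, and compares it against a Monte Carlo simulation.

```python
import math
import random

def bpsk_rayleigh_ber_exact(snr_db):
    """Exact average BER of coherent BPSK over flat Rayleigh fading."""
    g = 10 ** (snr_db / 10)
    return 0.5 * (1 - math.sqrt(g / (1 + g)))

def bpsk_rayleigh_ber_sim(snr_db, nbits=100_000, seed=1):
    """Monte Carlo: r = h*s + n with Rayleigh fading gain h, E[h^2] = 1."""
    rng = random.Random(seed)
    g = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * g))  # noise std giving average SNR g
    errors = 0
    for _ in range(nbits):
        h = math.sqrt((rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2) / 2)
        errors += (h + rng.gauss(0, sigma)) < 0  # transmitted symbol +1
    return errors / nbits
```

At 10 dB the exact formula gives about 2.3e-2, and the simulated estimate agrees to within Monte Carlo noise; the paper's contribution is analogous exact expressions for the much richer OSTBC/MPSK/MQAM and Nakagami-m/Ricean cases.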
A new diamond bit for extra-hard, compact and nonabrasive rock formation
Institute of Scientific and Technical Information of China (English)
王佳亮; 张绍和
2015-01-01
A new impregnated diamond bit was designed to solve the slipping problem encountered when impregnated diamond bits are used on extra-hard, compact and nonabrasive rock formations. SiC grits were added to the matrix; because of their weak holding force, they are easily exfoliated from the surface of the matrix, which keeps the surface non-smooth. Three Φ36/24 mm laboratory bits were manufactured for a laboratory drilling test on zirconia-corundum refractory brick. The test indicates that the abrasive resistance of the bit working layer depends on the SiC concentration: the higher the concentration, the weaker the abrasive resistance of the matrix. The new impregnated diamond bit was applied to drilling construction in a mining area in Jiangxi province, China. Field application indicates that the ROP (rate of penetration) of the new bit is approximately two to three times that of common bits. Compared with common bits, the surface of the new bit shows typical abrasive wear characteristics, and the renewal rate of the diamonds can be well matched to the wear rate of the matrix.
Pightling, Arthur W.; Nicholas Petronella; Franco Pagotto
2014-01-01
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance...
Soft Error Vulnerability of Iterative Linear Algebra Methods
Energy Technology Data Exchange (ETDEWEB)
Bronevetsky, G; de Supinski, B
2007-12-15
Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
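A single-event upset is easy to emulate in software by XOR-ing one bit of a floating-point value. The sketch below is a toy Jacobi solver with fault injection, an assumption-laden illustration rather than the authors' test harness: a low-order mantissa flip injected mid-run is washed out by later iterations of a stationary method, which is exactly why a converging residual alone is a weak corruption detector.

```python
import struct

def flip_bit(x, k):
    """Flip bit k (0..63) of an IEEE-754 double, emulating a soft error."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    (y,) = struct.unpack('<d', struct.pack('<Q', bits ^ (1 << k)))
    return y

def jacobi(A, b, iters=200, flip_at=None, flip_k=30):
    """Jacobi iteration; optionally corrupt x[0] after iteration flip_at."""
    n = len(b)
    x = [0.0] * n
    for it in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
        if it == flip_at:
            x[0] = flip_bit(x[0], flip_k)  # inject a single upset
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # diagonally dominant, so Jacobi converges
b = [1.0, 2.0]                 # exact solution: x = (1/11, 7/11)
clean = jacobi(A, b)
hit = jacobi(A, b, flip_at=50)  # mantissa-bit flip mid-run is absorbed
```

Flipping a high exponent bit instead (e.g. flip_k=62) perturbs the iterate enormously; a stationary method may still quietly converge back, illustrating the silent-data-corruption behavior the paper quantifies for Krylov-type methods.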
An Empirical Analysis of Requantization Errors for Recompressed JPEG Images
Directory of Open Access Journals (Sweden)
B.VINOTH KUMAR
2011-12-01
Full Text Available Images from sources like digital cameras and the internet are in the JPEG format. There is a tremendous need for recompression of JPEG images in order to satisfy space constraints and to transmit the images with limited bandwidth. Several techniques have been developed for recompressing JPEG images in order to achieve a low bit rate while retaining good visual quality. In this paper, we concentrate on the requantization method to achieve recompression. We have analyzed the occurrence of requantization errors empirically for the Normal rounding technique. Based on this analysis, we propose the Enhanced rounding technique for requantization of JPEG images. The resulting images are generally smaller in size and have improved perceptual image quality over the Normal rounding technique. We compare the recompression results for standard benchmark 256x256 gray-scale images against image quality measures such as image size, compression ratio, bits per pixel and Peak Signal to Noise Ratio (PSNR).
Generalized Punctured Convolutional Codes with Unequal Error Protection
Directory of Open Access Journals (Sweden)
Marcelo Eduardo Pellenz
2009-01-01
Full Text Available We conduct a code search restricted to the recently introduced class of generalized punctured convolutional codes (GPCCs) to find good unequal error protection (UEP) convolutional codes for a prescribed minimal trellis complexity. The trellis complexity is taken to be the number of symbols per information bit in the "minimal" trellis module for the code. The GPCC class has been shown to possess codes with good distance properties under this decoding complexity measure. New good UEP convolutional codes and their respective effective free distances are tabulated for a variety of code rates and "minimal" trellis complexities. These codes can be used in several applications that require different levels of protection for their bits, such as the hierarchical digital transmission of video or images.
Influence of Implementation on the Properties of Pseudorandom Number Generators with a Carry Bit
Vattulainen, I; Saarinen, J J; Ala-Nissilä, T
1993-01-01
We present results of extensive statistical and bit-level tests on three implementations of a pseudorandom number generator algorithm using the lagged Fibonacci method with an occasional addition of an extra bit. The first implementation is the RCARRY generator of James, which uses subtraction. The second is a modified version of it, in which a suspected error in the original implementation has been corrected. The third is our modification of RCARRY such that it utilizes addition of the carry bit. Our results show that there are no significant differences between the performance of these three generators.
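For reference, the core of RCARRY is a subtract-with-borrow lagged Fibonacci recurrence, x_n = x_{n-10} - x_{n-24} - c over 24-bit words, where c is the borrow (carry) bit; the third variant studied above replaces the subtraction with addition of the carry bit. A minimal sketch, with proper seeding and RANLUX-style luxury levels omitted as simplifying assumptions:

```python
from collections import deque

def rcarry(seed_words, n, r=24, s=10, base=1 << 24):
    """Subtract-with-borrow generator: x_n = x_{n-s} - x_{n-r} - c (mod base)."""
    assert len(seed_words) == r and all(0 <= w < base for w in seed_words)
    lag = deque(seed_words, maxlen=r)  # lag[-r] is the oldest stored word
    carry, out = 0, []
    for _ in range(n):
        t = lag[-s] - lag[-r] - carry
        if t < 0:
            t += base      # wrap around and set the borrow bit
            carry = 1
        else:
            carry = 0
        lag.append(t)
        out.append(t)
    return out

stream = rcarry(list(range(1, 25)), 1000)  # toy seed, for illustration only
```

Every output word stays in [0, 2^24), and the occasional carry is exactly the "extra bit" whose handling distinguishes the three implementations compared in the paper.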
Fully photonics-based physical random bit generator.
Li, Pu; Sun, Yuanyuan; Liu, Xianglian; Yi, Xiaogang; Zhang, Jianguo; Guo, Xiaomin; Guo, Yanqiang; Wang, Yuncai
2016-07-15
We propose a fully photonics-based approach for ultrafast physical random bit generation. This approach exploits a compact nonlinear loop mirror (called a terahertz optical asymmetric demultiplexer, TOAD) to sample the chaotic optical waveform in the all-optical domain and then generate random bit streams through comparison with a threshold level. This method can efficiently overcome the electronic jitter bottleneck confronted by existing random bit generators (RBGs) in practice. A proof-of-concept experiment demonstrates that this method can continuously extract 5 Gb/s random bit streams from the chaotic output of a distributed feedback laser diode (DFB-LD) with optical feedback. This limited generation rate is set by the bandwidth of the optical chaos used. PMID:27420532
Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments
Soury, Hamza
2013-07-01
This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.
Reconfigurable random bit storage using polymer-dispersed liquid crystal
Horstmeyer, Roarke; Yang, Changhuei
2014-01-01
We present an optical method of storing random cryptographic keys, at high densities, within an electronically reconfigurable volume of polymer-dispersed liquid crystal (PDLC) film. We demonstrate how temporary application of a voltage above PDLC's saturation threshold can completely randomize (i.e., decorrelate) its optical scattering potential in less than a second. A unique optical setup is built around this resettable PDLC film to non-electronically save many random cryptographic bits, with minimal error, over a period of one day. These random bits, stored at an unprecedented density (10 Gb per cubic millimeter), can then be erased and transformed into a new random key space in less than one second. Cryptographic applications of such a volumetric memory device include use as a crypto-currency wallet and as a source of resettable "fingerprints" for time-sensitive authentication.
Bit and Power Loading Approach for Broadband Multi-Antenna OFDM System
DEFF Research Database (Denmark)
Rahman, Muhammad Imadur; Das, Suvra S.; Wang, Yuanye;
2007-01-01
In this work, we have studied bit and power allocation strategies for multi-antenna assisted Orthogonal Frequency Division Multiplexing (OFDM) systems and investigated the impact of different rates of bit and power allocations on various multi-antenna diversity schemes. It is observed that, if we...
Bit-Based Joint Source-Channel Decoding of Huffman Encoded Markov Multiple Sources
Directory of Open Access Journals (Sweden)
Weiwei Xiang
2010-04-01
Full Text Available Multimedia transmission over time-varying channels such as wireless channels has recently motivated research on joint source-channel techniques. In this paper, we present a method for joint source-channel soft-decision decoding of Huffman-encoded multiple sources. By exploiting the a priori bit probabilities in multiple sources, the decoding performance is greatly improved. Compared with the single-source decoding scheme addressed by Marion Jeanne, the proposed technique is more practical for wideband wireless communications. Simulation results show our new method obtains substantial improvements with a minor increase in complexity. For two sources, the gain in SNR is around 1.5 dB using convolutional codes when the symbol-error rate (SER) reaches 10^-2, and around 2 dB using Turbo codes.
A Novel Error Correcting System Based on Product Codes for Future Magnetic Recording Channels
Van, Vo Tam
2012-01-01
We propose a novel construction of product codes for high-density magnetic recording based on binary low-density parity check (LDPC) codes and the binary image of Reed-Solomon (RS) codes. Moreover, two novel algorithms are proposed to decode the codes in the presence of both AWGN errors and scattered hard errors (SHEs). Simulation results show that at a bit error rate (BER) of approximately 10^-8, our method improves the error performance by approximately 1.9 dB compared with that of a hard-decision decoder of RS codes of the same length and code rate. For the mixed error channel including random noise and SHEs, the signal-to-noise ratio (SNR) is set at 5 dB and 150 to 400 SHEs are randomly generated. The bit error performance of the proposed product code shows a significant improvement over that of equivalent random LDPC codes or a serial concatenation of LDPC and RS codes.
Error Probability for Convolutional Codes with Weight Enumerating Function
Institute of Scientific and Technical Information of China (English)
Tianyi Zhang; Yongtao Liu
2013-01-01
The paper introduces the state reduction algorithm and the accelerated state reduction algorithm, which are used to compute the distance weight enumerator (transfer function) T[x,y] of convolutional codes. Computer simulation is then used to compare the upper bound on the bit error probability over an additive white Gaussian noise (AWGN) channel for previously found maximum free distance (MFD) codes and optimum distance spectrum (ODS) codes with rate 1/4 and overall constraint lengths 5 and 7, respectively. Finally, a method for searching for good convolutional codes is given.
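Given a code's distance weight enumerator, the standard use is a union bound on the bit error probability over AWGN: Pb <= sum over d >= dfree of c_d * Q(sqrt(2*d*R*Eb/N0)). The sketch below applies this to the classic rate-1/2, constraint-length-3 (5,7) code, whose bit-weight spectrum c_d = (d-4)*2^(d-5) follows from its transfer function; note this example code differs from the rate-1/4 codes compared in the paper.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_pb(ebn0_db, rate=0.5, dfree=5, dmax=60):
    """Union bound on bit error probability for the (5,7)_8 code,
    using the bit-weight spectrum c_d = (d - 4) * 2^(d - 5)."""
    g = 10 ** (ebn0_db / 10)
    return sum((d - 4) * 2 ** (d - 5) * q_func(math.sqrt(2 * d * rate * g))
               for d in range(dfree, dmax + 1))
```

The bound is dominated by the free-distance term at moderate-to-high Eb/N0, which is why distance spectra (and hence the transfer-function computation the paper accelerates) are the key figure of merit when searching for good codes.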
DEFF Research Database (Denmark)
Sabra, Jakob Borrits
We mourn our dead, publicly and privately, online and offline. Cemeteries, web memorials and social network sites make up parts of today's intricately woven and interrelated network of death, grief and memorialization practices [1]–[5]. Whether cut in stone or made of bits, graves, cemeteries......, memorials, monuments, websites and social networking services (SNS) are all alterable, controllable and adaptive. They represent a certain rationale contrary to the emotive state of mourning (e.g. gravesites function as both spaces of interment and places of spiritual and emotional recollection). Following...... the divide between 'states of rationale' and 'states of sentiment' and augment the loop of exchanges between the two. We switch interdependently between these states by a seemingly coincidental structure, when subjected to involuntary memories or episodic reminders afforded by trigger parameters...
Energy Technology Data Exchange (ETDEWEB)
Wu, Kesheng
2007-08-02
An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
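The query pattern behind bitmap indexes is easy to sketch: one bitmap per distinct value, and queries become bitwise logical operations over those bitmaps. The toy below uses plain Python integers as uncompressed bitmaps; FastBit would store each bitmap WAH-compressed and perform the same logical operations directly on the compressed words.

```python
def build_bitmap_index(values, cardinality):
    """One bitmap per distinct value; bit i of bitmaps[v] is set
    when row i holds value v (each bitmap stored as a Python int)."""
    bitmaps = [0] * cardinality
    for row, v in enumerate(values):
        bitmaps[v] |= 1 << row
    return bitmaps

def range_query(bitmaps, lo, hi):
    """Rows with lo <= value < hi: bitwise OR of the value bitmaps."""
    acc = 0
    for v in range(lo, hi):
        acc |= bitmaps[v]
    return [i for i in range(acc.bit_length()) if (acc >> i) & 1]

index = build_bitmap_index([3, 1, 4, 1, 5, 2], cardinality=6)
```

A multi-dimensional query simply ANDs the per-dimension result bitmaps, which is why fast bitwise operations on (compressed) bitmaps dominate query cost.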
Understanding BitTorrent Through Real Measurements
Mazurczyk, Wojciech; Kopiczko, Pawel
2011-01-01
In this paper the results of a BitTorrent measurement study are presented. Two sources of BitTorrent data were utilized: meta-data files that describe the content of resources shared by BitTorrent users, and the logs of one of the currently most popular BitTorrent clients, µTorrent. µTorrent is founded upon the rather newly released UDP-based µTP protocol, which is claimed to be more efficient than TCP-based clients. Experimental data have been collected for fifteen days from the po...
Energy Technology Data Exchange (ETDEWEB)
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14
Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
Directory of Open Access Journals (Sweden)
Rashid A. Fayadh
2014-01-01
Full Text Available When receiving high data rates with ultra-wideband (UWB) technology, many users have experienced multiple-user interference and intersymbol interference in the multipath reception technique. Structures have been proposed for implementing rake receivers to enhance their capabilities by reducing the bit error probability (Pe), thereby providing better performance for indoor and outdoor multipath receivers. As a result, several rake structures have been proposed in the past to reduce the number of resolvable paths that must be estimated and combined. To achieve this aim, we suggest two maximal ratio combiners based on the pulse sign separation technique, the pulse sign separation selective combiner (PSS-SC) and the pulse sign separation partial combiner (PSS-PC), to reduce complexity with fewer fingers and to improve the system performance. In the combiners, a comparator was added to compare the positive quantity of positive pulses and the negative quantity of negative pulses to decide whether the transmitted bit was 1 or 0. The Pe was derived by simulation for multipath environments with impulse radio time-hopping binary phase shift keying (TH-BPSK) modulation, and the results were compared with those of conventional selective combiners (C-SCs) and conventional partial combiners (C-PCs).
International Nuclear Information System (INIS)
A measure of the reliability of a transmission protocol is the likelihood that undetected errors in the transmitted data will occur. The author considers the effect of single bit errors on the error-detection mechanisms in the HDLC as defined in ISO Standard 3309. It is shown that the HDLC block synchronisation method is relatively vulnerable to the generation of undetected errors. Simple but effective methods of improvement within standard HDLC are to use fixed-length data bytes (e.g. of 8 bits), to give block length as part of the data, and to use a separate flag at the beginning and end of every block. (G.F.F.)
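The point above is subtle: the HDLC frame check sequence itself (a CRC-16 with polynomial x^16 + x^12 + x^5 + 1) detects every single-bit error inside a correctly delimited frame, and the vulnerability comes from flag-based block synchronisation, where a bit error in a flag shifts the frame boundaries. A simplified, non-reflected CRC sketch (an illustrative assumption; the real HDLC FCS additionally reflects bits and complements the result) demonstrates the first property:

```python
def crc16(data, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16 with the CCITT polynomial (simplified, MSB-first)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def all_single_bit_errors_detected(frame):
    """Flip each bit of the frame in turn; the CRC must change every time."""
    good = crc16(frame)
    for i in range(len(frame) * 8):
        corrupted = bytearray(frame)
        corrupted[i // 8] ^= 1 << (i % 8)
        if crc16(bytes(corrupted)) == good:
            return False
    return True
```

Since the generator polynomial has more than one term, a lone flipped bit always changes the CRC; it is only when a corrupted flag re-frames the bit stream that undetected errors of the kind analysed above can slip through.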
Directory of Open Access Journals (Sweden)
G. Vaikundam
2015-04-01
Full Text Available Beamforming is a signal processing technique used to focus the transmitted energy so that maximum energy is radiated toward the intended destination and the communication range is enhanced. Data rate improvement in transmit beamforming can be achieved with adaptive modulation. Though modulation adaptation is possible under zero-mean phase error, it is difficult under non-zero-mean Gaussian distributed phase error conditions. Phase errors occur due to channel estimation inaccuracies, delay in estimation, sensor drift, quantized feedback, etc., resulting in increased outage probability and bit error rate. Preprocessing of beamforming weights adjusted by the Sample Mean Estimate (SME) solves the problem of adaptive modulation. However, under large phase error variation, the SME method fails. Hence, in this paper, a Population Mean Estimate (PME) approach is proposed to resolve these drawbacks for a Rayleigh flat fading channel with white Gaussian noise. To correct any population mean error, a Least Mean Square correction algorithm is proposed; it is tested with up to 80% error in the PME, and the corrected error falls within 10% error. Simulation results for a distributed beamforming sensor array indicate that the proposed method performs better than the SME-based existing methods under worst-case phase error distribution.
Directory of Open Access Journals (Sweden)
Juan Mario Torres Nova
2010-05-01
Full Text Available Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes which are frequently used in radio communication systems; however, there is interdependence in the use of their benefits (spectral efficiency, low bit error rate, low intersymbol interference, etc.). Optimising one parameter creates problems for another; for example, the GMSK scheme succeeds in reducing bandwidth by introducing a Gaussian filter into an MSK (minimum shift keying) modulator in exchange for increased intersymbol interference in the system. The DBPSK scheme leads to lower error probability while occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver's bit delay when recovering a signal.
LENUS (Irish Health Repository)
Chadwick, Liam
2012-03-12
Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care. A number of deficiencies have been identified in the method. A new method called the Systems and Error Analysis Bundle for Health Care (SEABH) was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.
Directory of Open Access Journals (Sweden)
Arthur W Pightling
Full Text Available The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers
Pightling, Arthur W; Petronella, Nicholas; Pagotto, Franco
2014-01-01
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should
Factorization of a 512-bit RSA modulus
Cavallar, S.H.; Lioen, W.M.; Riele, H.J.J. te; Dodson, B.; Lenstra, A.K.; Montgomery, P.L.; Murphy, B.
2000-01-01
On August 22, 1999, we completed the factorization of the 512-bit, 155-digit number RSA-155 with the help of the Number Field Sieve factoring method (NFS). This is a new record for factoring general numbers. Moreover, 512-bit RSA keys are frequently used for the protection of electronic commerce-
Koide, Daiichi; Yanagisawa, Hitoshi; Tokumaru, Haruki; Nakamura, Shoichi; Ohishi, Kiyoshi; Inomata, Koichi; Miyazaki, Toshimasa
2004-07-01
We describe the effectiveness of feed-forward control using the zero phase error tracking method (ZPET-FF control) of the tracking servo for high-data-transfer-rate optical disk drives, as we are developing an optical disk system to replace the conventional professional videotape recorder for recording high-definition television signals for news gathering or producing broadcast contents. The optical disk system requires a high data transfer rate of more than 200 Mbps and a large recording capacity. Therefore, fast and precise track-following control is indispensable. Here, we compare the characteristics of ZPET-FF control with those of conventional feedback control or repetitive control. Experimental results show that ZPET-FF control is more precise than feedback control, and the residual tracking error is kept within a tolerance of 10 nm at a linear velocity of 26 m/s in the experimental setup using a blue-violet laser optical head and high-density media. The feasibility of achieving precise ZPET-FF control at 15000 rpm is also presented.
Steganography forensics method for detecting least significant bit replacement attack
Wang, Xiaofeng; Wei, Chengcheng; Han, Xiao
2015-01-01
We present an image forensics method to detect least significant bit replacement steganography attack. The proposed method provides fine-grained forensics features by using the hierarchical structure that combines pixels correlation and bit-planes correlation. This is achieved via bit-plane decomposition and difference matrices between the least significant bit-plane and each one of the others. Generated forensics features provide the susceptibility (changeability) that will be drastically altered when the cover image is embedded with data to form a stego image. We developed a statistical model based on the forensics features and used least square support vector machine as a classifier to distinguish stego images from cover images. Experimental results show that the proposed method provides the following advantages. (1) The detection rate is noticeably higher than that of some existing methods. (2) It has the expected stability. (3) It is robust for content-preserving manipulations, such as JPEG compression, adding noise, filtering, etc. (4) The proposed method provides satisfactory generalization capability.
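The bit-plane decomposition step described above can be sketched as follows. The feature definition here is a hypothetical simplification of the paper's hierarchical forensics features, kept only to show how LSB replacement perturbs the relation between the LSB plane and the higher bit-planes:

```python
import numpy as np

def bitplane_features(img):
    """Mean absolute difference between the LSB plane and each higher
    bit-plane (a simplified stand-in for the paper's difference-matrix
    features)."""
    planes = [(img >> k) & 1 for k in range(8)]   # bit-plane decomposition
    lsb = planes[0].astype(int)
    return [np.mean(np.abs(lsb - p.astype(int))) for p in planes[1:]]

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# LSB replacement attack: overwrite every LSB with a random message bit
stego = (cover & 0xFE) | rng.integers(0, 2, size=cover.shape, dtype=np.uint8)

print(bitplane_features(cover))
print(bitplane_features(stego))
```

In the full method such features feed a least-squares SVM classifier; the sketch only shows the decomposition and difference computation they are built on.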
PERBANDINGAN APLIKASI MENGGUNAKAN METODE CAMELLIA 128 BIT KEY DAN 256 BIT KEY
Directory of Open Access Journals (Sweden)
Lanny Sutanto
2014-01-01
Full Text Available The rapid development of the Internet today makes it easy to exchange data, which leads to a high risk of data piracy. One way to secure data is the Camellia cipher. Camellia is known as a method whose encryption and decryption times are fast. The Camellia method has three key sizes: 128-bit, 192-bit, and 256-bit. This application was created using the C++ programming language with a Visual Studio 2010 GUI. This research compares the smallest and largest key sizes on files with the extensions .txt, .doc, .docx, .jpg, .mp4, .mkv and .flv. The application compares the time and the level of security when using a 128-bit key versus a 256-bit key. The comparison is done by comparing the avalanche-effect security values of the 128-bit key and the 256-bit key.
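The avalanche effect used for the comparison above measures how many output bits flip when a single input bit is flipped (ideally about 50%). A minimal sketch of that measurement follows; since Camellia is not in the Python standard library, SHA-256 is used here purely as a stand-in primitive:

```python
import hashlib

def avalanche(msg: bytes, bit: int) -> float:
    """Fraction of output bits that change when one input bit is flipped.
    SHA-256 stands in for the cipher under test (Camellia is not in the
    standard library)."""
    flipped = bytearray(msg)
    flipped[bit // 8] ^= 1 << (bit % 8)          # flip one input bit
    a = hashlib.sha256(msg).digest()
    b = hashlib.sha256(bytes(flipped)).digest()
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return diff / (len(a) * 8)

print(avalanche(b"sixteen byte msg", 0))  # a good primitive gives ~0.5
```

To reproduce the paper's comparison one would substitute Camellia encryptions keyed with 128-bit and 256-bit keys for the hash calls and average over many inputs and bit positions.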
Energy Technology Data Exchange (ETDEWEB)
Noyes, H.P.
1990-01-29
We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc² in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free-particle wave functions, taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are born "collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc² our wave functions can be approximated by solutions of the free-particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound and resonant state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G_{πN}²)² = (2m_N/m_π)² − 1. 21 refs., 1 fig.
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...... directly. We implement both solutions as well as a backtracking parser and perform benchmark experiments to gauge their practical performance. We observe that our DFA-based solution can be significantly more time and space efficient than the Frisch-Cardelli algorithm due to its sharing of DFA nodes......
BitTorrent Request Message Models
Erman, David; Popescu, Adrian
2005-01-01
BitTorrent, a replicating Peer-to-Peer (P2P) file sharing system, has become extremely popular over the last years. According to Cachelogic, the BitTorrent traffic volume has increased from 26% to 52% of the total P2P traffic volume during the first half of 2004. This paper reports on new results obtained on modelling and analysis of BitTorrent traffic collected at Blekinge Institute of Technology (BTH) as well as a local Internet Service Provider (ISP). In particular, we report on new reques...
Reinforcement Learning in BitTorrent Systems
Izhak-Ratzin, Rafit; Park, Hyunggon; van der Schaar, Mihaela
2010-01-01
Recent research efforts have shown that the popular BitTorrent protocol does not provide fair resource reciprocation and may allow free-riding. In this paper, we propose a BitTorrent-like protocol that replaces the peer selection mechanisms in the regular BitTorrent protocol with a novel reinforcement learning (RL) based mechanism. Due to the inherent operation of P2P systems, which involves repeated interactions among peers over a long period of time, the peers can efficiently identify free-r...
Adaptive Subcarrier and Bit Allocation for Downlink OFDMA System with Proportional Fairness
Directory of Open Access Journals (Sweden)
Sudhir B. Lande
2011-11-01
Full Text Available This paper investigates adaptive subcarrier and bit allocation algorithms for OFDMA systems. To minimize the overall transmitted power, we propose a novel adaptive subcarrier and bit allocation algorithm based on channel state information (CSI) and quality state information (QSI). A suboptimal approach that separately performs subcarrier allocation and bit loading is proposed. It is shown that the proposed algorithm obtains a near-optimal solution with low complexity compared to other conventional algorithms. We study the problem of finding an optimal subcarrier and power allocation strategy for downlink communication to multiple users in an OFDMA-based wireless system. Assuming knowledge of the instantaneous channel gains for all users, we propose a multiuser OFDMA subcarrier and bit allocation algorithm to minimize the total transmit power. This is done by assigning each user a set of subcarriers and by determining the number of bits and the transmit power level for each subcarrier. The objective is to minimize the total transmitted power over the entire network while satisfying the application-layer and physical-layer requirements. We formulate this as a constrained optimization problem and present centralized algorithms. The simulation results show that our approach yields an efficient assignment of subcarriers and transmitter power levels in terms of the energy required for transmitting each bit of information. We also present a bit-loading algorithm for allocating subcarriers and bits in order to satisfy the rate requirements of the links.
Krone, Stefan; Fettweis, Gerhard
2013-01-01
1-bit analog-to-digital conversion is very attractive for low-complexity communications receivers. A major drawback is, however, the small spectral efficiency when sampling at symbol rate. This can be improved through oversampling by exploiting the signal distortion caused by the transmission channel. This paper analyzes the achievable data rate of band-limited communications channels that are subject to additive noise and inter-symbol-interference with 1-bit quantization and oversampling at ...
An efficient bit-loading algorithm with peak BER constraint for the band-extended PLC
Maiga, Ali; Baudais, Jean-Yves; Hélard, Jean-François
2009-01-01
ISBN: 978-1-4244-2936-3. International audience. Powerline communications (PLC) have become a viable local area network (LAN) solution for in-home networks. In order to achieve high bit rates over powerline, the current technology bandwidth is increased up to 100 MHz within the European project OMEGA. In this paper, an efficient bit-loading algorithm with a peak BER constraint is proposed. This algorithm tries to maximize the overall data rate based on linear precoded discrete multitone (LP-...
A single-ended 10-bit 200 kS/s 607 μW SAR ADC with an auto-zeroing offset cancellation technique
Weiru, Gu; Yimin, Wu; Fan, Ye; Junyan, Ren
2015-10-01
This paper presents a single-ended 8-channel 10-bit 200 kS/s 607 μW synchronous successive approximation register (SAR) analog-to-digital converter (ADC) using HLMC 55 nm low-leakage (LL) CMOS technology with a 3.3 V/1.2 V supply voltage. In conventional binary-encoded SAR ADCs the total capacitance grows exponentially with resolution. In this paper a CR hybrid DAC is adopted to reduce both capacitance and core area. The capacitor array resolves 4 bits and the other 6 bits are resolved by the resistor array. The 10-bit data is acquired by thermometer encoding to reduce the probability of the DNL errors that are typically present in binary-weighted architectures. This paper uses an auto-zeroing offset cancellation technique that can reduce the offset to 0.286 mV. The prototype 10-bit SAR ADC was fabricated in HLMC 55 nm CMOS technology with a core area of 167 × 87 μm². It achieves a sampling rate of 200 kS/s and a low power dissipation of 607 μW, operating at a 3.3 V analog supply voltage and a 1.2 V digital supply voltage. At an input frequency of 10 kHz the signal-to-noise-and-distortion ratio (SNDR) is 60.1 dB and the spurious-free dynamic range (SFDR) is 68.1 dB. The measured DNL is +0.37/-0.06 LSB and INL is +0.58/-0.22 LSB. Project supported by the National Science and Technology Support Program of China (No. 2012BAI13B07) and the National Science and Technology Major Project of China (No. 2012ZX03001020-003).
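The successive-approximation principle behind a SAR ADC is a per-bit binary search against an internal DAC. A minimal idealized sketch follows (it models neither the CR hybrid DAC split nor the offset cancellation described in the paper; the reference voltage and resolution are illustrative):

```python
def sar_convert(vin, vref=1.2, bits=10):
    """Idealized SAR conversion: trial-set each bit from MSB to LSB and
    keep it if the DAC output does not exceed the input."""
    code = 0
    for k in range(bits - 1, -1, -1):
        trial = code | (1 << k)                  # tentatively set bit k
        if vin >= vref * trial / (1 << bits):    # comparator decision
            code = trial                         # keep the bit
    return code

print(sar_convert(0.6))   # -> 512 (half of the 1.2 V full scale)
```

Each conversion takes exactly one comparator decision per bit, which is why SAR architectures suit low-power, moderate-rate converters like the 200 kS/s design above.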
A single-ended 10-bit 200 kS/s 607 μW SAR ADC with an auto-zeroing offset cancellation technique
International Nuclear Information System (INIS)
This paper presents a single-ended 8-channel 10-bit 200 kS/s 607 μW synchronous successive approximation register (SAR) analog-to-digital converter (ADC) using HLMC 55 nm low-leakage (LL) CMOS technology with a 3.3 V/1.2 V supply voltage. In conventional binary-encoded SAR ADCs the total capacitance grows exponentially with resolution. In this paper a CR hybrid DAC is adopted to reduce both capacitance and core area. The capacitor array resolves 4 bits and the other 6 bits are resolved by the resistor array. The 10-bit data is acquired by thermometer encoding to reduce the probability of the DNL errors that are typically present in binary-weighted architectures. This paper uses an auto-zeroing offset cancellation technique that can reduce the offset to 0.286 mV. The prototype 10-bit SAR ADC was fabricated in HLMC 55 nm CMOS technology with a core area of 167 × 87 μm². It achieves a sampling rate of 200 kS/s and a low power dissipation of 607 μW, operating at a 3.3 V analog supply voltage and a 1.2 V digital supply voltage. At an input frequency of 10 kHz the signal-to-noise-and-distortion ratio (SNDR) is 60.1 dB and the spurious-free dynamic range (SFDR) is 68.1 dB. The measured DNL is +0.37/−0.06 LSB and INL is +0.58/−0.22 LSB. (paper)
An improved adaptive bit allocation algorithm for OFDM systems
Institute of Scientific and Technical Information of China (English)
魏巍; 安文东
2014-01-01
An adaptive bit allocation algorithm based on the Hughes-Hartogs algorithm is proposed in this paper to remedy the greedy algorithm's main shortcoming, its large number of iterations. Under constraints on the bit error rate and the total number of transmitted bits, the improved algorithm first uses the Chow algorithm to allocate some of the bits, and then uses the greedy algorithm to allocate the remaining bits to each subcarrier. When minimizing the total power with this algorithm, the number of iterations is significantly smaller than with the greedy algorithm. Computer simulation results show that, at a fixed transmission rate, the number of iterations of the improved algorithm is 7.4%-34% of that of the greedy algorithm, while its performance closely approaches that of the greedy algorithm.
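The greedy (Hughes-Hartogs-style) step that both the original and the improved algorithm rely on can be sketched as follows; the incremental-power model below is a simplified textbook form (the QAM gap factor is omitted), not the paper's exact cost function:

```python
import heapq

def greedy_bitload(gains, total_bits, noise=1.0):
    """Give each successive bit to the subchannel where it costs the least
    extra power. Going from b to b+1 bits on a subchannel with power gain g
    costs roughly (2^(b+1) - 2^b) * noise / g (gap factor omitted)."""
    bits = [0] * len(gains)
    heap = [((2**1 - 2**0) * noise / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        cost, i = heapq.heappop(heap)            # cheapest next bit
        bits[i] += 1
        b = bits[i]
        heapq.heappush(heap, ((2**(b + 1) - 2**b) * noise / gains[i], i))
    return bits

print(greedy_bitload([4.0, 1.0, 0.25], total_bits=6))  # -> [4, 2, 0]
```

Each iteration places one bit, which is exactly why a pure greedy scheme needs as many iterations as bits; the paper's improvement pre-allocates most bits with the Chow algorithm so that only the residual bits go through this loop.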
Conversion of an 8-bit to a 16-bit Soft-core RISC Processor
Directory of Open Access Journals (Sweden)
Ahmad Jamal Salim
2013-03-01
Full Text Available The demand for 8-bit processors nowadays is still going strong despite efforts by manufacturers to bring higher-end microcontroller solutions to the mass market. A low-end processor offers a simple, low-cost and fast solution, especially for I/O application development in embedded systems. However, due to architectural constraints, complex calculations cannot be performed efficiently on an 8-bit processor. This paper presents the conversion of an 8-bit to a 16-bit Reduced Instruction Set Computer (RISC) processor on a soft-core reconfigurable platform in order to extend its capability in handling larger data sets, thus enabling intensive calculation processes. While the conversion expands the data bus width to 16 bits, it maintains the simple architecture design of an 8-bit processor. The expansion also provides more room for improving the processor's performance. The modified architecture is successfully simulated in CPUSim together with its new instruction set architecture (ISA). A Xilinx Virtex-6 platform is used to execute and verify the architecture. Results show that the modified 16-bit RISC architecture requires only 17% more register slices in the Field Programmable Gate Array (FPGA) implementation, a slight increase compared to the original 8-bit RISC architecture. A test program containing instructions that handle 16-bit data is also simulated and verified. As the 16-bit architecture is described as a soft core, further modifications could be performed to customize the architecture for any specific application.
Improved Design of Unequal Error Protection LDPC Codes
Directory of Open Access Journals (Sweden)
Sandberg Sara
2010-01-01
Full Text Available We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.
Factorization of a 768-bit RSA modulus
Kleinjung, T; Aoki, K.; Franke, J.; Lenstra, A.K.; Thomee, E; Bos, Joppe,; Gaudry, P.; Kruppa, Alexander; Montgomery, P. L.; Osvik, D.A.; Riele, te, H.; Timofeev, Andrey; Zimmermann, P; Rabin, T.
2010-01-01
The original publication is available at www.springerlink.com. International audience. This paper reports on the factorization of the 768-bit number RSA-768 by the number field sieve factoring method and discusses some implications for RSA.
A Simple Quantum Bit Commitment Protocol
Sheikholeslam, S Arash
2011-01-01
In this paper, we introduce a new quantum bit commitment method which is secure against entanglement attacks. Some cheating strategies are discussed and shown to be ineffective against the proposed method.
FastBit: Interactively Searching Massive Data
Energy Technology Data Exchange (ETDEWEB)
Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat,; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming
2009-06-23
As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
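The core idea named above, answering queries by combining per-value bitmaps, can be sketched in a few lines. This is an illustrative equality-encoded index only; FastBit itself layers WAH bitmap compression, binning and multi-level encodings on top of this idea:

```python
from collections import defaultdict

class BitmapIndex:
    """Toy equality-encoded bitmap index: one bitmap (stored as a Python
    int) per distinct value, bit r set when row r holds that value."""

    def __init__(self, values):
        self.n = len(values)
        self.bitmaps = defaultdict(int)
        for row, v in enumerate(values):
            self.bitmaps[v] |= 1 << row

    def query(self, *wanted):
        """Rows whose value is in `wanted`: OR the per-value bitmaps,
        then read off the set bits."""
        bm = 0
        for v in wanted:
            bm |= self.bitmaps.get(v, 0)
        return [r for r in range(self.n) if (bm >> r) & 1]

idx = BitmapIndex(["H", "He", "H", "C", "He"])
print(idx.query("H", "C"))   # -> [0, 2, 3]
```

Because the expensive part of a query reduces to bitwise AND/OR over precomputed bitmaps, structured (SQL-style) range and membership predicates evaluate very quickly, which is the property the article attributes to FastBit.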
BitTorrent's Mainline DHT Security Assessment
Timpanaro, Juan Pablo; Cholez, Thibault; Chrisment, Isabelle; Festor, Olivier
2011-01-01
BitTorrent is a widely deployed P2P file sharing protocol, extensively used to distribute digital content and software updates, among others. Recent actions against torrent and tracker repositories have fostered the move towards a fully distributed solution based on a distributed hash table to support both torrent search and tracker implementation. In this paper we present a security study of the main decentralized tracker in BitTorrent, commonly known as the Mainline DHT. We show that the lac...
Optimal bounds for quantum bit commitment
Chailloux, André
2011-01-01
Bit commitment is a fundamental cryptographic primitive with numerous applications. Quantum information allows for bit commitment schemes in the information theoretic setting where no dishonest party can perfectly cheat. The previously best-known quantum protocol by Ambainis achieved a cheating probability of at most 3/4[Amb01]. On the other hand, Kitaev showed that no quantum protocol can have cheating probability less than 1/sqrt{2} [Kit03] (his lower bound on coin flipping can be easily extended to bit commitment). Closing this gap has since been an important and open question. In this paper, we provide the optimal bound for quantum bit commitment. We first show a lower bound of approximately 0.739, improving Kitaev's lower bound. We then present an optimal quantum bit commitment protocol which has cheating probability arbitrarily close to 0.739. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2 + eps in order to achieve a quantum bit commitment protocol with ...
Asymptotic Properties of One-Bit Distributed Detection with Ordered Transmissions
Braca, Paolo; Matta, Vincenzo
2011-01-01
Consider a sensor network made of remote nodes connected to a common fusion center. In a recent work, Blum and Sadler [1] propose the idea of ordered transmissions (sensors with more informative samples deliver their messages first) and prove that optimal detection performance can be achieved using only a subset of the total messages. Taking this approach to one extreme, we show that just a single delivery allows making the detection errors as small as desired for a sufficiently large network size: a one-bit detection scheme can be asymptotically consistent. The transmission ordering is based on the modulus of some local statistic (MO system). We derive analytical results proving the asymptotic consistency and, for the particular case in which the local statistic is the log-likelihood (ℓ-MO system), we also obtain a bound on the error convergence rate. All the theorems are proved under the general setup of a random number of sensors. Computer experiments corroborate the analysis and address typical examples of...
Institute of Scientific and Technical Information of China (English)
孙宇; 李纯莲; 钟经华
2016-01-01
Braille error tolerance includes two aspects: the scheme error tolerance rate, corresponding to the Braille scheme, and the spelling error tolerance rate, corresponding to readers. In order to reasonably evaluate the spelling efficiency of the Chinese Braille scheme and further improve it, this paper presents the concept of scheme error tolerance rate and analyzes it statistically. The results show that the error tolerance rate is objectively necessary and controllable, and that a Braille scheme with a greater error tolerance rate is easier to use and popularize. Finally, an optimization function for the scheme error tolerance rate is given, which is helpful for improving the current Braille scheme. The paper also discusses the influence of readers' psychological factors on Braille error tolerance during reading, and reveals the relations of mutual influence, mutual promotion and mutual compensation between the scheme error tolerance rate of the Braille scheme and the spelling error tolerance rate of Braille readers.
Influence of pseudorandom bit format on the direct modulation performance of semiconductor lasers
Indian Academy of Sciences (India)
Moustafa Ahmed; Safwat W Z Mahmoud; Alaa A Mohmoud
2012-12-01
This paper investigates the direct gigabit modulation characteristics of semiconductor lasers using the return to zero (RZ) and non-return to zero (NRZ) formats. The modulation characteristics include the frequency chirp, eye diagram, and turn-on jitter (TOJ). The differences in the relative contributions of the intrinsic noise of the laser and the pseudorandom bit-pattern effect to the modulation characteristics are presented. We introduce an approximate estimation to the transient properties that control the digital modulation performance, namely, the modulation bit rate and the minimum (setting) bit rate required to yield a modulated laser signal free from the bit pattern effect. The results showed that the frequency chirp increases with the increase of the modulation current under both RZ and NRZ formats, and decreases remarkably with the increase of the bias current. The chirp is higher under the RZ modulation format than under the NRZ format. When the modulation bit rate is higher than the setting bit rate of the relaxation oscillation, the laser exhibits enhanced TOJ and the eye diagram is partially closed. TOJ decreases with the increase of the bias and/or modulation current for both formats of modulation.
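The RZ/NRZ distinction discussed above is easy to see in the drive waveforms themselves. A minimal sketch follows (the samples-per-bit count and the half-slot RZ duty cycle are illustrative choices, not parameters from the paper):

```python
import numpy as np

def modulate(bits, fmt="NRZ", samples_per_bit=8):
    """Baseband drive waveform: NRZ holds the bit level for the whole slot;
    RZ returns to zero in the second half of each '1' slot."""
    wave = []
    for b in bits:
        if fmt == "NRZ":
            wave += [b] * samples_per_bit
        else:  # RZ with 50% duty cycle
            half = samples_per_bit // 2
            wave += [b] * half + [0] * (samples_per_bit - half)
    return np.array(wave)

bits = [1, 0, 1, 1]
print(modulate(bits, "NRZ"))
print(modulate(bits, "RZ"))
```

The extra on/off transitions in the RZ waveform are one intuition for the paper's finding that frequency chirp is higher under RZ than under NRZ modulation.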
Das, Bikramaditya; 10.5121/jgraphhoc.2010.2104
2010-01-01
For high data rate ultra-wideband communication systems, a performance comparison of Rake, MMSE and Rake-MMSE receivers is attempted in this paper. Further, a detailed study of Rake-MMSE time-domain equalizers is carried out, taking into account all the important parameters such as the effect of the number of Rake fingers and equalizer taps on the error rate performance. This receiver combats inter-symbol interference by taking advantage of both the Rake and equalizer structures. The bit error rate performance is investigated using MATLAB simulation on IEEE 802.15.3a defined UWB channel models. Simulation results show that the bit error rate of the Rake-MMSE receiver is much better than that of the Rake receiver and the MMSE equalizer. A study on non-line-of-sight indoor channel models illustrates that the bit error rate performance of Rake-MMSE (both LE and DFE) improves for the CM3 model with smaller spread compared to the CM4 channel model. It is indicated that for a MMSE equalizer operating at low to medium SNR values, the number o...
Bits extraction for palmprint template protection with Gabor magnitude and multi-bit quantization
Mu, Meiru; Shao, Xiaoying; Ruan, QiuQi; Spreeuwers, Luuk; Veldhuis, Raymond
2013-01-01
In this paper, we propose a method of fixed-length binary string extraction (denoted by LogGM_DROBA) from low-resolution palmprint images for developing palmprint template protection technology. In order to extract reliable (stable and discriminative) bits, multi-bit equal-probability-interval quanti
Low complexity bit loading algorithm for OFDM system
Institute of Scientific and Technical Information of China (English)
Yang Yu; Sha Xuejun; Zhang Zhonghua
2006-01-01
A new approach to bit loading for orthogonal frequency division multiplexing (OFDM) systems is proposed. The bit-loading algorithm assigns bits to different subchannels in order to minimize the transmit energy. In the algorithm, most bits are first allocated to each subchannel according to the channel condition, the Shannon formula and the QoS requirements of the user; then the residual bits are allocated to the subchannels bit by bit. In this way the algorithm is efficient while the calculation is less complex. This is the first time bits are loaded with the scale following the Shannon formula, and the algorithm is of O(4N) complexity.
Institute of Scientific and Technical Information of China (English)
薛留根; 蔡国梁
2000-01-01
In this paper, the normal approximation rate and the random weighting approximation rate of error distribution of the kernel estimator of conditional density function f(y|x) are studied. The results may be used to construct the confidence interval of f(y|x).
A General Rate K/N Convolutional Decoder Based on Neural Networks with Stopping Criterion
Directory of Open Access Journals (Sweden)
Johnny W. H. Kao
2009-01-01
Full Text Available A novel algorithm for decoding a general rate K/N convolutional code based on a recurrent neural network (RNN) is described and analysed. The algorithm is introduced by outlining the mathematical models of the encoder and decoder. A number of strategies for optimising the iterative decoding process are proposed, and a simulator was also designed in order to compare the Bit Error Rate (BER) performance of the RNN decoder with the conventional decoder based on the Viterbi Algorithm (VA). The simulation results show that this novel algorithm can achieve the same bit error rate with a lower decoding complexity. Most importantly, this algorithm allows parallel signal processing, which increases the decoding speed and accommodates higher data rate transmission. These characteristics are inherited from the neural network structure of the decoder and the iterative nature of the algorithm, which allow it to outperform the conventional VA.
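For concreteness, a minimal convolutional encoder shows the structure that such decoders invert (the rate-1/2, constraint-length-3 code with octal generators (7,5) is a standard textbook choice, not necessarily the code used in the paper):

```c
#include <stddef.h>

/* Rate-1/2 convolutional encoder, constraint length 3, generators 7 and 5
 * (octal) -- an illustrative textbook code.  Each input bit produces two
 * output bits from a two-stage shift register. */
void conv_encode(const int *in, size_t n, int *out) {
    int s1 = 0, s2 = 0;               /* shift-register state */
    for (size_t i = 0; i < n; i++) {
        int b = in[i];
        out[2*i]     = b ^ s1 ^ s2;   /* generator 111 (7 octal) */
        out[2*i + 1] = b ^ s2;        /* generator 101 (5 octal) */
        s2 = s1;                      /* shift the register */
        s1 = b;
    }
}
```

A Viterbi (or RNN) decoder then searches the trellis defined by this state machine for the input sequence whose encoded output is closest to the received bits.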
Institute of Scientific and Technical Information of China (English)
Wu Xiaojun; Yin Qinye; Zeng Ming; Li Xing; Wang Jilong
2004-01-01
In very high data-rate wireless application scenarios, Multicarrier code-division multiple access (MC-CDMA) systems that include a Serial-to-parallel (S/P) converting operation are more applicable. We refer to these as modified MC-CDMA systems. In this paper, we focus on the blind channel estimation problem for these modified MC-CDMA systems on the uplink. Because each subcarrier in multicarrier communications can be regarded as a channel, the modified MC-CDMA system accordingly becomes a multichannel system. Based on this understanding, we model the multiuser modified MC-CDMA system as a Multiple-input multiple-output (MIMO) system. Then, based on the subspace decomposition technique, we derive a novel blind estimation scheme for the uplink channels of multiuser modified MC-CDMA systems. Furthermore, based on perturbation techniques, we derive an analytical approximation of the Mean-squared error (MSE) of this blind channel estimation scheme. Extensive computer simulations illustrate the performance of the proposed algorithm, and the simulation results also verify the tightness of the MSE approximation.
Factorization of a 512-bit RSA modulus
Cavallar, S.H.; Lioen, W.M.; te Riele, H.; Dodson, B.; Lenstra, A.K.; Montgomery, P.L.; Murphy, B.
2000-01-01
On August 22, 1999, we completed the factorization of the 512-bit, 155-digit number RSA-155 with the help of the Number Field Sieve factoring method (NFS). This is a new record for factoring general numbers. Moreover, 512-bit RSA keys are frequently used for the protection of electronic commerce, at least outside the USA, so this factorization represents a breakthrough in research on RSA-based systems. The previous record, factoring the 140-digit number RSA-140, was established on Feb...
Fixed-Length Error Resilient Code and Its Application in Video Coding
Institute of Scientific and Technical Information of China (English)
Fan Chen; Yang Ming; Cui Huijuan; Tang Kun
2003-01-01
Since popular entropy coding techniques such as Variable-length codes (VLC) tend to cause severe error propagation in noisy environments, an error resilient entropy coding technique named Fixed-length error resilient code (FLERC) is proposed to mitigate the problem. It is found that even for a non-stationary source, the probability of error propagation can be minimized by introducing intervals into the codeword space of the fixed-length codes. FLERC is particularly suitable for the entropy coding of video signals in error-prone environments, where a little distortion is tolerable but severe error propagation would lead to fatal consequences. An iterative construction algorithm for FLERC is presented in this paper. In addition, FLERC is adopted instead of VLC as the entropy coder of the DCT coefficients in the H.263++ Data partitioning slice (DPS) mode, and tested on noisy channels. The simulation results show that this scheme outperforms H.263++ combined with FEC when channel noise is extensive, since the error propagation is effectively suppressed by FLERC. Moreover, it is observed that the reconstructed video quality degrades gracefully as the bit error rate increases.
Robust Face Recognition using Voting by Bit-plane Images based on Sparse Representation
Directory of Open Access Journals (Sweden)
Dongmei Wei
2015-08-01
Full Text Available Plurality voting is widely employed as a combination strategy in pattern recognition. As a recently proposed technique, sparse representation based classification codes the query image as a sparse linear combination of all training images and classifies the query sample by exploiting the class representation error. In this paper, an improved face recognition approach using sparse representation and plurality voting based on binary bit-plane images is proposed. After equalization, gray images are decomposed into eight bit-plane images, and sparse representation based classification is applied separately to the five bit-plane images that carry the most discriminative information. Finally, the identity of the query image is decided by voting over the five identities so obtained. Experimental results show that the proposed approach is preferable both in recognition accuracy and in recognition speed.
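The bit-plane decomposition step can be sketched as follows (a minimal sketch; the function name and the flat 8 x npix output layout are assumptions for illustration, not taken from the paper):

```c
#include <stdint.h>
#include <stddef.h>

/* Decompose an 8-bit grayscale image into its eight binary bit-planes.
 * planes[k * npix + i] holds bit k (0 = least significant) of pixel i.
 * The approach above then classifies on the more significant planes,
 * which carry most of the discriminative structure. */
void bit_planes(const uint8_t *img, size_t npix, uint8_t *planes) {
    for (int k = 0; k < 8; k++)
        for (size_t i = 0; i < npix; i++)
            planes[(size_t)k * npix + i] = (img[i] >> k) & 1u;
}
```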
Directory of Open Access Journals (Sweden)
Bikramaditya Das
2010-03-01
Full Text Available For high data rate ultra wideband communication systems, a performance comparison of Rake, MMSE and Rake-MMSE receivers is attempted in this paper. Further, a detailed study of Rake-MMSE time domain equalizers is carried out, taking into account important parameters such as the effect of the number of Rake fingers and equalizer taps on the error rate performance. This receiver combats inter-symbol interference by taking advantage of both the Rake and equalizer structures. The bit error rate performances are investigated using MATLAB simulation on IEEE 802.15.3a defined UWB channel models. Simulation results show that the bit error probability of the Rake-MMSE receiver is much better than that of the Rake receiver and the MMSE equalizer. Study of non-line-of-sight indoor channel models illustrates that the bit error rate performance of Rake-MMSE (both LE and DFE) improves for the CM3 model, with smaller delay spread, compared to the CM4 channel model. It is indicated that for a MMSE equalizer operating at low to medium SNR values, the number of Rake fingers is the dominant factor to improve system performance, while at high SNR values the number of equalizer taps plays a more significant role in reducing the error rate.
Lan, Ching-Fu; Xiong, Zixiang; Narayanan, Krishna R
2006-07-01
The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases. PMID:16830898
RELAY ASSISTED TRANSMISSION WITH BIT-INTERLEAVED CODED MODULATION
Institute of Scientific and Technical Information of China (English)
Meng Qingmin; You Xiaohu; John Boyer
2006-01-01
We investigate an adaptive cooperative protocol in a Two-Hop-Relay (THR) wireless system that combines the following: (1) adaptive relaying based on repetition coding; (2) single or two transmit antennas and one receive antenna for all nodes, each using a high order constellation; (3) Bit-Interleaved Coded Modulation (BICM). We focus on simple decoded relaying (i.e., no error correction at the relay node) and simple signal quality thresholds for relaying. The impact of these two thresholds on system performance is then studied. Our results suggest that, compared with the traditional direct-transmission scheme, the proposed scheme can increase average throughput in the high spectral efficiency region with low implementation cost at the relay.
Quantum error-correcting codes need not completely reveal the error syndrome
Shor, P W; Shor, Peter W; Smolin, John A
1996-01-01
Quantum error-correcting codes so far proposed have not been able to work in the presence of noise levels which introduce greater than one bit of entropy per qubit sent through the quantum channel. This has been because all such codes either find the complete error syndrome of the noise or trivially map onto such codes. We describe a code which does not find complete information on the noise and can be used for reliable transmission of quantum information through channels which introduce more than one bit of entropy per transmitted bit. In the case of the depolarizing "Werner" channel our code can be used in a channel of fidelity 0.8096, while the best existing code worked only down to 0.8107.
Algorithm of 32-bit Data Transmission Among Microcontrollers Through an 8-bit Port
Directory of Open Access Journals (Sweden)
Midriem Mirdanies
2015-12-01
Full Text Available This paper proposes an algorithm for 32-bit data transmission among microcontrollers through one 8-bit port. The method was motivated by the need to overcome the I/O limitations of microcontrollers and to meet the requirement of transmitting data wider than 10 bits. In this paper, the use of an 8-bit port is optimized for 32-bit data transmission with unsigned long integer, long integer, and float types. The 32-bit data is decomposed into binary form and sent through the 8-bit port as a series of bytes by the transmitter microcontroller. At the receiver microcontroller, the binary data received through the 8-bit port is reassembled into 32 bits of the same data type. The algorithm has been implemented and tested in C on an ATMega32A microcontroller. Experiments have been done using two microcontrollers as well as four microcontrollers in parallel, tree, and series connections. Based on the experiments, the transmitted data is accurately received without data loss. Maximum transmission times between two microcontrollers for unsigned long integer, long integer, and float are 630 μs, 1,880 μs, and 7,830 μs, respectively. Maximum transmission times using four microcontrollers in parallel connection are the same as those using two microcontrollers, while in series connection they are 1,930 μs for unsigned long integer, 5,640 μs for long integer, and 23,540 μs for float. The maximum transmission times of the tree connection are close to those of the parallel connection. These results show that the algorithm works well.
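The byte-level framing described above can be sketched as follows (a minimal sketch; `send32`/`recv32`, the LSB-first byte order, and the replacement of actual port I/O by a byte buffer are all illustrative assumptions, not details from the paper):

```c
#include <stdint.h>
#include <string.h>

/* Split a 32-bit value into four bytes for an 8-bit port, LSB first
 * (byte order is an assumption; the real port write is replaced by a
 * buffer here). */
void send32(uint32_t value, uint8_t port_bytes[4]) {
    for (int i = 0; i < 4; i++)
        port_bytes[i] = (uint8_t)(value >> (8 * i));
}

/* Reassemble the four received bytes into the original 32-bit value. */
uint32_t recv32(const uint8_t port_bytes[4]) {
    uint32_t v = 0;
    for (int i = 0; i < 4; i++)
        v |= (uint32_t)port_bytes[i] << (8 * i);
    return v;
}

/* Floats travel the same path via bitwise reinterpretation. */
uint32_t float_bits(float f) { uint32_t u; memcpy(&u, &f, 4); return u; }
float bits_float(uint32_t u) { float f; memcpy(&f, &u, 4); return f; }
```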
On the BER and capacity analysis of MIMO MRC systems with channel estimation error
Yang, Liang
2011-10-01
In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of a multiple-input multiple-output (MIMO) transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) system over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. Numerical results are given to illustrate the sensitivity of the main performance measures to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.
Linear, Constant-rounds Bit-decomposition
DEFF Research Database (Denmark)
Reistad, Tord; Toft, Tomas
2010-01-01
When performing secure multiparty computation, tasks may often be simple or difficult depending on the representation chosen. Hence, being able to switch representation efficiently may allow more efficient protocols. We present a new protocol for bit-decomposition: converting a ring element x ∈ ℤ...
The Economics of BitCoin Price Formation
Ciaian, Pavel; Rajcaniova, Miroslava; Kancs, d'Artis
2014-01-01
This paper analyses the relationship between BitCoin price and supply-demand fundamentals of BitCoin, global macro-financial indicators and BitCoin’s attractiveness for investors. Using daily data for the period 2009-2014 and applying time-series analytical mechanisms, we find that BitCoin market fundamentals and BitCoin’s attractiveness for investors have a significant impact on BitCoin price. Our estimates do not support previous findings that the macro-financial developments are driving Bi...
Variable bit rate video traffic modeling by multiplicative multifractal model
Institute of Scientific and Technical Information of China (English)
Huang Xiaodong; Zhou Yuanhua; Zhang Rongfu
2006-01-01
Multiplicative multifractal processes can model video traffic well. The multiplier distributions in the multiplicative multifractal model for video traffic are investigated, and it is found that the Gaussian distribution is not suitable for describing the multipliers on small time scales. A new statistical distribution, the symmetric Pareto distribution, is introduced and applied instead of the Gaussian for the multipliers on those scales. Based on that, the algorithm is updated so that the symmetric Pareto and Gaussian distributions are used to model video traffic on different time scales. The simulation results demonstrate that the updated algorithm models video traffic more accurately.
Fast optical signal processing in high bit rate OTDM systems
DEFF Research Database (Denmark)
Poulsen, Henrik Nørskov; Jepsen, Kim Stokholm; Clausen, Anders;
1998-01-01
As all-optical signal processing is maturing, optical time division multiplexing (OTDM) has also gained interest for simple networking in high capacity backbone networks. As an example of a network scenario we show an OTDM bus interconnecting another OTDM bus, a single high capacity user...
High bit rate BPSK signals in shallow water environments
Robert, M.K.; Walree, P.A. van
2003-01-01
Lately, acoustic data transfer has become an important topic in underwater environments. Several acoustic communication signals e.g. spread spectrum or frequency shift keying signals have been extensively developed. However, in challenging environments, it is still difficult to obtain robust acousti
Error control for reliable digital data transmission and storage systems
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
Directory of Open Access Journals (Sweden)
Fabien Hernandez
Full Text Available To assess the impact of the implementation of Computerized Physician Order Entry (CPOE) associated with pharmaceutical checking of medication orders on medication errors in the 3 stages of drug management (i.e., prescription, dispensing and administration) in an orthopaedic surgery unit. A before-after observational study was conducted in the 66-bed orthopaedic surgery unit of a teaching hospital (700 beds) in Paris, France. Direct disguised observation was used to detect errors in prescription, dispensing and administration of drugs, before and after the introduction of computerized prescriptions. Compliance of dispensing and administration with the medical prescription was studied. The frequencies and types of errors in prescribing, dispensing and administration were investigated. During the pre- and post-CPOE periods (two days each), 111 and 86 patients were observed, respectively, with 1,593 and 1,388 corresponding prescribed drugs. The use of electronic prescribing led to a significant 92% decrease in prescribing errors (479/1,593 prescribed drugs (30.1%) vs 33/1,388 (2.4%), p < 0.0001) and to a significant 17.5% decrease in administration errors (209/1,222 opportunities (17.1%) vs 200/1,413 (14.2%), p < 0.05). No significant difference was found with regard to dispensing errors (430/1,219 opportunities (35.3%) vs 449/1,407 (31.9%), p = 0.07). The use of CPOE and a pharmacist checking medication orders in an orthopaedic surgery unit reduced the incidence of medication errors in the prescribing and administration stages. The study results suggest that CPOE is a convenient system for improving the quality and safety of drug management.
Progressive and Error-Resilient Transmission Strategies for VLC Encoded Signals over Noisy Channels
Directory of Open Access Journals (Sweden)
Guillemot Christine
2006-01-01
Full Text Available This paper addresses the issue of robust and progressive transmission of signals (e.g., images, video) encoded with variable length codes (VLCs) over error-prone channels. This paper first describes bitstream construction methods offering good properties in terms of error resilience and progressivity. In contrast with related algorithms described in the literature, all proposed methods have a linear complexity as the sequence length increases. The applicability of soft-input soft-output (SISO) and turbo decoding principles to the resulting bitstream structures is investigated. In addition to error resilience, the amenability of the bitstream construction methods to progressive decoding is considered. The problem of code design for achieving good performance in terms of error resilience and progressive decoding with these transmission strategies is then addressed. The VLC code has to be such that the symbol energy is mainly concentrated on the first bits of the symbol representation (i.e., on the first transitions of the corresponding codetree). Simulation results reveal high performance in terms of symbol error rate (SER) and mean-square reconstruction error (MSE). These error-resilience and progressivity properties are obtained without any penalty in compression efficiency. Codes with such properties are of strong interest for the binarization of M-ary sources in state-of-the-art image and video coding systems making use of, for example, the EBCOT or CABAC algorithms. A prior statistical analysis of the signal allows the construction of the appropriate binarization code.
HIGH-POWER TURBODRILL AND DRILL BIT FOR DRILLING WITH COILED TUBING
Energy Technology Data Exchange (ETDEWEB)
Robert Radtke; David Glowka; Man Mohan Rai; David Conroy; Tim Beaton; Rocky Seale; Joseph Hanna; Smith Neyrfor; Homer Robertson
2008-03-31
Commercial introduction of Microhole Technology to the gas and oil drilling industry requires an effective downhole drive mechanism which operates efficiently at relatively high RPM and low bit weight, delivering efficient power to a special high-RPM drill bit to ensure both a high penetration rate and long bit life. This project entails developing and testing a more efficient 2-7/8 in. diameter Turbodrill and a novel 4-1/8 in. diameter drill bit for drilling with coiled tubing. The high-power Turbodrill was developed to deliver power efficiently, and the more durable drill bit employed high-temperature cutters that can more effectively drill hard and abrasive rock. This project teams Schlumberger Smith Neyrfor and Smith Bits, and NASA AMES Research Center with Technology International, Inc. (TII), to deliver a downhole, hydraulically-driven power unit, matched with a custom drill bit designed to drill 4-1/8 in. boreholes with a purpose-built coiled tubing rig. The U.S. Department of Energy National Energy Technology Laboratory has funded Technology International, Inc., Houston, Texas to develop a higher-power Turbodrill and drill bit for use in drilling with a coiled tubing unit. This project entails developing and testing an effective downhole drive mechanism and a novel drill bit for drilling 'microholes' with coiled tubing. The new higher-power Turbodrill is shorter, delivers power more efficiently, operates at relatively high revolutions per minute, and requires low weight on bit. The more durable thermally stable diamond drill bit employs high-temperature TSP (thermally stable) diamond cutters that can more effectively drill hard and abrasive rock. Expectations are that widespread adoption of microhole technology could spawn a wave of 'infill development' drilling of wells spaced between existing wells, which could tap potentially billions of barrels of bypassed oil at shallow depths in mature producing areas. At the same time, microhole
System Measures Errors Between Time-Code Signals
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
Digital dual-rate burst-mode receiver for 10G and 1G coexistence in optical access networks.
Mendinueta, José Manuel Delgado; Mitchell, John E; Bayvel, Polina; Thomsen, Benn C
2011-07-18
A digital dual-rate burst-mode receiver, intended to support 10 and 1 Gb/s coexistence in optical access networks, is proposed and experimentally characterized. The receiver employs a standard DC-coupled photoreceiver followed by a 20 GS/s digitizer and the detection of the packet presence and line-rate is implemented in the digital domain. A polyphase, 2 samples-per-bit digital signal processing algorithm is then used for efficient clock and data recovery of the 10/1.25 Gb/s packets. The receiver performance is characterized in terms of sensitivity and dynamic range under burst-mode operation for 10/1.25 Gb/s intensity modulated data in terms of both the packet error rate (PER) and the payload bit error rate (pBER). The impact of packet preamble lengths of 16, 32, 48, and 64 bits, at 10 Gb/s, on the receiver performance is investigated. We show that there is a trade-off between pBER and PER that is limited by electrical noise and digitizer clipping at low and high received powers, respectively, and that a 16/2-bit preamble at 10/1.25 Gb/s is sufficient to reliably detect packets at both line-rates over a burst-to-burst dynamic range of 14.5 dB with a sensitivity of -18.5 dBm at 10 Gb/s. PMID:21934767
Digital dual-rate burst-mode receiver for 10G and 1G coexistence in optical access networks
Delgado Mendinueta, José Manuel; Mitchell, John E.; Bayvel, Polina; Thomsen, Benn C.
2011-07-01
A digital dual-rate burst-mode receiver, intended to support 10 and 1 Gb/s coexistence in optical access networks, is proposed and experimentally characterized. The receiver employs a standard DC-coupled photoreceiver followed by a 20 GS/s digitizer and the detection of the packet presence and line-rate is implemented in the digital domain. A polyphase, 2 samples-per-bit digital signal processing algorithm is then used for efficient clock and data recovery of the 10/1.25 Gb/s packets. The receiver performance is characterized in terms of sensitivity and dynamic range under burst-mode operation for 10/1.25 Gb/s intensity modulated data in terms of both the packet error rate (PER) and the payload bit error rate (pBER). The impact of packet preamble lengths of 16, 32, 48, and 64 bits, at 10 Gb/s, on the receiver performance is investigated. We show that there is a trade-off between pBER and PER that is limited by electrical noise and digitizer clipping at low and high received powers, respectively, and that a 16/2-bit preamble at 10/1.25 Gb/s is sufficient to reliably detect packets at both line-rates over a burst-to-burst dynamic range of 14.5 dB with a sensitivity of -18.5 dBm at 10 Gb/s.
Supersymmetric quantum mechanics for string-bits
International Nuclear Information System (INIS)
The authors develop possible versions of supersymmetric single particle quantum mechanics, with application to superstring-bit models in view. The authors focus principally on space dimensions d = 1, 2, 4, 8, the transverse dimensionalities of superstring in 3, 4, 6, 10 space-time dimensions. These are the cases for which classical superstring makes sense, and also the values of d for which Hooke's force law is compatible with the simplest superparticle dynamics. The basic question they address is: when is it possible to replace such harmonic force laws with more general ones, including forces which vanish at large distances? This is an important question because forces between string-bits that do not fall off with distance will almost certainly destroy cluster decomposition. They show that the answer is affirmative for d = 1, 2, negative for d = 8, and so far inconclusive for d = 4.
Global Networks of Trade and Bits
Riccaboni, Massimo; Schiavo, Stefano
2012-01-01
Considerable efforts have been made in recent years to produce detailed topologies of the Internet. Although Internet topology data have been brought to the attention of a wide and somewhat diverse audience of scholars, so far they have been overlooked by economists. In this paper, we suggest that such data could be effectively treated as a proxy to characterize the size of the "digital economy" at country level and outsourcing: thus, we analyse the topological structure of the network of trade in digital services (trade in bits) and compare it with that of the more traditional flow of manufactured goods across countries. To perform meaningful comparisons across networks with different characteristics, we define a stochastic benchmark for the number of connections among each country-pair, based on hypergeometric distribution. Original data are thus filtered by means of different thresholds, so that we only focus on the strongest links, i.e., statistically significant links. We find that trade in bits displays...
Generating bit reversed numbers for calculating fast fourier transform
Digital Repository Service at National Institute of Oceanography (India)
Suresh, T.
The bit-reversed numbers generated on execution of the program are also given. The program, listed in standard FORTRAN, has been executed on a Norsk Data ND 570 computer to generate bit-reversed numbers for a given number of bits. For 4 bits, the program output is the sequence: 0 8 4 12 2 10 6 14 1 9 5 13 3 11 7 15.
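The bit-reversed sequence above can be regenerated with a short routine (sketched here in C rather than the note's FORTRAN; the function name is illustrative):

```c
/* Reverse the low nbits bits of x, producing the bit-reversed index used
 * to reorder FFT input.  For nbits = 4, mapping 0..15 reproduces the
 * sequence 0 8 4 12 2 10 6 14 1 9 5 13 3 11 7 15. */
unsigned bit_reverse(unsigned x, int nbits) {
    unsigned r = 0;
    for (int i = 0; i < nbits; i++) {
        r = (r << 1) | (x & 1u);   /* shift out the LSB, shift it in as MSB */
        x >>= 1;
    }
    return r;
}
```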
De-anonymizing BitTorrent Users on Tor
Le Blond, Stevens; Manils, Pere; Chaabane, Abdelberi; Kaafar, Mohamed Ali; Legout, Arnaud; Castellucia, Claude; Dabbous, Walid
2010-01-01
Some BitTorrent users are running BitTorrent on top of Tor to preserve their privacy. In this extended abstract, we discuss three different attacks to reveal the IP address of BitTorrent users on top of Tor. In addition, we exploit the multiplexing of streams from different applications into the same circuit to link non-BitTorrent applications to revealed IP addresses.
Verilog Implementation of 32-Bit CISC Processor
Directory of Open Access Journals (Sweden)
P.Kanaka Sirisha
2016-04-01
Full Text Available The project deals with the design of a 32-bit CISC processor and the modeling of its components in the Verilog language. The entire processor uses a 32-bit bus to deal with all the registers and memories. The processor implements various arithmetic, logical and data transfer operations using variable-length instructions, which is the core property of the CISC architecture. The processor also supports various addressing modes to perform a 32-bit instruction. It uses the Harvard architecture (i.e., separate program and data memories) and hence has different buses for the program memory and the data memory. This feature enhances the speed of the processor, and it accordingly has two different program counters to point to locations in the program memory and the data memory. The processor provides instruction queuing, which saves the time needed to fetch an instruction and hence increases the speed of operation. An interrupt service routine is provided so that the processor can handle interrupts.
1/N Perturbations in Superstring Bit Models
Thorn, Charles B
2015-01-01
We develop the 1/N expansion for stable string bit models, focusing on a model with bit creation operators carrying only transverse spinor indices a=1,...,s. At leading order (1/N=0), this model produces a (discretized) lightcone string with a "transverse space" of s Grassmann worldsheet fields. Higher orders in the 1/N expansion are shown to be determined by the overlap of a single large closed chain (discretized string) with two smaller closed chains. In the models studied here, the overlap is not accompanied with operator insertions at the break/join point. Then the requirement that the discretized overlap have a smooth continuum limit leads to the critical Grassmann "dimension" of s=24. This "protostring", a Grassmann analog of the bosonic string, is unusual, because it has no large transverse dimensions. It is a string moving in one space dimension and there are neither tachyons nor massless particles. The protostring, derived from our pure spinor string bit model, has 24 Grassmann dimensions, 16 of wh...
NSC 800, 8-bit CMOS microprocessor
Suszko, S. F.
1984-01-01
The NSC 800 is an 8-bit CMOS microprocessor manufactured by National Semiconductor Corp., Santa Clara, California. The 8-bit microprocessor chip with 40-pad pin-terminals has eight address buffers (A8-A15), eight data/address I/O buffers (AD0-AD7), six interrupt controls and sixteen timing controls with a chip clock generator and an 8-bit dynamic RAM refresh circuit. The 22 internal registers have the capability of addressing 64K bytes of memory and 256 I/O devices. The chip is fabricated on N-type (100) silicon using self-aligned polysilicon gates and local oxidation process technology. The chip interconnect consists of four levels: aluminum, Polysi 2, Polysi 1, and P(+) and N(+) diffusions. The four levels, except for contact interfaces, are isolated by interlevel oxide. The chip is packaged in a 40-pin dual-in-line (DIP), side-brazed, hermetically sealed ceramic package with a metal lid. The operating voltage for the device is 5 V. It is available in three operating temperature ranges: 0 to +70 C, -40 to +85 C, and -55 to +125 C. Two devices were submitted for product evaluation by F. Stott, MTS, JPL Microprocessor Specialist. The devices were pencil-marked and photographed for identification.
The BitTorrent Anonymity Marketplace
Nielson, Seth James
2011-01-01
The very nature of operations in peer-to-peer systems such as BitTorrent exposes information about participants to their peers. Nodes desiring anonymity, therefore, often choose to route their peer-to-peer traffic through anonymity relays, such as Tor. Unfortunately, these relays have little incentive for contribution and struggle to scale with the high loads that P2P traffic foists upon them. We propose a novel modification for BitTorrent that we call the BitTorrent Anonymity Marketplace. Peers in our system trade in k swarms, obscuring the actual intent of the participants. But because peers can cross-trade torrents, the k-1 cover-traffic swarms can actually serve a useful purpose. This creates a system wherein a neighbor cannot determine whether a node actually wants a given torrent, or is only using it as leverage to get the one it really wants. In this paper, we present our design, explore its operation in simulation, and analyze its effectiveness. We demonstrate that the upload and download characteristics of c...
High Reproduction Rate versus Sexual Fidelity
Sousa, A.O.; de Oliveira, S. Moss
2000-01-01
We introduce fidelity into the bit-string Penna model for biological ageing and study the advantage of this fidelity when it produces a higher survival probability of the offspring due to paternal care. We attribute a lower reproduction rate to the faithful males but a higher death probability to the offspring of non-faithful males, which abandon the pups to mate with other females. Fidelity is considered a genetic trait which is transmitted to the male offspring (with or without error). We s...
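The bit-string Penna model underlying this record can be sketched in a few lines. The following is a minimal, hedged illustration of the standard asexual variant (all parameter values, the single-mutation-per-birth rule, and the crude carrying-capacity cap standing in for the Verhulst factor are our assumptions, not the paper's): each individual's genome is a bit string, and a set bit at position i means a deleterious mutation that becomes active at age i; an individual dies once the number of active mutations reaches a threshold.

```python
import random

GENOME_BITS = 32     # genome length (assumed)
THRESHOLD = 3        # lethal number of active mutations (assumed)
MIN_REPRO_AGE = 8    # minimum reproduction age (assumed)
MUT_RATE = 1         # new deleterious mutations per birth (assumed)
MAX_POP = 2000       # crude carrying capacity, stand-in for the Verhulst factor

def survives(genome, age):
    """True if the mutations active up to this age stay below THRESHOLD."""
    active = bin(genome & ((1 << age) - 1)).count("1")
    return active < THRESHOLD

def offspring(genome, rng):
    """Copy the parent genome and switch MUT_RATE random bits on."""
    child = genome
    for _ in range(MUT_RATE):
        child |= 1 << rng.randrange(GENOME_BITS)
    return child

def simulate(pop_size=200, steps=50, seed=1):
    rng = random.Random(seed)
    pop = [(0, 0)] * pop_size          # (genome, age) pairs, mutation-free founders
    for _ in range(steps):
        new_pop = []
        for genome, age in pop:
            age += 1
            if age >= GENOME_BITS or not survives(genome, age):
                continue               # death by old age or by mutation load
            new_pop.append((genome, age))
            if age >= MIN_REPRO_AGE and len(new_pop) < MAX_POP:
                new_pop.append((offspring(genome, rng), 0))
        pop = new_pop
    return pop

final = simulate()
```

Extending this toy version toward the paper's setup would mean adding a fidelity bit to the male genome and coupling offspring survival to it.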
Development of experimental apparatus about reverse circulation bit
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
A set of experimental apparatus for reverse circulation bits has been developed in order to study the mechanism of the new type of reverse circulation bit and how the bit structure influences its core-taking and powder-carrying ability. Both the main structure of the equipment and the experimental procedure are described.
Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.
2010-01-01
This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
Optimized H.264/AVC-Based Bit Stream Switching for Mobile Video Streaming
Directory of Open Access Journals (Sweden)
Liebl Günther
2006-01-01
Full Text Available We show the suitability of H.264/MPEG-4 AVC extended profile for wireless video streaming applications. In particular, we exploit the advanced bit stream switching capabilities using SP/SI pictures defined in the H.264/MPEG-4 AVC standard. For both types of switching pictures, optimized encoders are developed. We introduce a framework for dynamic switching and frame scheduling. For this purpose we define an appropriate abstract representation for media encoded for video streaming, as well as for the characteristics of wireless variable bit rate channels. The achievable performance gains over H.264/MPEG-4 AVC with constant bit rate (CBR encoding are shown for wireless video streaming over enhanced GPRS (EGPRS.
Choi, Woo Young; Han, Jae Hwan; Cha, Tae Min
2016-05-01
Multi-bit nano-electromechanical (NEM) nonvolatile memory cells such as T cells were proposed for higher memory density. However, they suffered from bit-to-bit interference (BI). In order to suppress BI without sacrificing cell size, this paper proposes zigzag T cell structures. The BI suppression of the proposed zigzag T cell is verified by finite-element modeling (FEM). Based on the FEM results, the design of zigzag T cells is optimized. PMID:27483893
Visible light communication using mobile-phone camera with data rate higher than frame rate.
Chow, Chi-Wai; Chen, Chung-Yen; Chen, Shih-Hao
2015-10-01
Complementary metal-oxide-semiconductor (CMOS) image sensors are widely used in mobile phones and cameras. Hence, it is attractive if these image sensors can be used as visible light communication (VLC) receivers (Rxs). However, using these CMOS image sensors is challenging. In this work, we propose and demonstrate a VLC link using a mobile-phone camera with a data rate higher than the frame rate of the CMOS image sensor. We first discuss and analyze the features of using a CMOS image sensor as a VLC Rx, including the rolling shutter effect, the overlapping of the exposure times of each row of pixels, the frame-to-frame processing time gap, and the image sensor "blooming" effect. Then, we describe the procedure of synchronization and demodulation. This includes file format conversion, grayscale conversion, column matrix selection avoiding blooming, and polynomial fitting for threshold location. Finally, the evaluation of the bit-error rate (BER) is performed, satisfying the forward error correction (FEC) limit. PMID:26480122
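The rolling-shutter demodulation described above can be sketched synthetically. In a rolling-shutter camera each image row integrates the LED state at a slightly different time, so an OOK-modulated LED appears as bright/dark stripes down the frame; the bits are recovered by comparing each row's brightness to a local threshold. The sketch below uses a centered moving-average threshold as a simplification of the paper's polynomial fit, and the synthetic brightness profile, stripe widths, and background drift are all our assumptions:

```python
def make_rows(bits, rows_per_bit=4, background=None):
    """Synthetic per-row brightness: stripes plus a slowly varying background."""
    rows = []
    for i, b in enumerate(bits):
        for r in range(rows_per_bit):
            idx = i * rows_per_bit + r
            bg = background(idx) if background else 0.0
            rows.append(bg + (40.0 if b else 10.0))
    return rows

def demodulate(rows, rows_per_bit=4, window=9):
    """Threshold each row against a centered moving average, then vote per bit."""
    n = len(rows)
    hard = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        thr = sum(rows[lo:hi]) / (hi - lo)   # local threshold (moving average)
        hard.append(1 if rows[i] > thr else 0)
    bits = []
    for i in range(0, n, rows_per_bit):      # majority vote over each stripe
        chunk = hard[i:i + rows_per_bit]
        bits.append(1 if sum(chunk) * 2 >= len(chunk) else 0)
    return bits

tx = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rx = demodulate(make_rows(tx, background=lambda i: 0.5 * i))
```

The local threshold is what makes the scheme robust to the slow background drift, which is the same role the polynomial fit plays in the paper.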
A novel bit-quad-based Euler number computing algorithm.
Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao
2015-01-01
The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms.
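For reference, the classical bit-quad approach the paper improves on counts 2x2 patterns over the image (Gray's formula). The sketch below implements that baseline, not the paper's optimized two-pattern variant:

```python
def euler_number(img, connectivity=4):
    """Euler number of a binary image via bit-quad counting (Gray's formula).

    Counts 2x2 windows over the zero-padded image:
      Q1 - quads with exactly one foreground pixel,
      Q3 - quads with exactly three,
      QD - diagonal quads [[1,0],[0,1]] or [[0,1],[1,0]].
    E = (Q1 - Q3 + 2*QD) / 4 for 4-connectivity,
    E = (Q1 - Q3 - 2*QD) / 4 for 8-connectivity.
    """
    h, w = len(img), len(img[0])
    # Zero-pad by one pixel on every side so border quads are counted.
    p = [[0] * (w + 2)] + [[0] + list(row) + [0] for row in img] + [[0] * (w + 2)]
    q1 = q3 = qd = 0
    for r in range(h + 1):
        for c in range(w + 1):
            a, b = p[r][c], p[r][c + 1]
            d, e = p[r + 1][c], p[r + 1][c + 1]
            s = a + b + d + e
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and a == e:   # the two set pixels lie on a diagonal
                qd += 1
            # s == 2 with the pair on an edge contributes nothing
    sign = 2 if connectivity == 4 else -2
    return (q1 - q3 + sign * qd) // 4
```

A 2x2 image with two diagonal pixels illustrates the connectivity dependence: it has Euler number 2 under 4-connectivity (two components) but 1 under 8-connectivity (one component).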
Preventing twisting by regulating the bit load by hydromonitor effect
Energy Technology Data Exchange (ETDEWEB)
Nazirov, S.A.; Agishev, A.S.
1984-01-01
The problem of reducing the twisting rate of a well during turbine drilling under complex geological conditions, without decreasing the mechanical drilling rate, is examined. An attempt is made to prevent bending of the bottom of the drilling tool when caverns form during drilling, when the elements of the rigid KNBK act as large-sized couplings. By using the hydromonitor effect, the rigid guide strings are forced in the necessary direction without allowing bending, by deepening them into the jet-drilled well shaft. Since on many fields well twisting occurs mainly in drilling intervals formed by clay rocks, use of the hydromonitor effect on the bit is one of the optimal measures for preventing twisting.
Thermodynamics of Error Correction
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Optimality of Rate Balancing in Wireless Sensor Networks
Tarighati, Alla; Jalden, Joakim
2016-07-01
We consider the problem of distributed binary hypothesis testing in a parallel network topology where sensors independently observe some phenomenon and send a finite-rate summary of their observations to a fusion center for the final decision. We explicitly consider a scenario in which (integer) rate messages are sent over an error-free multiple access channel, modeled by a sum rate constraint at the fusion center. This problem was previously studied by Chamberland and Veeravalli, who provided sufficient conditions for the optimality of one-bit sensor messages. Their result is, however, crucially dependent on the feasibility of having as many one-bit sensors as the (integer) sum rate constraint of the multiple access channel, an assumption that often cannot be satisfied in practice. This prompts us to consider the case of an a priori limited number of sensors, and we provide sufficient conditions under which having no two sensors differ in rate by more than one bit, so-called rate balancing, is an optimal strategy with respect to the Bhattacharyya distance between the hypotheses at the input to the fusion center. We further discuss explicit observation models under which these sufficient conditions are satisfied.
Energy Technology Data Exchange (ETDEWEB)
Modeste Nguimdo, Romain, E-mail: Romain.Nguimdo@vub.ac.be [Applied Physics Research Group, APHY, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussel (Belgium); Tchitnga, Robert [Laboratory of Electronics, Automation and Signal Processing, Department of Physics, University of Dschang, P.O. Box 67, Dschang (Cameroon); Woafo, Paul [Laboratory of Modelling and Simulation in Engineering and Biological Physics, Faculty of Science, University of Yaoundé I, P.O. Box 812, Yaoundé (Cameroon)
2013-12-15
We numerically investigate the possibility of using a coupling to increase the complexity in the simplest chaotic two-component electronic circuits operating at high frequency. We subsequently show that the complex behaviors generated in such coupled systems, together with post-processing, are suitable for generating bit-streams which pass all the NIST tests for randomness. The electronic circuit is built up by unidirectionally coupling three two-component (one active and one passive) oscillators in a ring configuration through resistances. It turns out that, with such a coupling, highly chaotic signals can be obtained. By extracting points at a fixed interval of 10 ns (corresponding to a bit rate of 100 Mb/s) on such chaotic signals, each point being simultaneously converted to 16 bits (or 8 bits), we find that the binary sequence constructed from the 10 (or 2) least significant bits passes statistical tests of randomness, meaning that bit-streams with random properties can be achieved with an overall bit rate of up to 10 x 100 Mb/s = 1 Gbit/s (or 2 x 100 Mb/s = 200 Mbit/s). Moreover, by varying the bias voltages, we also investigate the parameter range for which more complex signals can be obtained. Besides being simple to implement, the two-component electronic circuit setup is very cheap compared to optical and electro-optical systems.
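The post-processing step described above (quantize each chaotic sample, keep only the least significant bits) can be sketched as follows. The logistic map is used here purely as a stand-in for the coupled oscillator ring, and the 16-bit quantization with 10 retained LSBs follows the abstract; everything else is an assumption:

```python
def chaotic_samples(n, x0=0.123456, r=3.99):
    """Logistic-map surrogate for the chaotic circuit voltage (assumption:
    the paper's source is the coupled two-component oscillator ring)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def extract_bits(samples, adc_bits=16, keep_lsbs=10):
    """Quantize each sample to adc_bits and keep only the keep_lsbs least
    significant bits, mimicking the abstract's post-processing."""
    bits = []
    full = (1 << adc_bits) - 1
    mask = (1 << keep_lsbs) - 1
    for x in samples:
        code = int(x * full) & mask
        for i in reversed(range(keep_lsbs)):
            bits.append((code >> i) & 1)
    return bits

stream = extract_bits(chaotic_samples(2000))
# 10 bits per sample at 100 MS/s gives the abstract's 1 Gb/s aggregate rate.
ones = sum(stream) / len(stream)
```

Discarding the high-order bits removes the strong short-term correlations of the waveform; the retained LSBs are the ones with a chance of passing the NIST suite.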
Designing an efficient LT-code with unequal error protection for image transmission
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by onboard imaging devices and must be transmitted to the Earth using a communication system. Even though a high-resolution image can produce a better quality of service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In a remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced, and it was shown to be very efficient when the binary erasure channel model was used. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
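The LT-code mentioned above is a fountain code over the erasure channel: each encoded packet is the XOR of a randomly chosen set of source blocks, and the receiver recovers the source with a peeling decoder that repeatedly resolves degree-1 packets. Here is a minimal hedged sketch; the toy degree distribution is our assumption (practical LT-codes use the robust soliton distribution, and unequal error protection would further bias the block selection):

```python
import random

def lt_encode(blocks, n_packets, rng):
    """Generate n_packets LT-encoded packets, each an (index-set, xor) pair."""
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        # Toy degree distribution favoring low degrees (assumption).
        degree = rng.choice([1, 1, 2, 2, 2, 3, 4])
        idx = frozenset(rng.sample(range(k), min(degree, k)))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve degree-1 packets."""
    pkts = [[set(idx), val] for idx, val in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for p in pkts:
            idx, val = p
            for i in list(idx):          # substitute already-recovered blocks
                if i in known:
                    idx.discard(i)
                    val ^= known[i]
            p[1] = val
            if len(idx) == 1:            # degree-1 packet reveals a block
                known[idx.pop()] = p[1]
                progress = True
    return [known.get(i) for i in range(k)]
```

Because decoding only needs enough packets to survive, lost packets (erasures) are simply absent from the input list, which is what makes the code attractive for the erasure-like satellite channel.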
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. When transmitting data obtained by lossless data compression, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, it is required to compute the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data, (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces, and (3) some efficient improved geometric Goppa codes for disk memory
Error resilient image transmission based on virtual SPIHT
Liu, Rongke; He, Jie; Zhang, Xiaolin
2007-02-01
SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote sensing images. However, it is highly susceptible to channel errors: a single bit error can potentially lead to decoder derailment. In this paper, we integrate new error-resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across subbands is preserved, a high source coding efficiency can be achieved. The scheme is essentially a tree-based coding, so error propagation is limited within each virtual tree. The number of virtual trees may be adjusted according to the channel conditions: when the channel is excellent, we may decrease the number of trees to further improve the compression efficiency; otherwise we increase the number of trees to guarantee error resilience. EREC is also adopted to enhance the error resilience capability of the compressed bit streams. At the receiving side, a self-truncation mechanism based on the self-constraint of the set partition trees is introduced. Decoding of any subtree halts when a violation of the self-constraint relationship occurs in the tree, so the bits affected by error propagation are limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, so some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme achieves considerable benefits in error resilience.
Institute of Scientific and Technical Information of China (English)
李璐
2010-01-01
According to news from the international exhibition giant Deutsche Messe (Hannover), CeBIT, the Hannover consumer electronics, information and communications fair to be held in Germany in 2010 and the world's largest ICT industry event, will attract more Chinese enterprises as exhibitors, looking to the international market for business opportunities on this high-end stage.
Performance of 1D quantum cellular automata in the presence of error
McNally, Douglas M.; Clemens, James P.
2016-09-01
This work expands a previous block-partitioned quantum cellular automata (BQCA) model proposed by Brennen and Williams [Phys. Rev. A. 68, 042311 (2003)] to incorporate physically realistic error models. These include timing errors in the form of over- and under-rotations of quantum states during computational gate sequences, stochastic phase and bit flip errors, as well as undesired two-bit interactions occurring during single-bit gate portions of an update sequence. A compensation method to counteract the undesired pairwise interactions is proposed and investigated. Each of these error models is implemented using Monte Carlo simulations for stochastic errors and modifications to the prescribed gate sequences to account for coherent over-rotations. The impact of these various errors on the function of a QCA gate sequence is evaluated using the fidelity of the final state calculated for four quantum information processing protocols of interest: state transfer, state swap, GHZ state generation, and entangled pair generation.
A single channel, 6-bit 410-MS/s 3bits/stage asynchronous SAR ADC based on resistive DAC
Xue, Han; Qi, Wei; Huazhong, Yang; Hui, Wang
2015-05-01
This paper presents a single channel, low power 6-bit 410-MS/s asynchronous successive approximation register analog-to-digital converter (SAR ADC) for ultrawide bandwidth (UWB) communication, prototyped in a SMIC 65-nm process. Based on the 3 bits/stage structure, resistive DAC, and the modified asynchronous successive approximation register control logic, the proposed ADC attains a peak spurious-free dynamic range (SFDR) of 41.95 dB, and a signal-to-noise and distortion ratio (SNDR) of 28.52 dB for 370 MS/s. At the sampling rate of 410 MS/s, this design still performs well with a 40.71-dB SFDR and 30.02-dB SNDR. A four-input dynamic comparator is designed so as to decrease the power consumption. The measurement results indicate that this SAR ADC consumes 2.03 mW, corresponding to a figure of merit of 189.17 fJ/step at 410 MS/s. Project supported by the National Science Foundation for Young Scientists of China (No. 61306029) and the National High Technology Research and Development Program of China (No. 2013AA014103).
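The quoted figure of merit can be sanity-checked with the standard Walden formula FOM = P / (2^ENOB x f_s), where ENOB = (SNDR - 1.76)/6.02. The sketch below reproduces the calculation from the numbers in the abstract; the small difference from the quoted 189.17 fJ/step plausibly comes from rounding in the reported SNDR and power:

```python
def walden_fom(power_w, sndr_db, fs_hz):
    """Walden figure of merit in joules per conversion step."""
    enob = (sndr_db - 1.76) / 6.02   # effective number of bits from SNDR
    return power_w / (2 ** enob * fs_hz)

# Numbers from the abstract: 2.03 mW, 30.02 dB SNDR, 410 MS/s.
fom = walden_fom(2.03e-3, 30.02, 410e6)   # on the order of 190 fJ/step
```

The same function applied at the 370 MS/s operating point (28.52 dB SNDR) shows how the FOM degrades when SNDR drops faster than power.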
Development and characterisation of FPGA modems using forward error correction for FSOC
Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried
2016-05-01
In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in FPGA, with data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, 7/8-rate low-density parity-check (LDPC) forward error correction is implemented along with custom bit and frame synchronisation and a variable-length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems, using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acousto-optic modulator. The scintillation index, transmitted optical power and scintillation bandwidth can all be varied independently, allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5 km.
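Generating the log-normal fading used in such a test-bed can be sketched directly from the target scintillation index. For a unit-mean log-normal intensity, SI = var(I)/E[I]^2 = exp(sigma^2) - 1, which fixes the parameters of the underlying Gaussian; this standard relation is the only modeling assumption here (the paper's LabVIEW/AOM implementation details are not reproduced):

```python
import math
import random

def lognormal_fades(n, scint_index, rng):
    """Unit-mean log-normal intensity samples with scintillation index
    SI = var(I)/E[I]^2 = exp(sigma^2) - 1."""
    sigma2 = math.log(1.0 + scint_index)
    mu = -sigma2 / 2.0                  # makes E[I] = 1
    s = math.sqrt(sigma2)
    return [math.exp(rng.gauss(mu, s)) for _ in range(n)]

rng = random.Random(42)
fades = lognormal_fades(200000, 0.25, rng)
mean_i = sum(fades) / len(fades)
var_i = sum((x - mean_i) ** 2 for x in fades) / len(fades)
si_est = var_i / mean_i ** 2            # should be close to 0.25
```

Multiplying a transmitted symbol stream by such samples (at the desired scintillation bandwidth) reproduces the fading statistics against which the interleaver lengths are evaluated.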
Where the "it from bit" come from?
Foschini, Luigi
2013-01-01
In his 1989 essay, John Archibald Wheeler tried to answer the eternal question of existence. He did it by searching for links between information, physics, and quanta. The main concept emerging from his essay is that "every physical quantity, every it, derives its ultimate significance from bits, binary yes-or-no indications". This concept has been summarized in the catchphrase "it from bit". In Wheeler's essay, it is possible to read several times the echoes of the philosophy of Niels Bohr. The Danish physicist pointed out how quantum and relativistic physics, forcing us to abandon the anchor of the visual reference of common sense, have imposed a greater attention to language. Bohr did not deny physical reality, but recognized that there is always a need for a language no matter what a person wants to do. To put it as Carlo Sini does, language is the first toolbox that man has at hand to analyze experience. It is not a thought translated into words, because to think is to operate with...
Deep Diving into BitTorrent Locality
Cuevas, Ruben; Yang, Xiaoyuan; Siganos, Georgos; Rodriguez, Pablo
2009-01-01
Localizing BitTorrent traffic within an ISP in order to avoid excessive, and often unnecessary, transit costs has recently received a lot of attention. Most existing work has focused on exploring the design space between bilateral cooperation schemes that require ISPs and P2P applications to talk to each other, and unilateral (client- or ISP-only) solutions that do not require cooperation. The above proposals have been evaluated in a handful of ISPs with encouraging initial results. In this work we delve into the details of locality and attempt to answer yet unanswered questions like "what are the boundaries of win-win outcomes for both ISPs and users from locality?", "what does the tradeoff between ISPs and users look like?", and "are some ISPs more in need of locality biasing than others?". To answer the above questions we have conducted a large-scale measurement study of BitTorrent demand demographics spanning 100K torrents with more than 3.5M clients at 9K ASes. We have also dev...
Compact FPGA-based beamformer using oversampled 1-bit A/D converters
DEFF Research Database (Denmark)
Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt
2005-01-01
A compact medical ultrasound beamformer architecture that uses oversampled 1-bit analog-to-digital (A/D) converters is presented. Sparse sample processing is used, as the echo signal for the image lines is reconstructed in 512 equidistant focal points along the line through its in-phase and quadrature components. That information is sufficient for presenting a B-mode image and creating a color flow map. The high sampling rate provides the necessary delay resolution for the focusing. The low channel data width (1-bit) makes it possible to construct a compact beamformer logic. The signal...
Institute of Scientific and Technical Information of China (English)
孙宇; 李纯莲
2014-01-01
This paper analyzes the spelling efficiency of Chinese Braille, considering that it is correlated with spelling speed and accuracy, two factors that cannot be viewed in isolation. Owing to the characteristics of Chinese Braille, the existence of a spelling error tolerance rate in a Braille scheme is an objective necessity. Based on a certain spelling error tolerance rate and appropriate tone-marking and word-forming rules, the paper presents ideas for improving the existing Braille scheme and a method for calculating the spelling error tolerance rate of the improved scheme.
Institute of Scientific and Technical Information of China (English)
张临宏; 陈海涛; 唐丽蓉
2015-01-01
Objective: To carry out quality control circle (QCC) activities to reduce the outpatient pharmacy dispensing error rate and improve the quality of pharmacy service. Methods: QCC was applied to outpatient pharmacy quality control; the composition and causes of dispensing errors were analyzed and preventive measures formulated. Results: The dispensing error rate dropped by 29.73% after carrying out the QCC activity. Conclusion: QCC activities can not only reduce the outpatient pharmacy dispensing error rate but also improve team cooperation, mutual communication and coordination among staff, and working enthusiasm.
Space telemetry degradation due to Manchester data asymmetry induced carrier tracking phase error
Nguyen, Tien M.
1991-01-01
The deleterious effects that Manchester (or bi-phase) data asymmetry has on the performance of phase-modulated residual-carrier communication systems are analyzed. Expressions for the power spectral density of an asymmetric Manchester data stream, the interference-to-carrier signal power ratio (I/C), and the error probability performance are derived. Since data asymmetry can cause undesired spectral components at the carrier frequency, the I/C ratio is given as a function of both the data asymmetry and the telemetry modulation index. Also presented is the dependence of the asymmetry-induced carrier tracking phase error and the system bit-error rate on various parameters of the models.
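The mechanism behind the carrier-frequency interference can be seen numerically: a symmetric Manchester waveform has zero mean, but pulse-width distortion that stretches one half-symbol gives the waveform a nonzero DC component, which in a residual-carrier system falls on the carrier. The sketch below uses a simple pulse-width-distortion model of asymmetry (stretching the high half-symbol by a fixed fraction); this model and its parameters are our assumptions, whereas the paper derives the spectra analytically:

```python
import random

def manchester_wave(bits, asym=0.0, samples_per_bit=100):
    """Manchester waveform: each bit is a +1/-1 half-symbol followed by its
    negation. 'asym' stretches the high (+1) half by that fraction of a bit
    period, shrinking the low half (a simple asymmetry model, assumed)."""
    long_half = round(samples_per_bit * (0.5 + asym / 2))
    short_half = samples_per_bit - long_half
    wave = []
    for b in bits:
        if b:   # high half first, and it is the stretched one
            wave += [1.0] * long_half + [-1.0] * short_half
        else:   # low half first, stretched high half second
            wave += [-1.0] * short_half + [1.0] * long_half
    return wave

rng = random.Random(0)
bits = [rng.randrange(2) for _ in range(1000)]
dc_sym = sum(manchester_wave(bits, 0.0)) / (1000 * 100)
dc_asym = sum(manchester_wave(bits, 0.10)) / (1000 * 100)
```

With this model the DC offset equals the asymmetry fraction regardless of the data pattern, which is exactly the data-independent spectral line at the carrier that degrades tracking.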
On the Performance of Multihop Heterodyne FSO Systems With Pointing Errors
Zedini, Emna
2015-03-30
This paper reports the end-to-end performance analysis of a multihop free-space optical system with amplify-and-forward (AF) channel-state-information (CSI)-assisted or fixed-gain relays using heterodyne detection over Gamma-Gamma turbulence fading with pointing error impairments. In particular, we derive new closed-form results for the average bit error rate (BER) of a variety of binary modulation schemes and for the ergodic capacity in terms of the Meijer G-function. We then offer new accurate asymptotic results for the average BER and the ergodic capacity at high SNR values in terms of simple elementary functions. For the capacity, novel asymptotic results in the low and high average SNR regimes are also obtained via an alternative moments-based approach. All analytical results are verified via computer-based Monte Carlo simulations.
Institute of Scientific and Technical Information of China (English)
Han-jie MA; Fan ZHOU; Rong-xin JIANG; Yao-wu CHEN
2009-01-01
We propose a novel prioritized intra refresh method for wireless video communication. The proposed method jointly considers the characteristics of the human visual system, the error-sensitivity of the bitstream, and the state of the time-varying wireless channel. An expected perceptual distortion model is used to adjust the intra refresh rate adaptively. This model consists of a perceptual weight map based on an attention model, a bit error probability map based on bitstream size, and the dynamic channel state information (CSI). Experimental results indicate that, compared with other intra refresh methods that consider only the content of the video or the CSI, the proposed method improves the average peak signal-to-noise ratio (PSNR) of the whole frame by about 0.5 dB, and improves the average PSNR of the attention area by about 0.8 dB.
The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks
Afify, Laila H.
2015-08-18
Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach is proposed to extend the analysis to capture these metrics and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework, that is also able to capture fine wireless communication details similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular networks scenarios.
Institute of Scientific and Technical Information of China (English)
刘宇; 路永乐; 曾燎燎; 杜晓鹏; 黎蕾蕾; 潘英俊
2011-01-01
The types of machining error of a vibrating-beam rate sensor and their degree of influence on performance are studied. The paper starts from the dynamics analysis and the mechanism of error generation. The error in the gyro's output signal caused by manufacturing limits is derived theoretically, and numerical calculations are carried out for a 100 mm alloy vibrating beam. The analysis shows that quadrature error is the main error source affecting gyro performance, and the perpendicularity between the drive axis and the sensing axis is the primary factor affecting quadrature error; the error caused by mass-center deviation of the vibrating beam changes with angular acceleration, and can be ignored when the angular acceleration is small; the influence of the parasitic Coriolis error on performance is minimal and can be neglected. Finally, the quadrature error is simulated for a deviation angle of 0.6260° between the drive axis and the sensing axis in the vertical direction, which shows that the difference between the quadrature error and its theoretical equivalent value (the output signal when the gyroscope's rotation velocity is 2π°/s) is 4.88%. The simulation results agree with the theoretical derivation, thus proving the effectiveness of improving the performance of the vibrating-beam rate sensor by improving the manufacturing process and reducing the deviation from perpendicularity between the drive and sensing axes.
A Fast Dynamic 64-bit Comparator with Small Transistor Count
Directory of Open Access Journals (Sweden)
Chua-Chin Wang
2002-01-01
In this paper, we propose a 64-bit fast dynamic CMOS comparator with a small transistor count. The major features of the proposed comparator are the rearrangement and re-ordering of transistors in the evaluation block of a dynamic cell, and the insertion of a weak-n feedback inverter, which helps the pull-down operation to ground. Simulation results given by pre-layout tools (e.g., HSPICE) and post-layout tools (e.g., TimeMill) reveal that the delay is around 2.5 ns while the operating clock rate reaches 100 MHz. A physical chip was fabricated in UMC (United Microelectronics Company) 0.5 μm 2P2M technology to verify the correctness of our design.
Branciard, Cyril; Gisin, Nicolas; Lütkenhaus, Norbert; Scarani, Valerio
2006-01-01
This is a study of the security of the Coherent One-Way (COW) protocol for quantum cryptography, proposed recently as a simple and fast experimental scheme. In the zero-error regime, the eavesdropper Eve can only take advantage of the losses in the transmission. We consider new attacks, based on unambiguous state discrimination, which perform better than the basic beam-splitting attack, but which can be detected by a careful analysis of the detection statistics. These results stress the importance of testing several statistical parameters in order to achieve higher rates of secret bits.
Interpreting Cross-correlations of One-bit Filtered Seismic Noise
Hanasoge, Shravan
2013-01-01
Seismic noise, generated by oceanic microseisms and other sources, illuminates the crust in a manner different from tectonic sources, and therefore provides independent information. The primary measurable is the two-point cross-correlation, evaluated using traces recorded at a pair of seismometers over a finite time interval. However, raw seismic traces contain intermittent large-amplitude perturbations arising from tectonic activity and instrumental errors, which may corrupt the estimated cross-correlations of microseismic fluctuations. In order to diminish the impact of these perturbations, the recorded traces are filtered using the nonlinear one-bit digitizer, which replaces each measurement by its sign. Previous theory shows that for stationary Gaussian-distributed seismic noise fluctuations, one-bit and raw correlation functions are related by a simple invertible transformation. Here we extend this to show that the simple correspondence between these two correlation techniques remains valid for non-st...
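For stationary Gaussian noise, the invertible transformation mentioned above is the classical arcsine (Van Vleck) relation: the one-bit correlation c₁ and the raw normalized correlation ρ satisfy c₁ = (2/π)·arcsin ρ, so ρ = sin(π·c₁/2). A small numerical check of that stationary baseline (illustrative, not from the paper):

```python
import math
import random

def one_bit(x: float) -> int:
    """The one-bit digitizer: keep only the sign of the sample."""
    return 1 if x >= 0 else -1

def correlations(rho: float, n: int = 200_000, seed: int = 7):
    """Sample correlated Gaussian pairs; return raw and one-bit correlations."""
    rng = random.Random(seed)
    raw = clipped = 0.0
    for _ in range(n):
        x = rng.gauss(0, 1)
        z = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        raw += x * z
        clipped += one_bit(x) * one_bit(z)
    return raw / n, clipped / n

rho = 0.6
raw_corr, one_bit_corr = correlations(rho)
# Van Vleck inversion: recover the raw correlation from the one-bit one
recovered = math.sin(math.pi / 2 * one_bit_corr)
```

The recovered value matches the raw correlation, illustrating why one-bit filtering discards amplitude outliers without destroying the correlation information.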
Numerical optimization of writer and media for bit patterned magnetic recording
Kovacs, A; Schabes, M E; Schrefl, T
2016-01-01
In this work we present a micromagnetic study of the performance potential of bit-patterned (BP) magnetic recording media via joint optimization of the design of the media and of the magnetic write heads. Because the design space is large and complex, we developed a novel computational framework suitable for parallel implementation on compute clusters. Our technique combines advanced global optimization algorithms and finite-element micromagnetic solvers. Targeting data bit densities of 4 Tb/in², we optimize designs for centered, staggered, and shingled BP writing. The magnetization dynamics of the switching of the exchange-coupled composite BP islands of the media is treated micromagnetically. Our simulation framework takes into account not only the dynamics of on-track errors but also of the thermally induced adjacent-track erasure. With co-optimized write heads, the results show superior performance of shingled BP magnetic recording where we identify two particular designs achieving wri...
Institute of Scientific and Technical Information of China (English)
Zhu Xiaoshi; Chen Chixiao; Xu Jialiang; Ye Fan; Ren Junyan
2013-01-01
A sampling switch with an embedded digital-to-skew converter (DSC) is presented. The proposed switch eliminates time-interleaved ADCs' skews by adjusting the boosted voltage. A similar bridged capacitors' charge-sharing structure is used to minimize the area. The circuit is fabricated in a 0.18 μm CMOS process and achieves sub-1 ps resolution and a 200 ps timing range at a rate of 100 MS/s. The power consumption is 430 μW at maximum. The measurement results also include a demonstration of a 2-channel 14-bit 100 MS/s time-interleaved ADC (TI-ADC) with the proposed DSC switch. This scheme is widely applicable for the clock skew and aperture error calibration demanded in TI-ADCs and SHA-less ADCs.
Efficient Algorithms for Optimal 4-Bit Reversible Logic System Synthesis
Directory of Open Access Journals (Sweden)
Zhiqiang Li
2013-01-01
Owing to the exponential nature of the memory and run-time complexity, many methods can only synthesize 3-bit reversible circuits and cannot synthesize 4-bit reversible circuits well. We build on the ideas of our hash-table-based 3-bit synthesis algorithms and present efficient algorithms which can construct almost all optimal 4-bit reversible logic circuits with many types of gates at minimum-length cost, based on constructing the shortest coding and a specific topological compression; thus, the lossless compression ratio of the space of n-bit circuits approaches 2×n!. This paper presents the first work to create all 3,120,218,828 optimal 4-bit reversible circuits with up to 8 gates for the CNT (Controlled-NOT gate, NOT gate, and Toffoli gate) library, and it can quickly reach 16 gates by cascading the created circuits.
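The CNT gate library named above acts on n-bit states as bit-level permutations, and a reversible circuit is exactly a bijection on the 2ⁿ states. A minimal simulation sketch (the 3-gate cascade below is a hypothetical example, not one of the paper's enumerated circuits):

```python
# Gates from the CNT library acting on a 4-bit state (bits indexed 0..3).
def not_gate(state: int, t: int) -> int:
    """NOT: flip target bit t."""
    return state ^ (1 << t)

def cnot(state: int, c: int, t: int) -> int:
    """Controlled-NOT: flip target t iff control bit c is set."""
    return state ^ (1 << t) if state & (1 << c) else state

def toffoli(state: int, c1: int, c2: int, t: int) -> int:
    """Toffoli: flip target t iff both control bits are set."""
    if state & (1 << c1) and state & (1 << c2):
        return state ^ (1 << t)
    return state

def run(circuit, state: int) -> int:
    """Apply a list of gates left to right."""
    for gate in circuit:
        state = gate(state)
    return state

# A hypothetical 3-gate cascade; any CNT circuit permutes the 16 states.
circuit = [
    lambda s: toffoli(s, 0, 1, 2),
    lambda s: cnot(s, 2, 3),
    lambda s: not_gate(s, 0),
]
image = [run(circuit, s) for s in range(16)]
```

Since each CNT gate is self-inverse, running the cascade in reverse order undoes it, which is the defining property of reversible logic.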
Temperature-compensated 8-bit column driver for AMLCD
Dingwall, Andrew G. F.; Lin, Mark L.
1995-06-01
An all-digital, 5 V input, 50 MHz bandwidth, 10-bit resolution, 128-column AMLCD column driver IC has been designed and tested. The 10-bit design can enhance display definition over 6-bit and 8-bit column drivers. Precision is realized with on-chip switched-capacitor DACs plus transparently auto-offset-calibrated opamp outputs. Increased resolution permits multiple 10-bit digital gamma remappings in EPROMs over temperature. Driver IC features include an externally programmable number of output columns, bi-directional digital data shifting, user-defined row/column/pixel/frame inversion, power management, timing control for daisy-chained column drivers, and digital bit inversion. The architecture uses fewer reference power supplies.
Hill Cipher and Least Significant Bit for Image Messaging Security
Directory of Open Access Journals (Sweden)
Muhammad Husnul Arif
2016-02-01
Exchange of information through cyberspace has many benefits, for example fast transfer times and the absence of physical distance and space limits, but these activities can also pose a security risk for confidential information. Safeguards are therefore needed to protect data transmitted through the Internet. Cryptography and steganography algorithms are used to encrypt the message to be sent (plaintext) into a randomized message (ciphertext) and to hide it. The cryptographic technique applied here is the Hill Cipher, combined with the Least Significant Bit steganography technique. The merged techniques can maintain the confidentiality of messages, because people who do not know the secret key will find it difficult to recover the message contained in the stego-image, and the image that has been inserted cannot be used as a cover image. Messages were successfully inserted and extracted on all samples in the image formats *.bmp, *.png, and *.jpg at resolutions of 512 × 512 pixels and 256 × 256 pixels. The MSE and PSNR results are not influenced by file format or file size, but are influenced by the image dimensions: the larger the image dimensions, the smaller the MSE, meaning the embedding error gets smaller.
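The Hill Cipher used above is a classical matrix cipher: digraphs of letters are multiplied by a key matrix modulo 26, and decryption multiplies by the modular inverse of the key. A minimal 2×2 sketch with a textbook key (the key and message are illustrative, not the paper's):

```python
# Hedged sketch of a 2x2 Hill cipher; the paper combines this with LSB embedding.
M = 26  # alphabet size

def encrypt(plain: str, key) -> str:
    """Encrypt an even-length A-Z string digraph by digraph: c = K * p mod 26."""
    nums = [ord(c) - 65 for c in plain.upper()]
    out = []
    for i in range(0, len(nums), 2):
        a, b = nums[i], nums[i + 1]
        out.append((key[0][0] * a + key[0][1] * b) % M)
        out.append((key[1][0] * a + key[1][1] * b) % M)
    return "".join(chr(n + 65) for n in out)

def inv_key(key):
    """Invert the 2x2 key modulo 26 (det must be coprime with 26)."""
    det = (key[0][0] * key[1][1] - key[0][1] * key[1][0]) % M
    det_inv = pow(det, -1, M)  # modular inverse (Python 3.8+)
    return [[( key[1][1] * det_inv) % M, (-key[0][1] * det_inv) % M],
            [(-key[1][0] * det_inv) % M, ( key[0][0] * det_inv) % M]]

KEY = [[3, 3], [2, 5]]                 # det = 9, coprime with 26
cipher = encrypt("HELP", KEY)
plain = encrypt(cipher, inv_key(KEY))  # decryption = encryption with inverse key
```

In the paper's pipeline, the resulting ciphertext (rather than the plaintext) would then be embedded into the cover image's least significant bits.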
Constellation labeling optimization for bit-interleaved coded APSK
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
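A simple way to see why labeling matters: at high SNR most symbol errors land on an adjacent constellation point, so the sum of Hamming distances between neighbouring labels is a rough proxy for the bit-error cost that a binary switching algorithm tries to reduce. An illustrative 8-PSK comparison (a toy proxy, not the paper's 32-APSK cost function):

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two labels."""
    return bin(a ^ b).count("1")

def neighbour_cost(labels) -> int:
    """Total Hamming distance between adjacent points around the PSK ring."""
    n = len(labels)
    return sum(hamming(labels[k], labels[(k + 1) % n]) for k in range(n))

natural = [0, 1, 2, 3, 4, 5, 6, 7]   # natural binary labeling
gray    = [0, 1, 3, 2, 6, 7, 5, 4]   # reflected Gray code labeling

# Gray labeling: every neighbouring pair differs in exactly one bit,
# so a nearest-neighbour symbol error costs a single bit error.
```

A binary switching algorithm would repeatedly swap pairs of labels and keep a swap whenever a cost function of this kind decreases.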
Development and testing of a Mudjet-augmented PDC bit.
Energy Technology Data Exchange (ETDEWEB)
Black, Alan (TerraTek, Inc.); Chahine, Georges (DynaFlow, Inc.); Raymond, David Wayne; Matthews, Oliver (Security DBS); Grossman, James W.; Bertagnolli, Ken (US Synthetic); Vail, Michael (US Synthetic)
2006-01-01
This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.
Information Hiding Using Least Significant Bit Steganography and Cryptography
Shailender Gupta; Ankur Goyal; Bharat Bhushan
2012-01-01
Steganalysis is the art of detecting a message's existence and blockading covert communication. Various steganography techniques have been proposed in the literature. Least Significant Bit (LSB) steganography is one such technique, in which the least significant bit of the image is replaced with a data bit. As this method is vulnerable to steganalysis, to make it more secure we encrypt the raw data before embedding it in the image. Though the encryption process increases the time complexi...
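The LSB embedding described above can be sketched in a few lines. This is a toy over a flat list of 8-bit pixel values (an assumed stand-in for image data), not the authors' implementation; real use would operate on decoded image pixels and encrypt the payload first:

```python
def embed(pixels, message: bytes):
    """Replace each pixel's least significant bit with one message bit (MSB first)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # clear LSB, then set it to the data bit
    return stego

def extract(pixels, n_bytes: int) -> bytes:
    """Read back n_bytes worth of LSBs and reassemble the message."""
    bits = [p & 1 for p in pixels[: 8 * n_bytes]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = [(i * 7 + 3) % 256 for i in range(256)]   # synthetic "pixels"
stego = embed(cover, b"hi")
```

Because each pixel changes by at most 1, the embedding distortion (MSE) is small, which is why LSB replacement is imperceptible yet statistically detectable by steganalysis.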
BitTorrent Swarm Analysis through Automation and Enhanced Logging
Răzvan Deaconescu; Marius Sandu-Popa; Adriana Drăghici; Nicolae Tăpuș
2011-01-01
Peer-to-Peer protocols currently form the most heavily used protocol class in the Internet, with BitTorrent, the most popular protocol for content distribution, as its flagship. A high number of studies and investigations have been undertaken to measure, analyse and improve the inner workings of the BitTorrent protocol. Approaches such as tracker message analysis, network probing and packet sniffing have been deployed to understand and enhance BitTorrent's internal behaviour. In this paper we...
CTracker : a Distributed BitTorrent Tracker Based on Chimera
Jimenez, Raúl; Knutsson, Björn
2008-01-01
There are three major open issues in the BitTorrent peer discovery system, which are not solved by any of the currently deployed solutions. These issues seriously threaten BitTorrent's scalability, especially when considering that mainstream content distributors could start using BitTorrent for distributing content to millions of users simultaneously in the near future. In this paper these issues are addressed by proposing a topology-aware distributed tracking system as a replacement for both...
Bit-Optimal Lempel-Ziv compression
Ferragina, Paolo; Venturini, Rossano
2008-01-01
One of the most famous and investigated lossless data-compression schemes is the one introduced by Lempel and Ziv about 40 years ago. This compression scheme is known as "dictionary-based compression" and consists of squeezing an input string by replacing some of its substrings with (shorter) codewords, which are actually pointers to a dictionary of phrases built as the string is processed. Surprisingly enough, although many fundamental results are nowadays known about upper bounds on the speed and effectiveness of this compression process, "we are not aware of any parsing scheme that achieves optimality when the LZ77-dictionary is in use under any constraint on the codewords other than being of equal length" [N. Rajpoot and C. Sahinalp. Handbook of Lossless Data Compression, chapter Dictionary-based data compression. Academic Press, 2002, p. 159]. Here optimality means achieving the minimum number of bits in compressing each individual input string, without any assumption on its ge...
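For contrast with the bit-optimal goal, the classical greedy LZ77 parse takes the longest match at each position; this minimizes the number of phrases but not necessarily the number of output bits once codewords have variable lengths, and that gap is exactly what bit-optimal parsing addresses. A toy greedy parser (illustrative, not the paper's algorithm):

```python
def lz77_greedy_parse(s: str, window: int = 1 << 12):
    """Greedy LZ77 parse: at each position emit the longest (distance, length)
    match found in the window, or a single literal character.
    Overlapping (self-referential) matches are allowed, as in LZ77."""
    i, phrases = 0, []
    while i < len(s):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while i + length < len(s) and s[j + length] == s[i + length]:
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        if best_len >= 2:                      # copy phrase
            phrases.append((best_dist, best_len))
            i += best_len
        else:                                  # literal phrase
            phrases.append(s[i])
            i += 1
    return phrases

phrases = lz77_greedy_parse("abababababx")
```

On this input the greedy parse emits two literals, one long overlapping copy, and a final literal; a bit-optimal parser may instead prefer shorter, cheaper-to-encode copies when codeword lengths vary.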
Second quantization in bit-string physics
Noyes, H. Pierre
1993-01-01
Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one-particle Dirac equation is derived, as segmented trajectories with steps of length h/mc along the forward and backward light cones, executed at velocity ±c. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second-quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic-oscillator structure of a second-quantized theory. How these free-particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences, are sketched.
Single Abrikosov vortices as quantized information bits
Golod, T.; Iovan, A.; Krasnov, V. M.
2015-10-01
Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.
PDC (polycrystalline diamond compact) bit research at Sandia National Laboratories
Energy Technology Data Exchange (ETDEWEB)
Finger, J.T.; Glowka, D.A.
1989-06-01
From the beginning of the geothermal development program, Sandia has performed and supported research into polycrystalline diamond compact (PDC) bits. These bits are attractive because they are intrinsically efficient in their cutting action (shearing, rather than crushing) and they have no moving parts (eliminating the problems of high-temperature lubricants, bearings, and seals). This report is a summary description of the analytical and experimental work done by Sandia and our contractors. It describes analysis and laboratory tests of individual cutters and complete bits, as well as full-scale field tests of prototype and commercial bits. The report includes a bibliography of documents giving more detailed information on these topics. 26 refs.
Zhenhai, Chen; Songren, Huang; Hong, Zhang; Zongguang, Yu; Huicai, Ji
2013-03-01
A low power 10-bit 125-MSPS charge-domain (CD) pipelined analog-to-digital converter (ADC) based on MOS bucket-brigade devices (BBDs) is presented. A PVT-insensitive boosted charge transfer (BCT) that is able to reject the charge error induced by PVT variations is proposed. With the proposed BCT, the common-mode charge control circuit can be eliminated in the CD pipelined ADC and the system complexity is reduced remarkably. The prototype ADC based on the proposed BCT is realized in a 0.18 μm CMOS process, with power consumption of only 27 mW at a 1.8 V supply and an active die area of 1.04 mm². The prototype ADC achieves a spurious-free dynamic range (SFDR) of 67.7 dB, a signal-to-noise-and-distortion ratio (SNDR) of 57.3 dB, and an effective number of bits (ENOB) of 9.0 for a 3.79 MHz input at the full sampling rate. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are +0.5/-0.3 LSB and +0.7/-0.55 LSB, respectively.
Suboptimal quantum-error-correcting procedure based on semidefinite programming
Yamamoto, Naoki; Hara, Shinji; Tsumura, Koji
2006-01-01
In this paper, we consider a simplified error-correcting problem: for a fixed encoding process, to find a cascade-connected quantum channel such that the worst-case fidelity between the input and the output becomes maximal. Using the one-to-one parametrization of quantum channels, a procedure for finding a suboptimal error-correcting channel based on semidefinite programming is proposed. The effectiveness of our method is verified by an example of bit-flip channel decoding.
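The bit-flip channel used as the example above has a familiar classical analogue: the 3-bit repetition code with majority-vote decoding, whose logical error rate is 3p²(1−p) + p³ for flip probability p. A quick numerical check of that classical baseline (illustrative only; the paper's procedure is a quantum semidefinite program, not this code):

```python
import random

def majority_decode(bits) -> int:
    """Majority vote over three received bits."""
    return 1 if sum(bits) >= 2 else 0

def logical_error_rate(p: float, trials: int = 100_000, seed: int = 3) -> float:
    """Monte Carlo estimate: encode logical 0 as (0,0,0), flip each bit
    independently with probability p, decode by majority vote."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        received = [1 if rng.random() < p else 0 for _ in range(3)]
        if majority_decode(received) != 0:
            errors += 1
    return errors / trials

p = 0.1
simulated = logical_error_rate(p)
predicted = 3 * p**2 * (1 - p) + p**3   # two or three flips cause a logical error
```

As in the quantum case, encoding plus a well-chosen recovery map suppresses the error probability from p to order p² for small p.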
Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang
2010-08-16
Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: a high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high-rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs, based on mutually coupled chaotic lasers, are synchronized. Using information-theoretic analysis we demonstrate security against a powerful computational eavesdropper capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.
LOW COMPLEXITY LMMSE TURBO EQUALIZATION FOR COMBINED ERROR CONTROL CODED AND LINEARLY PRECODED OFDM
Institute of Scientific and Technical Information of China (English)
Qu Daiming; Zhu Guangxi
2006-01-01
The turbo equalization approach is studied for an Orthogonal Frequency Division Multiplexing (OFDM) system with combined error control coding and linear precoding. While previous literature employed linear precoders of small size for complexity reasons, this paper proposes to use a linear precoder of size larger than or equal to the maximum length of the equivalent discrete-time channel in order to achieve full frequency diversity and reduce the complexity of the error control coder/decoder. A low-complexity Linear Minimum Mean Square Error (LMMSE) turbo equalizer is also derived for the receiver. Through simulation and performance analysis, it is shown that the performance of the proposed scheme over a frequency-selective fading channel reaches the matched-filter bound; compared with the same coded OFDM without linear precoding, the proposed scheme shows a Signal-to-Noise Ratio (SNR) improvement of at least 6 dB at a bit error rate of 10^-6 over a multipath channel with an exponential power delay profile. The convergence behavior of the proposed scheme with turbo equalization using various types of linear precoders/transformers, various interleaver sizes, and error control coders of various constraint lengths is also investigated.
Error and its meaning in forensic science.
Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M
2014-01-01
The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.
Richard, Thomas; Germay, Christophe; Detournay, Emmanuel
2007-08-01
In this paper a study of the self-excited stick-slip oscillations of a rotary drilling system with a drag bit, using a discrete model that takes into consideration the axial and torsional vibration modes of the system, is described. Coupling between these two vibration modes takes place through a bit-rock interaction law, which accounts for both the frictional contact and the cutting processes. The cutting process introduces a delay in the equations of motion that is responsible for the existence of self-excited vibrations, which can degenerate into stick-slip oscillations and/or bit bouncing under certain conditions. From analysis of this new model it is concluded that the experimentally observed decrease of the reacting torque with the angular velocity is actually an expression of the system response, rather than an intrinsic rate dependence of the interface laws between the rock and the drill bit, as is commonly assumed.
A simple tool to assess the cost-effectiveness of new bit technology
Energy Technology Data Exchange (ETDEWEB)
Bomber, T.M.; Pierce, K.G. [Sandia National Labs., Albuquerque, NM (United States); Livesay, B.J. [Livesay Consultants, Encinitas, CA (United States)
1998-07-01
Cost or performance targets for new bit technologies can be established with the aid of a drilling cost model. In this paper the authors make simplifying assumptions in a detailed drilling cost model that reduce the comparison of two technologies to a linear function of relative cost and performance parameters. This simple model, or analysis tool, is not intended to provide absolute well cost but to compare the relative costs of different methods or technologies for accomplishing the same drilling task. Comparing the simplified model to the detailed well cost model shows that the simple linear cost model provides a very efficient tool for screening certain new drilling methods, techniques, and technologies based on economic value. This tool can be used to divide the space defined by the parameters bit cost, bit life, rate of penetration, and operational cost into two areas with a linear boundary. Any set of operating points in one area results in an economic advantage in drilling the well with the new technology, while any set of operating points in the other area indicates that an economic advantage is either questionable or nonexistent. In addition, examining the model results can provide insights into the economics associated with bit performance, life, and cost. This paper includes the development of the model, examples of employing the model to develop "should-cost" or "should-perform" goals for new bit technologies, a discussion of the economic insights in terms of bit cost and performance, and an illustration of the consequences when the basic assumptions are violated.
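The kind of comparison such a tool formalizes can be sketched with the standard drilling cost-per-foot formula C = (C_bit + C_rig·(t_trip + t_drill)) / footage, with t_drill = footage / ROP. All numbers below are hypothetical, not the paper's:

```python
def cost_per_foot(bit_cost, rig_rate, trip_hours, footage, rop):
    """Standard drilling cost-per-foot: bit_cost in $, rig_rate in $/h,
    footage in ft drilled over the bit's life, rop (rate of penetration) in ft/h."""
    drilling_hours = footage / rop
    return (bit_cost + rig_rate * (trip_hours + drilling_hours)) / footage

# Incumbent bit vs a hypothetical new bit that costs more up front but
# drills faster (higher ROP) and lasts longer (more footage per bit):
old = cost_per_foot(bit_cost=8_000,  rig_rate=900, trip_hours=8, footage=1_200, rop=15)
new = cost_per_foot(bit_cost=20_000, rig_rate=900, trip_hours=8, footage=2_400, rop=25)
```

Sweeping bit cost against ROP and footage while holding the rig rate fixed traces exactly the kind of linear break-even boundary the abstract describes: on one side the new technology wins, on the other it does not.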
BitPredator: A Discovery Algorithm for BitTorrent Initial Seeders and Peers
Energy Technology Data Exchange (ETDEWEB)
Borges, Raymond [West Virginia University; Patton, Robert M [ORNL; Kettani, Houssain [Polytechnic University of Puerto Rico (PUPR); Masalmah, Yahya [Universidad del Turabo
2011-01-01
There is a large amount of illegal content being replicated through peer-to-peer (P2P) networks where BitTorrent is dominant; therefore, a framework to profile and police it is needed. The goal of this work is to explore the behavior of initial seeds and highly active peers to develop techniques to correctly identify them. We intend to establish a new methodology and software framework for profiling BitTorrent peers. This involves three steps: crawling torrent indexers for keywords in recently added torrents using Really Simple Syndication protocol (RSS), querying torrent trackers for peer list data and verifying Internet Protocol (IP) addresses from peer lists. We verify IPs using active monitoring methods. Peer behavior is evaluated and modeled using bitfield message responses. We also design a tool to profile worldwide file distribution by mapping IP-to-geolocation and linking to WHOIS server information in Google Earth.
Directory of Open Access Journals (Sweden)
Tao Lyu
2014-11-01
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. An analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
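The coarse/fine split described above is what buys the speedup: a 12-bit plain single-slope conversion needs up to 4096 ramp steps, while a 6+6 two-step conversion needs at most 64 + 64. An idealized digital sketch (illustrative; the real circuit compares analog ramp voltages against the sampled pixel voltage, and the bit split is an assumption):

```python
# Idealized two-step single-slope conversion: a coarse ramp with large steps
# locates the segment, then a fine ramp with 1-LSB steps resolves within it.
N_BITS, COARSE_BITS = 12, 6
FINE_STEPS = 1 << (N_BITS - COARSE_BITS)   # 64 fine steps per coarse segment

def two_step_ss(code: int):
    """Return the (coarse, fine) counts for an ideal 12-bit sample value."""
    coarse = code // FINE_STEPS            # coarse phase: at most 64 comparisons
    fine = code % FINE_STEPS               # fine phase: at most 64 comparisons
    return coarse, fine

results = {code: two_step_ss(code) for code in (0, 1, 2311, 4095)}
```

Reassembling coarse·64 + fine recovers the full 12-bit code, while the worst-case conversion takes 128 ramp steps instead of 4096; offsets between the reference voltages shift the coarse segment boundaries, which is what the paper's calibration scheme corrects.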
Reduction of Error Rate in Reserve Calculation of Gravel Quarry
Institute of Scientific and Technical Information of China (English)
向能武; 肖东佑; 司马世华
2014-01-01
The accuracy of the aggregate quarry reserve survey for the Pakistan Karot Hydropower Station Project directly affects investment decisions for the construction project. The project department established a QC team to improve the original survey techniques, reduce calculation error, and provide accurate data for the project design. The team applies standardized management to its results. The 'CATIA V5-based Three-Dimensional Geological Quarry Modeling Technique' was revised and approved by experts, thereby ensuring the survey and calculation accuracy of quarries in similar projects.
Medication errors: prescribing faults and prescription errors
Velo, Giampaolo P; Minuz, Pietro
2009-01-01
Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. Inadequate knowledge or competence and ...
Directory of Open Access Journals (Sweden)
K. K. L. B. Adikaram
2014-01-01
Full Text Available With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Because the bit reversal process requires considerable processing time of the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and the second halves of the bit reversal permutation (BRP) and stated that it may seriously impact the cache performance of the computer if implemented. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or a single one-dimensional memory structure. We also found a new index mapping, even after the recursive splitting of the BRP into equal-sized slots. The four-array and four-vector versions of the BRA with the new index mapping showed 34% and 16% performance improvements, respectively, relative to the corresponding versions of Elster's linear BRA, which use a single one-dimensional memory structure.
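The index mapping discussed above can be sketched as a plain bit-reversal permutation; this generic Python version (the paper's multi-array and multi-vector variants are not reproduced here, and the function name is illustrative) also exhibits the first-half/second-half relation noted by Elster, where the second half of the permutation equals the first half plus one:

```python
def bit_reverse_permutation(n_bits):
    # Map each index i in [0, 2**n_bits) to the integer whose n_bits-wide
    # binary representation is the reverse of i's, as used to reorder
    # FFT inputs or outputs.
    n = 1 << n_bits
    perm = [0] * n
    for i in range(n):
        rev, x = 0, i
        for _ in range(n_bits):
            rev = (rev << 1) | (x & 1)  # shift in the lowest bit of x
            x >>= 1
        perm[i] = rev
    return perm

# 8-point transform (3 bits): index 3 = 011 maps to 110 = 6, and so on.
print(bit_reverse_permutation(3))  # → [0, 4, 2, 6, 1, 5, 3, 7]
```

Note how perm[i + n/2] = perm[i] + 1 holds over the first half of the indices; this is the structural relation that the multiple-memory-structure variants exploit.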
Energy Technology Data Exchange (ETDEWEB)
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14
Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
Institute of Scientific and Technical Information of China (English)
徐光甫; 李洪霞
2004-01-01
A mathematical model of a repairable system with constant human error rates and common-cause failure rates, and with arbitrarily distributed repair time, is studied. The system is first transformed into a Volterra integral equation in a Banach space, and existence and uniqueness results for the non-negative solution of the system are obtained.
Support research for development of improved geothermal drill bits
Energy Technology Data Exchange (ETDEWEB)
Hendrickson, R.R.; Barker, L.M.; Green, S.J.; Winzenried, R.W.
1977-06-01
Progress in background research needed to develop drill bits for the geothermal environment is reported. Construction of a full-scale geothermal wellbore simulator and a geothermal seal testing machine was completed. Simulated tests were conducted on full-scale bits. Screening tests on elastomeric seals under geothermal conditions are reported. (JGB)
Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes
Wadayama, Tadashi; Nakamura, Keisuke; Yagita, Masayuki; Funahashi, Yuuki; Usami, Shogo; Takumi, Ichi
2007-01-01
A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. Based on the gradient descent formulation, the proposed algorithms are naturally derived from a simple non-linear objective function.
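A minimal single-bit GDBF sketch, assuming a bipolar (+1/-1) alphabet: the candidate word x is scored by the objective f(x) = Σ_k x_k y_k + Σ_m Π_{j∈N(m)} x_j, and the bit with the smallest local inverse-function value is flipped. The toy (7,4) Hamming parity-check matrix below is an illustrative stand-in for an LDPC code, and the function name is hypothetical:

```python
from math import prod

def gdbf_decode(H, y, max_iters=50):
    # Single-bit gradient descent bit flipping over a bipolar (+1/-1) alphabet.
    # H: parity-check matrix as a list of 0/1 rows; y: hard-decision channel output.
    n = len(y)
    x = list(y)                      # start from the received hard decision
    supports = [[j for j, h in enumerate(row) if h] for row in H]
    for _ in range(max_iters):
        syn = [prod(x[j] for j in s) for s in supports]  # +1 = check satisfied
        if all(s == 1 for s in syn):
            break                    # valid codeword found
        # local inverse function: smaller value = better candidate to flip
        delta = [x[k] * y[k] + sum(syn[m] for m, s in enumerate(supports) if k in s)
                 for k in range(n)]
        k = delta.index(min(delta))
        x[k] = -x[k]
    return x

# (7,4) Hamming code check matrix; the all-zero codeword is all +1 in bipolar form.
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
y = [-1, 1, 1, 1, 1, 1, 1]           # single hard error in position 0
print(gdbf_decode(H, y))             # → [1, 1, 1, 1, 1, 1, 1]
```

Flipping the bit that most decreases the objective is what makes this a (coordinate-wise) gradient descent on f.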
Intelligent BitTorrent client
Torné Carné, Bernat
2015-01-01
The most basic objective of this project is the development of a complete and fully functional BitTorrent client. Alongside this primary objective, we want to develop a set of features and improvements that differentiate it from existing BitTorrent clients.
Cross Institutional Cooperation on a Shared Bit Repository
DEFF Research Database (Denmark)
Zierau, Eld; Kejser, Ulla Bøgvad
2013-01-01
for systematically analysing institutions' technical and organisational requirements for a remote bit repository. Instead of viewing a bit repository simply as Archival Storage for the institutions' repositories, we argue for viewing it as consisting of a subset of functions from all entities defined by the OAIS...
Cross Institutional Cooperation on a Shared Bit Repository
DEFF Research Database (Denmark)
Zierau, Eld; Kejser, Ulla Bøgvad
2010-01-01
for systematically analysing the technical and organizational requirements of institutions for a remote bit repository. Instead of viewing a bit repository simply as Archival Storage for the institutions’ repositories, we argue for viewing it as consisting of a subset of functions from all entities defined...
APL portability on 16-bit microprocessors
International Nuclear Information System (INIS)
The present work deals with an automatic program translation method as a solution to the software portability problem. The source machine is a minicomputer of the SEMS MITRA range; the target machines are three 16-bit microprocessors: INTEL 8086, MOTOROLA 68000 and ZILOG Z-8000. The software to be translated is written in macro-assembly language (MAS) and consists of an operating system, an APL interpreter and some other software tools. The translation method uses a machine-independent intermediate language describing the program in the source language. This intermediate language, consisting of a set of macro-instructions, is then assembled using a link library; this library defines the macro-instructions which create the target microprocessor object code. The whole translation operation is carried out by the source machine, which produces, after linkage editing, a table memory map (IME). Thereafter the load object code is transferred to the target machine. Concerning optimization problems or inputs-outputs, some modules can be written in the target machine's assembly language and processed by a specific assembler on the target machine, or on the source machine if the latter possesses a cross-assembler; the resulting binary codes are then merged with the binary codes issued during the automatic translation phase. The method proposed here may be extended to any 16-bit computer by a simple change of the macro-instruction library. This work allows the creation of an APL machine with microprocessors, preserving the original software and so maintaining its initial reliability. It has also led to a closer examination of hardware problems connected with the various target machine configurations. Difficulties met during this work mainly arise from the different behaviour of the target machines, especially in indicator or flag setting, addressing modes and interruption mechanisms. This shows up the necessity to design new microprocessors either partially user-microprogrammable, or with some functions
SPAA AWARE ERROR TOLERANT 32 BIT ARITHMETIC AND LOGICAL UNIT FOR GRAPHICS PROCESSOR UNIT
Kaushal Kumar Sahu*, Nitin Jain
2016-01-01
reliability of a processor. In other words, the ALU is the brain of a processor. Nowadays every portable device is battery operated, so the primary concern for those devices is low power consumption. At the same time we also want higher performance, so that there is no lag while using those devices. Graphically intensive applications demand more resources and at the same time demand more power. Optimization between speed of operation and power consumption is the key challenge...
A Memristor as Multi-Bit Memory: Feasibility Analysis
Directory of Open Access Journals (Sweden)
O. Bass
2015-06-01
Full Text Available The use of emerging memristor materials for advanced electrical devices such as multi-valued logic is expected to outperform today's binary logic digital technologies. We show here an example of such a non-binary device with the design of a multi-bit memory. While conventional memory cells can store only 1 bit, memristor-based multi-bit cells can store more information within a single device, thus increasing the information storage density. Such devices can potentially utilize the non-linear resistance of memristor materials for efficient information storage. We analyze the performance of such memory devices based on their expected variations in order to determine the viability of memristor-based multi-bit memory. A design of a read/write scheme and a simple model for this cell lay the groundwork for full integration of a memristor multi-bit memory cell.
IMAGE STEGANOGRAPHY WITH THE LEAST SIGNIFICANT BIT (LSB) METHOD
Directory of Open Access Journals (Sweden)
M. Miftakul Amin
2014-02-01
Full Text Available Security in delivering a secret message is an important factor in the spread of information in cyberspace. To protect a message so that it is delivered only to the entitled party, a message concealment mechanism is needed. The purpose of this study was to hide a secret text message in digital images in true color 24-bit RGB format. The method used to insert the secret message is LSB (Least Significant Bit) substitution, replacing the last (8th) bit of each RGB color component. The RGB image type was chosen because its insertion capacity is greater than that of a grayscale image: three message bits can be inserted into each pixel. Tests show that hiding messages in a digital image does not significantly reduce the quality of the image, and the hidden message can be extracted again, so that messages can be delivered to the recipient safely.
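The insertion step described above can be sketched directly on raw 8-bit RGB component values; the flat-list representation and function names are illustrative assumptions, not the paper's implementation:

```python
def embed_lsb(pixels, message):
    # pixels: flat list of 8-bit RGB component values (R, G, B, R, G, B, ...).
    # Each message byte is spread over the least significant bits of eight
    # consecutive color components, so capacity is len(pixels) // 8 bytes.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "message too long for this cover image"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the last (8th) bit
    return out

def extract_lsb(pixels, n_bytes):
    # Recover n_bytes message bytes from the LSBs of the stego components.
    bits = [p & 1 for p in pixels[:8 * n_bytes]]
    return bytes(sum(b << (7 - j) for j, b in enumerate(bits[8 * i:8 * i + 8]))
                 for i in range(n_bytes))

cover = list(range(256)) * 2         # stand-in for flattened 24-bit RGB data
stego = embed_lsb(cover, b"hi")
print(extract_lsb(stego, 2))         # → b'hi'
```

Since each component value changes by at most 1 out of 255, the visual degradation is negligible, which matches the test results reported in the abstract.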
Uniqueness skews bit occurrence frequencies in randomly generated fingerprint libraries.
Chen, Nelson G
2016-08-01
Requiring that randomly generated chemical fingerprint libraries have unique fingerprints such that no two fingerprints are identical causes a systematic skew in bit occurrence frequencies, the proportion at which specified bits are set. Observed frequencies (O) at which each bit is set within the resulting libraries systematically differ from frequencies at which bits are set at fingerprint generation (E). Observed frequencies systematically skew toward 0.5, with the effect being more pronounced as library size approaches the compound space, which is the total number of unique possible fingerprints given the number of bit positions each fingerprint contains. The effect is quantified for varying library sizes as a fraction of the overall compound space, and for changes in the specified frequency E. The cause and implications for this systematic skew are subsequently discussed. When generating random libraries of chemical fingerprints, the imposition of a uniqueness requirement should either be avoided or taken into account. PMID:27230477
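The skew is easy to reproduce with a small Monte Carlo sketch. The 8-bit fingerprints and library size below are illustrative toy parameters (real chemical fingerprints are far longer), chosen so that the library size approaches the compound space of 2^8 = 256:

```python
import random

def unique_library(n_bits, lib_size, p_set, seed=0):
    # Draw fingerprints whose bits are set independently with probability
    # p_set (the generation frequency E), rejecting duplicates until
    # lib_size unique fingerprints remain.
    rng = random.Random(seed)
    seen = set()
    while len(seen) < lib_size:
        fp = tuple(1 if rng.random() < p_set else 0 for _ in range(n_bits))
        seen.add(fp)
    return list(seen)

n_bits, p_set = 8, 0.2
lib = unique_library(n_bits, 200, p_set)    # 200 of the 256 possible fingerprints
observed = sum(sum(fp) for fp in lib) / (len(lib) * n_bits)
print(round(observed, 2))  # well above the generation frequency E = 0.2
```

Because filling 200 of the 256 possible fingerprints forces rare many-bit patterns into the library, the observed frequency O is pulled well above E toward 0.5, exactly the systematic skew the paper quantifies.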
2010-01-01
On Monday 18 October, a little bit of legal history will be made when the first international tripartite agreement between CERN and its two Host States is signed. This agreement, which has been under negotiation since 2004, clarifies the working conditions of people employed by companies contracted to CERN. It will facilitate the management of service contracts both for CERN and its contractors. Ever since 1965, when CERN first crossed the border into France, the rule of territoriality has applied. This means that anyone working for a company contracted to CERN whose job involves crossing the border is subject to the employment legislation of both states. The new agreement simplifies matters by making only one legislation apply per contract, that of the country in which most of the work is carried out. This is good for CERN, it’s good for the companies, and it’s good for their employees. It is something that all three parties to the agreement have wanted for some time, and I...
Ergodic and Outage Performance of Fading Broadcast Channels with 1-Bit Feedback
Niu, Bo; Somekh, Oren; Haimovich, Alexander M
2010-01-01
In this paper, the ergodic sum-rate and outage probability of a downlink single-antenna channel with K users are analyzed in the presence of Rayleigh flat fading, where limited channel state information (CSI) feedback is assumed. Specifically, only 1-bit feedback per fading block per user is available at the base station. We first study the ergodic sum-rate of the 1-bit feedback scheme, and consider the impact of feedback delay on the system. A closed-form expression for the achievable ergodic sum-rate is presented as a function of the fading temporal correlation coefficient. It is proved that the sum-rate scales as loglogK, which is the same scaling law achieved by the optimal non-delayed full CSI feedback scheme. The sum-rate degradation due to outdated CSI is also evaluated in the asymptotic regimes of either large K or low SNR. The outage performance of the 1-bit feedback scheme for both instantaneous and outdated feedback is then investigated. Expressions for the outage probabilities are derived, along w...
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity. PMID:24977550
Improving Soft FEC Performance for Higher-Order Modulations by Bit Mapper Optimization
Häger, Christian; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-01-01
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system c...
Institute of Scientific and Technical Information of China (English)
丁洋; 王宗民; 周亮; 王瑛; 刘福海
2012-01-01
Capacitor mismatches in a multi-bit sub-DAC introduce non-linear errors into the pipelined ADC output, greatly degrading the conversion resolution, and normal calibration techniques cannot correct non-linear errors. Based on this calibration requirement, a capacitor mismatch calibration for the multi-bit sub-DAC in a 16-bit pipelined ADC is presented in this paper. In this calibration, a post-production capacitor mismatch extraction method is designed. After the mismatch errors are obtained, the MDAC output errors for different inputs are calculated. From these errors, calibration codes are computed and stored on chip to correct the mismatch errors through a compensation circuit. Simulation results show that after calibration, the SINAD is 93.34 dB, the SFDR is 117.86 dB, and the ENOB increases from 12.63 bit to 15.26 bit.
Single-photon quantum error rejection and correction with linear optics
Kalamidas, Demetrios
2005-01-01
We present single-photon schemes for quantum error rejection and correction with linear optics. In stark contrast to other known proposals, our schemes do not require multi-photon entangled states, are not probabilistic, and their application is not restricted to single bit-flip errors.
Institute of Scientific and Technical Information of China (English)
刘冰; 高俊; 陶伟; 窦高奇
2011-01-01
A two-level unequal error protection scheme of nonbinary low-density parity-check (LDPC) codes for bandwidth-efficient coded modulation systems is proposed. The two unequal error protection levels originate from the codeword characteristics and the high-order constellation, making full use of the different variable node degrees and the unequal bit reliabilities of high-order modulation. No information transformation is needed when the nonbinary LDPC code and the high-order modulation are matched in the coded modulation system. The system can provide different reliabilities at the symbol and bit levels to meet the symbol error rate and bit error rate requirements of real-life system implementations. Simulation results show that 16QAM modulation is superior to 16PSK modulation over an AWGN channel. Distinct unequal error protection is achieved by exploiting the variable node degrees and the high-order modulation, so that the transmitted bits have different sensitivities to errors and the important bits are strongly protected.
A New Error Control Mechanism for Wireless ATM/AAL2
Institute of Scientific and Technical Information of China (English)
YIN Shouyi; WANG Jun; LIN Xiaokang
2003-01-01
Wireless ATM (asynchronous transfer mode) would be an ideal transport solution for the next generation wireless network. ATM would be used as the switching layer; AAL2 (ATM adaptation layer 2) would be employed to carry voice. In this article, a brief analysis of the AAL2 CPS (common part sublayer) packet loss rate over a wireless channel is presented. Because of the many burst errors over wireless links, the error-correcting capability of the original ATM/AAL2 HEC is easily overwhelmed, so the loss rate of ATM cells and CPS packets is very high. To reduce the CPS packet loss rate, a new ATM/AAL2 cell structure is proposed. Two new methods, ATM cell header bits dispersal (ACHBD) and CPS packet header concentrated protection (CPHCP), are used in this new structure. With ACHBD, all of the header bits are interleaved uniformly across the ATM cell, so any burst errors in transmission are spread out as isolated random errors over the whole cell. In the CPHCP mechanism, all the CPS packet headers in the same ATM cell are grouped together to be protected. Numerical results for this new structure and these two new error-correcting methods over a wireless channel show that the CPS packet loss rate is remarkably reduced. These mechanisms are an effective means of improving the reliability of the wireless ATM/AAL2 network.
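The ACHBD idea can be sketched as a uniform interleaver: a standard ATM cell is 53 bytes (424 bits) with a 5-byte (40-bit) header, so spreading the header bits evenly leaves at least 9 payload bits between consecutive header bits. The mapping below is an illustrative assumption, not the paper's exact dispersal pattern:

```python
def interleave_positions(n_header, n_cell):
    # Place the i-th header bit at position floor(i * n_cell / n_header),
    # spreading the header uniformly across the whole cell.
    return [i * n_cell // n_header for i in range(n_header)]

pos = interleave_positions(40, 424)   # 40 header bits in a 424-bit ATM cell
min_gap = min(b - a for a, b in zip(pos, pos[1:]))
print(min_gap)  # → 10: a burst shorter than 10 bits corrupts at most one header bit
```

With the header bits at least 10 positions apart, a short burst appears to the header protection as isolated single-bit errors rather than a multi-bit error in the header, which is exactly the dispersal effect described above.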
Bit-Interleaved Coded Multiple Beamforming with Perfect Coding
Li, Boyu
2010-01-01
When the channel state information is known by the transmitter as well as the receiver, beamforming techniques that employ Singular Value Decomposition (SVD) are commonly used in Multiple-Input Multiple-Output (MIMO) systems. Without channel coding, when a single symbol is transmitted, these systems achieve the full diversity order, whereas this property is lost when multiple symbols are simultaneously transmitted. Full diversity can be restored when channel coding is added, as long as the code rate Rc and the number of employed subchannels S satisfy the condition RcS <= 1. Moreover, by adding a proper constellation precoder, full diversity can be achieved for both uncoded and coded SVD systems, e.g., Fully Precoded Multiple Beamforming (FPMB) and Bit-Interleaved Coded Multiple Beamforming with Full Precoding (BICMB-FP). Perfect Space-Time Block Code (PSTBC) is a full-rate full-diversity space-time code, which achieves maximum coding gain for MIMO systems. Previously, Perfect Coded Multiple Beamforming (P...
Modern X86 assembly language programming 32-bit, 64-bit, SSE, and AVX
Kusswurm, Daniel
2014-01-01
Modern X86 Assembly Language Programming shows the fundamentals of x86 assembly language programming. It focuses on the aspects of the x86 instruction set that are most relevant to application software development. The book's structure and sample code are designed to help the reader quickly understand x86 assembly language programming and the computational capabilities of the x86 platform. Major topics of the book include the following: 32-bit core architecture, data types, internal registers, memory addressing modes, and the basic instruction set; x87 core architecture, register stack, special
Macuda, Jan
2012-11-01
In Poland all lignite mines are dewatered with the use of large-diameter wells. Drilling of such wells is inefficient owing to the presence of loose Quaternary and Tertiary material and the considerable dewatering of the rock mass within the open pit area. Difficult geological conditions significantly lengthen the time in which large-diameter dewatering wells are drilled, and various drilling complications and break-downs related to caving may occur. Higher drilling rates in large-diameter wells can be achieved only when new cutter bit designs are worked out and rock drillability tests are performed to find the optimum mechanical parameters of the drilling technology. Those tests were performed for a bit ø 1.16 m in separated, macroscopically homogeneous layers of similar drillability. Depending on the designed thickness of the drilled layer, measurement sections from 0.2 to 1.0 m long were determined, and each of the sections was drilled at constant rotary speed and weight-on-bit values. Prior to the drillability tests, accounting for the technical characteristics of the rig and the strength of the string and the cutter bit, limitations for the mechanical parameters of the drilling technology were established: P ∈ (Pmin; Pmax), n ∈ (nmin; nmax), where: Pmin, Pmax - lowest and highest values of weight on bit; nmin, nmax - lowest and highest values of rotary speed of the bit. For finding the dependence of the rate of penetration on weight on bit and rotary speed, various regression models have been analyzed. The most satisfactory results were obtained for the exponential model illustrating the influence of weight on bit and rotary speed on drilling rate. The regression coefficients and statistical parameters prove the good fit of the model to the measurement data, presented in tables 4-6. The average drilling rate for a cutter bit with profiled wings has been described in the form: Vśr = Z · P^a · n^b, where: Vśr - average drilling rate, Z - drillability coefficient, P
Yang, Chengen; Emre, Yunus; Cao, Yu; Chakrabarti, Chaitali
2012-12-01
Non-volatile resistive memories, such as phase-change RAM (PRAM) and spin transfer torque RAM (STT-RAM), have emerged as promising candidates because of their fast read access, high storage density, and very low standby power. Unfortunately, in scaled technologies, high storage density comes at a price of lower reliability. In this article, we first study in detail the causes of errors for PRAM and STT-RAM. We see that while for multi-level cell (MLC) PRAM, the errors are due to resistance drift, in STT-RAM they are due to process variations and variations in the device geometry. We develop error models to capture these effects and propose techniques based on tuning of circuit level parameters to mitigate some of these errors. Unfortunately for reliable memory operation, circuit-level techniques alone are not sufficient, and so we propose error control coding (ECC) techniques that can be used on top of circuit-level techniques. We show that for STT-RAM, a combination of voltage boosting and write pulse width adjustment at the circuit level followed by a BCH-based ECC scheme can reduce the block failure rate (BFR) to 10^-8. For MLC-PRAM, a combination of threshold resistance tuning and a BCH-based product code ECC scheme can achieve the same target BFR of 10^-8. The product code scheme is flexible; it allows migration to a stronger code to guarantee the same target BFR when the raw bit error rate increases with the number of programming cycles.
The Economics of BitCoin Price Formation
CIAIAN PAVEL; Rajcaniova, Miroslava; KANCS D'ARTIS
2015-01-01
This is the first article that studies BitCoin price formation by considering both the traditional determinants of currency price, e.g., market forces of supply and demand, and digital currencies specific factors, e.g., BitCoin attractiveness for investors and users. The conceptual framework is based on the Barro (1979) model, from which we derive testable hypotheses. Using daily data for five years (2009–2015) and applying time-series analytical mechanisms, we find that market forces and Bit...
1-Bit Compressive Data Gathering for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Jiping Xiong
2014-01-01
Full Text Available Compressive sensing (CS) has been widely used in wireless sensor networks in recent years to reduce the data gathering communication overhead. In this paper, we first apply 1-bit compressive sensing to wireless sensor networks to further reduce the communication overhead that each sensor incurs. Furthermore, we propose a novel blind 1-bit CS reconstruction algorithm which outperforms other state-of-the-art blind 1-bit CS reconstruction algorithms in the WSN setting. Experimental results on real sensor datasets demonstrate the efficiency of our method.
An Image Encryption Method Based on Bit Plane Hiding Technology
Institute of Scientific and Technical Information of China (English)
LIU Bin; LI Zhitang; TU Hao
2006-01-01
A novel image hiding method based on the correlation analysis of bit planes is described in this paper. Firstly, based on the correlation analysis, different bit planes of a secret image are hidden in different bit planes of several different open images. Then a new hiding image is acquired by a nested "Exclusive-OR" operation on the images obtained from the first step. At last, by employing image fusion techniques, the final hiding result is achieved. The experimental results show that the method proposed in this paper is effective.
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
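For the Onemax case the distribution can be written down directly: a string with fitness f (f ones out of n bits) loses X ~ Bin(f, p) ones and gains Y ~ Bin(n - f, p) ones under uniform bit-flip mutation, so the new fitness is f - X + Y. The sketch below (function names are illustrative) computes this convolution exactly; each resulting probability is visibly a polynomial in p, as the paper proves in general:

```python
from math import comb

def binom_pmf(k, n, p):
    # Binomial probability of k successes in n independent trials.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def onemax_mutation_pmf(n, f, p):
    # Exact distribution of Onemax fitness after uniform bit-flip mutation:
    # convolve the losses among the f ones with the gains among the n-f zeros.
    pmf = {}
    for x in range(f + 1):
        for y in range(n - f + 1):
            v = f - x + y
            pmf[v] = pmf.get(v, 0.0) + binom_pmf(x, f, p) * binom_pmf(y, n - f, p)
    return pmf

pmf = onemax_mutation_pmf(n=10, f=7, p=0.05)
mean = sum(v * pr for v, pr in pmf.items())
print(round(mean, 3))  # → 6.8, matching E[f - X + Y] = f + p * (n - 2 * f)
```

This closed form exists because Onemax fitness depends only on how many ones and zeros flip, not on which ones; for harder problems such as MAX-SAT the paper's landscape-theoretic machinery is needed.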
A Memristor as Multi-Bit Memory: Feasibility Analysis
Bass, O.; Fish, A.; Naveh, D.
2015-01-01
The use of emerging memristor materials for advanced electrical devices such as multi-valued logic is expected to outperform today's binary logic digital technologies. We show here an example of such a non-binary device with the design of a multi-bit memory. While conventional memory cells can store only 1 bit, memristor-based multi-bit cells can store more information within a single device, thus increasing the information storage density. Such devices can potentially utilize the non-linear res...
Optimization of reverse circulation bit based on field experiment
Institute of Scientific and Technical Information of China (English)
Hong REN; Kun YIN; Kun BO
2008-01-01
Based on a field experiment in the Sandaozhuang W-Mo mining area in Luanchuan, Henan Province, the authors analyzed the experimental results for reverse circulation bits with different structures and reached the following conclusion: the design parameters of the reverse circulation bit, namely the number, diameter and angle of the spurt holes, influence the reverse circulation effect. A bit with inner spurt holes performs clearly better in reverse circulation than one without, and the best choice of inner spurt holes is a diameter of Φ8, an angle of 30° dipping upward, and a suitable number of two to three.
Institute of Scientific and Technical Information of China (English)
韦璐; 吴凤雯; 李健萍; 梁令仪; 陈斯韵; 罗红英
2015-01-01
Objective: To improve the self-management awareness and problem-solving ability of pharmacy professionals, reduce the rate of injection allocation errors in the inpatient pharmacy, ensure drug safety, and improve the quality of pharmacy services through quality control circle (QCC) activities in the inpatient pharmacy. Methods: By implementing a quality control circle, the main causes of injection allocation errors were identified and relevant countermeasures taken, following the steps of the PDCA cycle. Results: After carrying out QCC activities in the inpatient pharmacy, the rate of injection allocation errors fell from 37.57% before the activity to 16.53%, the working environment was greatly improved, and the attitude of the pharmacists also improved. Conclusions: QCC activities achieve satisfactory results in error reduction and pharmacy management in the inpatient pharmacy, raising the accuracy of drug allocation by pharmacists and patients' satisfaction with the inpatient pharmacy. At the same time, they ensure safe medication for patients and harmonize the relationships between pharmacists, nurses, doctors and patients.
Barriers to medical error reporting
Directory of Open Access Journals (Sweden)
Jalal Poorolajal
2015-01-01
Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamadan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher among men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an MSc education (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.
Compressing molecular dynamics trajectories: breaking the one-bit-per-sample barrier
Huwald, Jan; Dittrich, Peter
2016-01-01
Molecular dynamics simulations yield large amounts of trajectory data. For their durable storage and accessibility an efficient compression algorithm is paramount. State of the art domain-specific algorithms combine quantization, Huffman encoding and occasionally domain knowledge. We propose the high resolution trajectory compression scheme (HRTC) that relies on piecewise linear functions to approximate quantized trajectories. By splitting the error budget between quantization and approximation, our approach beats the current state of the art by several orders of magnitude given the same error tolerance. It allows storing samples at far less than one bit per sample. It is simple and fast enough to be integrated into the inner simulation loop, store every time step, and become the primary representation of trajectory data.
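The core idea of HRTC — approximating a quantized trajectory with piecewise linear segments that stay inside an error budget — can be sketched with a greedy fit. This is an illustrative reconstruction, not the authors' implementation; the segment-growing rule and encoding cost model are assumptions:

```python
def compress_pwl(samples, tol):
    """Greedy piecewise-linear approximation: grow each segment for as
    long as linear interpolation between its endpoints stays within tol
    of every interior sample. Only segment endpoints need to be stored,
    so smooth trajectories cost far less than one value per sample."""
    segments = []
    i, n = 0, len(samples)
    while i < n - 1:
        j = i + 1
        while j + 1 < n:
            jj = j + 1
            ok = True
            for k in range(i + 1, jj):
                # value predicted by the chord from samples[i] to samples[jj]
                t = (k - i) / (jj - i)
                pred = samples[i] + t * (samples[jj] - samples[i])
                if abs(pred - samples[k]) > tol:
                    ok = False
                    break
            if not ok:
                break
            j = jj
        segments.append((i, samples[i], j, samples[j]))
        i = j
    return segments

def decompress_pwl(segments, n):
    """Rebuild the trajectory by linear interpolation over each segment."""
    out = [0.0] * n
    for (i, vi, j, vj) in segments:
        for k in range(i, j + 1):
            t = 0.0 if j == i else (k - i) / (j - i)
            out[k] = vi + t * (vj - vi)
    return out
```

By construction the reconstruction error never exceeds `tol`, mirroring how HRTC splits the total error budget between quantization and approximation.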
Compressing molecular dynamics trajectories: Breaking the one-bit-per-sample barrier.
Huwald, Jan; Richter, Stephan; Ibrahim, Bashar; Dittrich, Peter
2016-07-01
Molecular dynamics simulations yield large amounts of trajectory data. For their durable storage and accessibility an efficient compression algorithm is paramount. State of the art domain-specific algorithms combine quantization, Huffman encoding and occasionally domain knowledge. We propose the high resolution trajectory compression scheme (HRTC) that relies on piecewise linear functions to approximate quantized trajectories. By splitting the error budget between quantization and approximation, our approach beats the current state of the art by several orders of magnitude given the same error tolerance. It allows storing samples at far less than one bit per sample. It is simple and fast enough to be integrated into the inner simulation loop, store every time step, and become the primary representation of trajectory data. © 2016 Wiley Periodicals, Inc. PMID:27191931
Characterization of modems and error correcting protocols using a scintillation playback system
Rabinovich, William S.; Mahon, Rita; Ferraro, Mike S.; Murphy, James L.; Moore, Christopher I.
2016-03-01
The performance of free space optical (FSO) communication systems is strongly affected by optical scintillation. Scintillation fades can cause errors when the power on a detector falls below its noise floor, while surges can overload a detector. Because scintillation evolves on a far longer time scale than a typical bit in an FSO link, error-correcting protocols designed for fiber optic links are inappropriate for FSO links. Comparing the performance effects of different components, such as photodetectors, or protocols, such as forward error correction, in the field is difficult because conditions are constantly changing. On the other hand, laboratory-based turbulence simulators, often using hot plates and fans, do not really simulate the effects of long-range propagation through the atmosphere. We have investigated a different approach. Scintillation was measured during field tests by sending a continuous wave beam through the atmosphere between FSO terminals. The output of a high dynamic range photodetector was digitized at a 10 kHz rate and files of the intensity variations were saved. Many hours of scintillation data, recorded under different environmental conditions and at different sites, were combined into a library. A fiber-optic based scintillation playback system was then used in the laboratory to test modems and protocols against the recorded files. This allowed components to be compared under identical atmospheric conditions, enabling optimization of parameters such as detector dynamic range, as well as comparison and optimization of different error-correcting protocols.
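The key statistic driving the abstract's argument — that fades last many bit periods — can be extracted from a recorded intensity trace with a few lines. This is a generic illustration of fade statistics, not the authors' playback system; the trace, threshold, and sample rate are hypothetical:

```python
def fade_statistics(intensity, threshold):
    """Given a recorded scintillation trace, return the fraction of
    samples in a fade (below the detector threshold) and the longest
    fade duration in samples. At a 10 kHz sample rate, even a single
    fade sample spans thousands of bit periods on a Gb/s link, which is
    why fiber-style FEC with short interleavers fails on FSO channels."""
    fades = [x < threshold for x in intensity]
    frac = sum(fades) / len(fades)
    longest = run = 0
    for f in fades:
        run = run + 1 if f else 0
        longest = max(longest, run)
    return frac, longest
```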
Error correction for encoded quantum annealing
Pastawski, Fernando; Preskill, John
2016-05-01
Recently, W. Lechner, P. Hauke, and P. Zoller [Sci. Adv. 1, e1500838 (2015), 10.1126/sciadv.1500838] have proposed a quantum annealing architecture, in which a classical spin glass with all-to-all pairwise connectivity is simulated by a spin glass with geometrically local interactions. We interpret this architecture as a classical error-correcting code, which is highly robust against weakly correlated bit-flip noise, and we analyze the code's performance using a belief-propagation decoding algorithm. Our observations may also apply to more general encoding schemes and noise models.
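The notion of robustness against weakly correlated bit-flip noise can be illustrated with the simplest classical bit-flip code, a repetition code with majority-vote decoding. This is a generic textbook illustration, not the Lechner-Hauke-Zoller parity encoding or the belief-propagation decoder analyzed in the paper:

```python
def encode_repetition(bit, n=3):
    """Encode one logical bit into n physical bits by repetition."""
    return [bit] * n

def decode_majority(codeword):
    """Majority-vote decoding: corrects up to floor((n-1)/2) independent
    bit flips, so the logical error rate falls rapidly with n when the
    physical flip probability is small."""
    return 1 if sum(codeword) * 2 > len(codeword) else 0
```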
Yang, Liang
2014-12-01
In this study, we consider a relay-assisted free-space optical communication scheme over strong atmospheric turbulence channels with misalignment-induced pointing errors. The links from the source to the destination are assumed to be all-optical links. Assuming a variable gain relay with amplify-and-forward protocol, the electrical signal at the source is forwarded to the destination with the help of this relay through all-optical links. More specifically, we first present a cumulative distribution function (CDF) analysis for the end-to-end signal-to-noise ratio. Based on this CDF, the outage probability, bit-error rate, and average capacity of our proposed system are derived. Results show that the system diversity order is related to the minimum value of the channel parameters.
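The step from a CDF analysis to an outage probability is direct: the outage probability is the end-to-end SNR CDF evaluated at the threshold SNR, P_out = F_γ(γ_th). The sketch below uses an exponentially distributed SNR (Rayleigh fading) as a stand-in CDF for illustration only; it is not the turbulence-plus-pointing-error distribution derived in the paper:

```python
import math

def outage_probability(cdf, gamma_th):
    """Outage probability is the end-to-end SNR CDF evaluated at the
    threshold SNR: P_out = F_gamma(gamma_th)."""
    return cdf(gamma_th)

def exp_snr_cdf(gamma, gamma_bar=10.0):
    """Stand-in CDF: exponentially distributed instantaneous SNR with
    mean gamma_bar (i.e., Rayleigh fading), used only to illustrate the
    CDF -> outage mapping."""
    return 1.0 - math.exp(-gamma / gamma_bar)
```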
Quantum Steganography and Quantum Error-Correction
Shaw, Bilal A
2010-01-01
In the current thesis we first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit e...
Turcotte, Randy L.; Wickert, Mark A.
An exact expression is found for the probability of bit error of an FHSS-BFSK (frequency-hopping spread-spectrum/binary-frequency-shift-keying) multiple-access system in the presence of slow, nonselective, 'single-term' Rician fading. The effects of multiple-access interference and/or continuous tone jamming are considered. Comparisons are made between the error expressions developed here and previously published upper bounds. It is found that under certain channel conditions the upper bounds on the probability of bit error may exceed the actual probability of error by an order of magnitude.
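For context on the magnitudes involved, the standard baseline is the bit-error probability of noncoherent binary FSK over a plain AWGN channel, P_b = (1/2)·exp(-E_b/2N_0). The paper's exact expression for the fading, multiple-access case is considerably more involved; this sketch shows only the well-known AWGN baseline:

```python
import math

def bfsk_noncoherent_ber(eb_n0):
    """Bit-error probability of noncoherently detected binary FSK over
    AWGN: Pb = 0.5 * exp(-(Eb/N0)/2), with eb_n0 given as a linear
    (not dB) ratio. Fading and multiple-access interference, as treated
    in the paper, only degrade this baseline."""
    return 0.5 * math.exp(-eb_n0 / 2.0)
```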
Directory of Open Access Journals (Sweden)
Muhammad Audy Bazly
2015-12-01
Full Text Available This paper analyzes an Internet-based streaming video service with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH), which runs over the Hypertext Transfer Protocol (HTTP) on the Internet. DASH technology allows a video to be segmented into several packages that are then streamed. In the initial DASH stage, the source video is compressed with the H.264 codec to lower the bit rate. The compressed video is then segmented with MP4Box, which generates streaming packets of a specified duration. These packets are described in a Media Presentation Description (MPD) manifest, a format known as MPEG-DASH. The MPEG-DASH video stream runs on a platform with the integrated bitdash player. With this scheme, the video has several bit-rate variants, which gives rise to scalable streaming video service on the client side. The main goal of the mechanism is smooth MPEG-DASH video display at the client. The simulation results show that the scalable MPEG-DASH streaming scheme improves display quality on the client side, where video buffering can be kept constant and smooth for the duration of playback.
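The client-side scalability described above comes down to a rate-adaptation rule: among the bit-rate variants advertised in the MPD, pick the highest one the measured throughput can sustain. The rule below is a minimal sketch of such a heuristic; the representation list, safety margin, and fallback behavior are assumptions, not the paper's algorithm:

```python
def select_bitrate(representations, throughput_bps, safety=0.8):
    """Pick the highest-bitrate DASH representation whose bandwidth fits
    within a safety margin of the measured throughput; fall back to the
    lowest representation when none fits, to keep the playback buffer
    from draining."""
    feasible = [r for r in representations if r <= safety * throughput_bps]
    return max(feasible) if feasible else min(representations)
```

Running this per segment download is what lets the player hold buffering constant while display quality tracks the available bandwidth.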