Measurements of Aperture Averaging on Bit-Error-Rate
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Mutual information, bit error rate and security in Wójcik's scheme
Zhang, Z
2004-01-01
In this paper, correct calculations of the mutual information of the whole transmission and the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions concerning the mutual information, the QBER, and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] are pointed out and corrected.
Study of bit error rate (BER) for multicarrier OFDM
Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad
2012-10-01
Orthogonal frequency division multiplexing (OFDM) is a multicarrier technique increasingly used in wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient spectral usage, and its high data rate, and it has therefore been adopted in many wired and wireless communication systems such as DSL, wireless networks, and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because the low data rate on each subcarrier results in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system to obtain the BER over different wireless channels. These channels include additive white Gaussian noise (AWGN) and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying key parameters of the system such as the IFFT size, the number of carriers, and the SNR. The simulation results give a visualization of the BER to expect when the signal goes through those channels.
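The kind of BER-versus-SNR evaluation described in this abstract can be sketched in a few lines. The following is a minimal illustration (not the authors' MATLAB code), assuming a BPSK-OFDM link over AWGN with a unitary IFFT/FFT pair, where the measured BER should track the closed-form BPSK curve 0.5·erfc(√(Eb/N0)):

```python
import numpy as np
from math import erfc, sqrt

RNG = np.random.default_rng(1)

def ofdm_bpsk_ber(ebno_db, n_carriers=64, n_symbols=3000):
    """Measured BER of BPSK-OFDM over AWGN. The unitary IFFT/FFT pair keeps
    per-bit energy at 1; no cyclic prefix is needed since AWGN adds no
    delay spread."""
    bits = RNG.integers(0, 2, (n_symbols, n_carriers))
    tx_time = np.fft.ifft(1.0 - 2.0 * bits, norm="ortho")    # BPSK map, OFDM modulate
    sigma = np.sqrt(1.0 / (2.0 * 10.0 ** (ebno_db / 10.0)))  # per-dimension noise std
    noise = sigma * (RNG.standard_normal(tx_time.shape)
                     + 1j * RNG.standard_normal(tx_time.shape))
    rx_freq = np.fft.fft(tx_time + noise, norm="ortho")      # OFDM demodulate
    return float(np.mean((rx_freq.real < 0) != bits))

def bpsk_ber_theory(ebno_db):
    """Closed-form BPSK BER over AWGN: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(10.0 ** (ebno_db / 10.0)))
```

Running both functions over a range of Eb/N0 values reproduces the familiar waterfall curve; fading channels would be modeled by multiplying each subcarrier by a random channel gain before adding noise.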
Analytical expression for the bit error rate of cascaded all-optical regenerators
DEFF Research Database (Denmark)
Mørk, Jesper; Öhman, Filip; Bischoff, S.
2003-01-01
We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.
Novel relations between the ergodic capacity and the average bit error rate
Yilmaz, Ferkan
2011-11-01
Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems, and recent research has revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between the two indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, K S; Ranade, Kedar S.; Alber, Gernot
2005-01-01
We discuss the general conditions that quantum state purification protocols have to fulfill in order to purify Bell-diagonal qubit-pair states, provided they consist of steps that map Bell-diagonal states to Bell-diagonal states and finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As a main result, a necessary condition and a sufficient condition for asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of the maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
A minimum bit error-rate detector for amplify and forward relaying systems
Ahmed, Qasim Zeeshan
2012-05-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.
Ahmed, Qasim Zeeshan
2014-04-01
The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network, and it has led to new challenges in designing protocols and detectors for cooperative communications. Among the various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity, and an optimal precoder is required in order to exploit the full diversity of the system. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER) and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of existing linear detectors such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.
Threshold based Bit Error Rate Optimization in Four Wave Mixing Optical WDM Systems
Directory of Open Access Journals (Sweden)
Er. Karamjeet Kaur
2016-07-01
Optical communication is communication at a distance using light to carry information, performed visually or by using electronic devices. The trend toward higher bit rates in lightwave communication has increased interest in dispersion-shifted fibre to reduce dispersion penalties; at the same time, optical amplifiers have increased interest in wavelength multiplexing. This paper describes optical communication systems and discusses different optical multiplexing schemes. The effect of channel power depletion due to the generation of four wave mixing (FWM) waves and the effect of FWM crosstalk on the performance of a WDM receiver are studied. The main focus is minimizing the bit error rate to increase the QoS of the optical WDM system.
Bit error rate testing of fiber optic data links for MMIC-based phased array antennas
Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.
1990-06-01
The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
Alheadary, Wael G.
2016-12-24
In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors, based on intensity modulation/direct detection (IM/DD) and heterodyne detection, over the general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.
CHANNEL CAPACITY AND BIT ERROR RATE OF D-MIMO SYSTEMS UNDER SPATIAL VARIATION OF THE COVERAGE AREA
Directory of Open Access Journals (Sweden)
Nyoman Gunantara
2009-05-01
With advances in communication technology, the D-MIMO (distributed MIMO) system has been developed, following the earlier C-MIMO (conventional co-located MIMO) system. C-MIMO makes spectrum usage efficient, reduces transmit power, and increases channel capacity. With D-MIMO, the distance between transmitter and receiver can be shortened, macrodiversity is obtained, and the service coverage area is extended. This paper investigates channel capacity and bit error rate (BER) under spatial variation of the coverage area. The study considers theoretical channel capacity and BER with the waterfilling technique. The channel capacity and BER performance of a D-MIMO system under spatial variation of the coverage area depend on the D-MIMO configuration. Receiver locations near a transmit antenna port have larger channel capacity but worse BER performance.
Threshold-Based Bit Error Rate for Stopping Iterative Turbo Decoding in a Varying SNR Environment
Mohamad, Roslina; Harun, Harlisya; Mokhtar, Makhfudzah; Adnan, Wan Azizun Wan; Dimyati, Kaharudin
2017-01-01
Online bit error rate (BER) estimation (OBE) has been used as a stopping criterion for iterative turbo decoding. However, existing stopping criteria only work at high signal-to-noise ratios (SNRs) and fail to terminate early at low SNRs, which adds iterations and increases computational complexity. The failure of these stopping criteria is caused by an unsuitable BER threshold, obtained by estimating the expected BER performance at high SNRs; such a threshold does not indicate the correct termination according to convergence and non-convergence outputs (CNCO). Hence, in this paper, a threshold computation based on the BER of the CNCO is proposed for an OBE stopping criterion (OBEsc). The results show that the OBEsc is capable of early termination in a varying SNR environment. The optimum number of iterations achieved by the OBEsc allows large savings in the number of decoding iterations and reduces the delay of iterative turbo decoding.
Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels
Directory of Open Access Journals (Sweden)
Li Zexian
2004-01-01
Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral whose integrand is composed of tabulated functions and can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
SITE project. Phase 1: Continuous data bit-error-rate testing
Fujikawa, Gene; Kerczewski, Robert J.
1992-01-01
The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.
IMPROVING THE PERFORMANCE AND REDUCING BIT ERROR RATE ON WIRELESS DEEP FADING ENVIRONMENT RECEIVERS
Directory of Open Access Journals (Sweden)
K. Jayanthi
2014-01-01
One of the major challenges in wireless communication systems is the increasing complexity, and reduced performance, of detecting the received digital information in indoor and outdoor environments. To overcome this problem, we analyze the delay performance of a multiuser system with perfect channel state information transmitting data in a deep fading environment. In the proposed system, a Wireless Deep Fading Environment (WDFE) realizing a Nakagami multipath fading channel with fading figure 'm' is used to rectify the delay performance relative to the existing Rayleigh fading channel. In this WDFE, receivers obtain coherent, synchronized, secure, and improved-strength signals using Multiuser Coherent Joint Diversity (MCJD) with Multi-Carrier Code Division Multiple Access (MC-CDMA). The MCJD with 'M' antenna branches is used to reduce the bit error rate (BER), and the MC-CDMA method is used to improve performance. The combination of MCJD and MC-CDMA therefore makes a very good transceiver for next-generation wireless systems beyond the existing 3G systems. Overall, the experimental results show improved performance in different multiuser wireless systems under different multipath fading conditions.
Masud, M A; Rahman, M A
2010-01-01
At the beginning of the 21st century there was a dramatic shift in the market dynamics of telecommunication services. Transmission from base station to mobile, or downlink transmission, using M-ary quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK) modulation schemes is considered in a wideband code division multiple access (W-CDMA) system. We analyze the performance of these modulation techniques when the system is subjected to additive white Gaussian noise (AWGN) and multipath Rayleigh fading in the channel. The research was performed using MATLAB 7.6 for simulation and evaluation of the bit error rate (BER) and signal-to-noise ratio (SNR) for the W-CDMA system models. The analysis of QPSK and 16-ary QAM, which are used in W-CDMA systems, shows that the system could adopt the modulation technique best suited to the channel quality, thus we can d...
Directory of Open Access Journals (Sweden)
Claude D'Amours
2011-01-01
We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links and demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10 log10(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and it provides insight into why the different techniques provide improvement over conventional MIMO-CDMA systems.
Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications
Shalkhauser, Kurt A.
1987-01-01
Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.
Liang, Bin; Gunawan, Erry; Law, Choi Look; Teh, Kah Chan
Analytical expressions based on the Gauss-Chebyshev quadrature (GCQ) rule are derived to evaluate the bit error rate (BER) of time-hopping pulse position modulation (TH-PPM) ultra-wideband (UWB) systems under a Nakagami-m fading channel. The analyses are validated by simulation results and used to assess the accuracy of the commonly used Gaussian approximation (GA) method. The influence of fading severity on the BER performance of the TH-PPM UWB system is investigated.
Ahmed, Qasim Zeeshan
2013-01-01
In this letter, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed; this new kernel provides more flexibility and encompasses the Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
Cox, Christina B.; Coney, Thom A.
1999-01-01
The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index terms: adaptive coding, attenuation, propagation, rain, satellite communication, satellites.
Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C
2016-06-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.
Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link
Directory of Open Access Journals (Sweden)
Matteo Berioli
2007-05-01
The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different points of this trade-off, and the performance that can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
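The Markov-chain modeling idea can be sketched in a few lines. In the toy example below, the three ModCod states and all transition probabilities are hypothetical placeholders (not values from the paper); the stationary distribution of the chain then gives the long-run fraction of time the link spends in each ModCod, from which average spectral efficiency or error-burst statistics could be derived:

```python
import numpy as np

# Hypothetical 3-state ModCod chain (e.g. QPSK 1/2 <-> 8PSK 2/3 <-> 16APSK 3/4).
# The transition probabilities are illustrative, not measured DVB-S2 values.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

def stationary(P, tol=1e-12):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt

# Long-run fraction of time spent in each ModCod state
pi = stationary(P)
```

For this placeholder matrix the chain settles to π = (0.25, 0.5, 0.25); in a real DVB-S2 study the transition probabilities would be estimated from SNR time series, as the paper does.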
Krishnan, Prabu; Sriram Kumar, D.
2014-12-01
Free-space optical (FSO) communication is emerging as an attractive alternative for overcoming connectivity problems. It can be used for transmitting signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing error is considered for the BER analysis. System models are developed for single-input single-output FSO (SISO-FSO) and single-input multiple-output FSO (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of the Meijer G-function.
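A rough numerical cross-check for turbulence-averaged analyses of this kind is to Monte Carlo average the DPSK conditional BER 0.5·e^(−γ) over a fading distribution. The lognormal irradiance model below is a stand-in weak-turbulence assumption, not the paper's distributed strong-turbulence channel or its Meijer-G expressions:

```python
import numpy as np

def dpsk_ber_lognormal(gamma_bar_db, sigma_x=0.3, n=400_000, seed=3):
    """Monte Carlo average of the DPSK conditional BER 0.5*exp(-gamma) over
    lognormal irradiance I = exp(2X), X ~ N(-sigma_x**2, sigma_x**2),
    normalized so that E[I] = 1 (mean SNR is preserved)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(-sigma_x ** 2, sigma_x, n)                # log-amplitude samples
    gamma = 10.0 ** (gamma_bar_db / 10.0) * np.exp(2.0 * x)  # instantaneous SNR
    return float(np.mean(0.5 * np.exp(-gamma)))
```

By Jensen's inequality the fading-averaged BER always exceeds the no-fading value 0.5·e^(−γ̄), which is the qualitative penalty such closed-form analyses quantify exactly.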
Li, Mi; Li, Bowen; Zhang, Xuping; Song, Yuejiang; Liu, Jia; Tu, Guojie
2015-08-01
Space optical communication is attracting increasing attention because it offers advantages such as high security and better communication quality compared with microwave communication. Communication at data rates of Gb/s has already been achieved, and next-generation space optical systems target a higher data rate of 40 Gb/s, which traditional optical communication systems cannot support. This paper introduces a ground optical communication system with a 40 Gb/s data rate as a step toward space optical communication at high data rates. At 40 Gb/s, a waveguide modulator must be applied to modulate the optical signal, which is then amplified by a laser amplifier; moreover, a more sensitive avalanche photodiode (APD) serves as the detector to increase communication quality. Based on this system, we analyze the communication quality in the downlink of a space optical communication system at a data rate of 40 Gb/s. The bit error rate (BER) performance, an important measure of communication quality, is discussed versus several parameter ratios. The results show that there exists an optimum ratio of gain factor to divergence angle that gives the best BER performance, and that the ratio of receiving diameter to divergence angle can be increased for better communication quality. These results help in understanding the behavior of optical communication systems at high data rates and contribute to system design.
Directory of Open Access Journals (Sweden)
James Osuru Mark
2011-01-01
The multicarrier code division multiple access (MC-CDMA) system has received considerable attention from researchers owing to its great potential for achieving high-data-rate transmission in wireless communications. The performance of the system degrades due to the detrimental effects of multipath fading; similarly, non-orthogonality of the spreading codes can cause interference. This paper addresses the performance of an MC-CDMA system under the influence of a frequency-selective generalized η-µ fading channel and multiple access interference caused by other active users. We apply the Gaussian approximation technique to analyse the performance of the system. The average bit error rate is derived and expressed in Gauss hypergeometric functions. Maximal ratio combining diversity is utilized to alleviate the deleterious effect of multipath fading. We observe that the system performance improves when the parameter η increases or decreases under format 1 or format 2 conditions, respectively.
Li, Jia Wen; Chen, Xi Mei; Pun, Sio Hang; Mak, Peng Un; Gao, Yue Ming; Vai, Mang I; Du, Min
2013-01-01
Bit error rate (BER), which indicates the reliability of a communication channel, is one of the most important figures of merit in any communication system, including intra-body communication (IBC). In order to learn more about the IBC channel, this paper presents a new method of BER estimation for galvanic-type IBC using experimental eye diagrams and jitter characteristics. To lay the foundation for our methodology, the fundamental relationships between eye diagram, jitter, and BER are first reviewed. Experiments based on human lower-arm IBC are then carried out using a quadrature phase shift keying (QPSK) modulation scheme and a 500 kHz carrier frequency. In our IBC experiments, the symbol rate ranges from 10 ksps to 100 ksps, with two transmitted power settings, 0 dBm and -5 dBm. Finally, BER results were obtained from the experimental data through the relationships among eye diagram, jitter, and BER. These results are compared with theoretical values and show good agreement, especially when the SNR is between 6 dB and 11 dB. Additionally, these results demonstrate that modeling the noise of the galvanic-type IBC channel as additive white Gaussian noise (AWGN), as assumed in previous studies, is applicable.
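A standard way to link eye-diagram measurements to BER, of the kind this abstract relies on, is the Gaussian Q-factor: fit the sampled '1' and '0' rails of the eye with Gaussians and map Q = (μ₁ − μ₀)/(σ₁ + σ₀) to BER ≈ 0.5·erfc(Q/√2). The sketch below uses synthetic rail samples in place of measured data (the levels, noise std, and sample counts are illustrative assumptions):

```python
import numpy as np
from math import erfc, sqrt

def q_factor_ber(ones, zeros):
    """Estimate BER from eye-diagram rail samples via the Gaussian Q-factor:
    Q = (mu1 - mu0) / (sigma1 + sigma0),  BER ~ 0.5 * erfc(Q / sqrt(2))."""
    mu1, mu0 = float(np.mean(ones)), float(np.mean(zeros))
    s1, s0 = float(np.std(ones)), float(np.std(zeros))
    q = (mu1 - mu0) / (s1 + s0)
    return q, 0.5 * erfc(q / sqrt(2))

# Synthetic rail samples standing in for measured eye-diagram data
rng = np.random.default_rng(7)
ones = 1.0 + 0.1 * rng.standard_normal(50_000)    # samples at the '1' level
zeros = 0.0 + 0.1 * rng.standard_normal(50_000)   # samples at the '0' level
q, ber = q_factor_ber(ones, zeros)
```

The appeal of this indirect method, here as in the paper, is that BERs far below what could be counted directly in a short measurement (e.g. ~1e-7 at Q ≈ 5) can be extrapolated from eye statistics; timing jitter enters the same way through the horizontal eye closure.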
Bit error rate analysis of Wi-Fi and Bluetooth under the interference of 2.45 GHz RFID
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
IEEE 802.11b WLAN (Wi-Fi) and IEEE 802.15.1 WPAN (Bluetooth) are prevalent nowadays, and radio frequency identification (RFID) is an emerging technology with ever wider applications. 802.11b occupies the unlicensed industrial, scientific and medical (ISM) band (2.4-2.4835 GHz) and uses direct sequence spread spectrum (DSSS) to alleviate narrowband interference and fading. Bluetooth is also a user of the ISM band and adopts frequency hopping spread spectrum (FHSS) to avoid mutual interference. RFID can operate on multiple frequency bands, such as 135 kHz, 13.56 MHz, and 2.45 GHz. When a 2.45 GHz RFID device, which uses FHSS, is collocated with 802.11b or Bluetooth, mutual interference is inevitable. Although DSSS and FHSS are applied to mitigate the interference, the performance degradation may be very significant. Therefore, in this article, the impact of 2.45 GHz RFID on 802.11b and Bluetooth is investigated. The bit error rates (BER) of 802.11b and Bluetooth are analyzed by establishing a mathematical model, and the simulation results are compared with the theoretical analysis to justify this model.
Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo
2016-01-01
We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied to other quantum information processing tasks.
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
Kory, Carol L.
2001-01-01
prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.
Bit error rate analysis of X-ray communication system
Institute of Scientific and Technical Information of China (English)
王律强; 苏桐; 赵宝升; 盛立志; 刘永安; 刘舵
2015-01-01
X-ray communication, first proposed by Keith Gendreau in 2007, has the potential to compete with conventional communication methods, such as microwave and laser communication, in space environments. As a result, a great deal of time and effort has been devoted in recent years to turning the initial idea into reality. An X-ray communication demonstration system based on a grid-controlled X-ray source and a microchannel plate detector can now deliver both audio and video information over a 6-meter vacuum tunnel. The question is how to evaluate this space X-ray demonstration system experimentally. The method is to design a dedicated board to measure the relationship between bit error rate and emitting power at various communication distances, and to compare the data with calculation and simulation results to assess the underlying theoretical model. The concept of using X-rays as signal carriers is confirmed by our first-generation X-ray communication demonstration system, which uses a grid-controlled emission source as transmitter and a photon-counting detector, an important direction for future deep-space X-ray communication applications. As the key specification of any communication system, the bit-error-rate level must be characterized first; a theoretical analysis using a Poisson noise model has also been carried out to support this novel communication concept. Previous experimental results indicated that the X-ray audio demonstration system achieves a 10^-4 bit-error-rate level at a 25 kbps communication rate. The system bit error rate with on-off keying (OOK) modulation is calculated and measured, and agrees well with the theoretical calculation. Another point to be considered is the emitting energy, which is the main restriction of the current X-ray communication system. The designed…
Munshi Mahbubur Rahman; Satya Prasad Majumder
2015-01-01
An analytical approach is presented to evaluate the bit error rate (BER) performance of a power line (PL) communication system considering the combined influence of impulsive noise and background PL Gaussian noise. The Middleton Class-A noise model is considered to evaluate the effect of impulsive noise. The analysis is carried out to find the expression of the signal-to-noise ratio and BER considering orthogonal frequency division multiplexing (OFDM) with binary phase shift keying modulation with...
DEFF Research Database (Denmark)
Gibbon, Timothy Braidwood; Yu, Xianbin; Tafur Monroy, Idelfonso
2009-01-01
We propose the novel generation of photonic ultra-wideband signals using an uncooled DFB laser. For the first time we experimentally demonstrate bit-for-bit DSP BER measurements for transmission of a 781.25 Mbit/s photonic UWB signal.
DEFF Research Database (Denmark)
Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso
2010-01-01
We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature biased intensity modulation (IM), in terms of bit error rate (BER) and optical signal-to-noise ratio (OSNR). In both links, self-heterodyne receivers perform down-conversion of the radio frequency (RF) subcarrier signal. A theoretical model including noise analysis is constructed to calculate the Q factor and estimate the BER performance. Furthermore, we experimentally validate our prediction in the theoretical modeling. Both the experimental...
Kikuchi, Kazuro
2012-02-27
We develop a systematic method for characterizing semiconductor-laser phase noise using a low-speed offline digital coherent receiver. The field spectrum, the FM-noise spectrum, and the phase-error variance measured with such a receiver can completely describe the phase-noise characteristics of lasers under test. The sampling rate of the digital coherent receiver should be much higher than the phase-fluctuation speed; however, 1 GS/s is sufficient for most single-mode semiconductor lasers. In addition to such phase-noise characterization, by interpolating data taken at 1.25 GS/s to form a data stream at 10 GS/s, we can predict the bit-error rate (BER) performance of multi-level modulated optical signals at 10 Gsymbol/s. The BER degradation due to phase noise is well explained by the results of the phase-noise measurements.
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error count or the write/erase cycles, the ECC codeword is dynamically increased from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte ... 32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC, so the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND flash memory that requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile-phone application without interleaving. For MP3 player, digital-still-camera and high-speed memory-card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required; the reliability of the SSD is improved after manufacturing without cost penalty. Compared with a conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation: during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte or 2 KByte is used, realizing 98% lower power consumption, while at the end of life a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed: the latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
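The trade-off this scheme exploits, namely that a longer codeword with proportionally more correctable errors tolerates a higher raw BER at the same post-ECC failure rate, follows from the binomial tail of a t-error-correcting code. A sketch under the usual independent-bit-error assumption (parameter values illustrative, not the paper's NAND measurements):

```python
from math import comb

def codeword_failure_prob(n_bits, t_correctable, raw_ber):
    """Probability that more than t errors land in one n-bit codeword,
    i.e. the ECC fails, assuming independent bit errors."""
    p_ok = sum(comb(n_bits, k) * raw_ber**k * (1 - raw_ber)**(n_bits - k)
               for k in range(t_correctable + 1))
    return max(0.0, 1.0 - p_ok)

# Same raw BER, proportionally scaled correction capability: the error
# count concentrates around its mean in the longer codeword, so the
# probability of exceeding the correction limit drops sharply.
short = codeword_failure_prob(n_bits=512 * 8, t_correctable=8, raw_ber=1e-3)
long_ = codeword_failure_prob(n_bits=4096 * 8, t_correctable=64, raw_ber=1e-3)
print(short, long_)
```

Inverting this relation (fixing the acceptable failure probability and solving for raw_ber) gives the "acceptable raw BER" figures the abstract quotes for each codeword size.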
Directory of Open Access Journals (Sweden)
Sulyman Ahmed Iyanda
2005-01-01
Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of the compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines the best M branches out of the L available diversity resources (M ≤ L). In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
Directory of Open Access Journals (Sweden)
Ibrahim A.Z. Qatawneh
2005-01-01
Full Text Available Digital communications systems use multitone channel (MC) transmission techniques with differentially encoded and differentially coherent demodulation. Today there are two principal MC applications: high-speed digital subscriber loops and the broadcasting of digital audio and video signals. In this study, multicarrier systems with OQPSK and offset 16 QAM for high-bit-rate wireless applications are compared. The bit error rate (BER) performance of MC with offset quadrature amplitude modulation (offset 16 QAM) and offset quadrature phase shift keying (OQPSK) with a guard interval in a fading environment is compared via Monte Carlo simulation. BER results are presented for offset 16 QAM using a guard interval to mitigate multipath delay for frequency Rayleigh fading channels and for two-path fading channels in the presence of additive white Gaussian noise (AWGN). BER results are also presented for MC with differentially encoded offset 16 QAM and MC with differentially encoded OQPSK using a guard interval for a frequency-flat Rician channel in the presence of AWGN. The performance of multitone systems is also compared with equivalent differentially encoded offset 16 QAM and differentially encoded OQPSK with and without a guard interval in the same fading environment.
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression is derived for the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak-to-moderate oceanic turbulence; the bit error rate (BER) is then evaluated using the log-normal intensity probability density function. The effects of source factors (wavelength, order of flatness, and beam width) and turbulent-ocean parameters (Kolmogorov microscale, relative strength of temperature and salinity fluctuations, rate of dissipation of the mean-squared temperature, and rate of dissipation of turbulent kinetic energy per unit mass of fluid) on the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness has lower scintillation. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
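The BER-evaluation step described, averaging a conditional error rate over a log-normal intensity PDF whose variance comes from the scintillation index, can be sketched with Gauss-Hermite quadrature. Here the conditional BER is taken as the common OOK form Q(√SNR·I); all modeling choices are illustrative, not the paper's PCFT-beam expressions:

```python
import math
import numpy as np

def ber_lognormal(mean_snr_db, scint_index, n_quad=60):
    """Average BER over log-normal intensity fading.

    sigma^2 = ln(1 + SI) is the log-intensity variance; the intensity
    is normalized so that E[I] = 1.
    """
    snr = 10 ** (mean_snr_db / 10)
    sigma2 = math.log(1.0 + scint_index)
    # Gauss-Hermite nodes/weights for the Gaussian-exponent integral
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    intensity = np.exp(math.sqrt(2.0 * sigma2) * x - sigma2 / 2.0)
    cond = np.array([0.5 * math.erfc(math.sqrt(snr) * i / math.sqrt(2.0))
                     for i in intensity])
    return float(np.sum(w * cond) / math.sqrt(math.pi))

# Stronger scintillation (e.g. salinity-dominated turbulence) raises the
# average BER at the same mean SNR.
print(ber_lognormal(10, 0.05), ber_lognormal(10, 0.5))
```

The deep fades in the low-intensity tail of the log-normal PDF dominate the average, which is why even a modest scintillation index costs orders of magnitude in BER.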
Institute of Scientific and Technical Information of China (English)
王江安; 赵英俊; 吴荣华; 任席闯
2009-01-01
A theoretical basis for using multibeam transmission and reception in ship laser communication systems is provided by studying the influence of partially coherent beams propagating through strong turbulence on the bit error rate. By analytically solving the equation of laser transmission in an atmospheric turbulence field (ignoring other noise in the system and considering only the bit error rate caused by atmospheric turbulence), the relation between system bit error rate and transmission range is obtained for different turbulence inner scales, transmission laser wavelengths and light-source coherence parameters. The results indicate that, under strong turbulence, once the number of transmitting antennas reaches a certain value, the system bit error rate increases gradually with transmission range but tends to saturation beyond a certain distance; the larger the light-source coherence parameter, the lower the system bit error rate; the larger the turbulence inner scale, the higher the system bit error rate; and variation of the transmission laser wavelength has no obvious influence on the system bit error rate.
Directory of Open Access Journals (Sweden)
Munshi Mahbubur Rahman
2015-02-01
Full Text Available An analytical approach is presented to evaluate the bit error rate (BER) performance of a power line (PL) communication system considering the combined influence of impulsive noise and background PL Gaussian noise. The Middleton Class-A noise model is used to evaluate the effect of impulsive noise. The analysis derives expressions for the signal-to-noise ratio and BER for orthogonal frequency division multiplexing (OFDM) with binary phase shift keying modulation and coherent demodulation of the OFDM sub-channels. The results are evaluated numerically using the multipath transfer-function model of the PL with non-flat power spectral density of the PL background noise over a bandwidth of 0.3-100 MHz. The results are plotted for several system and noise parameters, and the power penalty due to impulsive noise is determined at a BER of 10^-6. The computed results show that the system suffers a significant power penalty because of impulsive noise, which is higher at higher channel bandwidth and can be reduced to some extent by increasing the number of OFDM subcarriers. The analytical results conform well with previously reported simulation results.
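For coherent BPSK, a Middleton Class-A analysis of this kind reduces to a Poisson-weighted mixture of Gaussian BER terms, each term seeing a different noise variance. A compact sketch of that standard expression (impulsive-index and power-ratio values illustrative, not the paper's PL measurements):

```python
import math

def ber_bpsk_middleton_a(ebn0_db, A=0.1, gamma=0.01, n_terms=30):
    """BPSK bit error rate in Middleton Class-A noise.

    A     -- impulsive index (mean impulses per symbol interval)
    gamma -- ratio of Gaussian to impulsive noise power
    Term m of the Poisson mixture sees its noise variance scaled by
    (m/A + gamma) / (1 + gamma).
    """
    ebn0 = 10 ** (ebn0_db / 10)
    ber = 0.0
    for m in range(n_terms):
        p_m = math.exp(-A) * A ** m / math.factorial(m)
        var_scale = (m / A + gamma) / (1 + gamma)
        ber += p_m * 0.5 * math.erfc(math.sqrt(ebn0 / var_scale))
    return ber

# The m >= 1 terms dominate at high Eb/N0, producing the impulsive-noise
# error floor that appears as a power penalty at a target BER of 1e-6.
print(ber_bpsk_middleton_a(10), ber_bpsk_middleton_a(20))
```

Increasing the number of OFDM subcarriers spreads each impulse over more symbols, pushing the noise toward the Gaussian (m = 0) term, which is the mechanism behind the penalty reduction the abstract reports.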
Differentiated Bit Error Rate Estimation for Wireless Networks
Institute of Scientific and Technical Information of China (English)
张招亮; 陈海明; 黄庭培; 崔莉
2014-01-01
In wireless networks, bit error rate (BER) estimation underpins many upper-layer protocols and strongly affects data-transmission performance, and it has become an important research topic. Existing BER-estimation codes, however, ignore the BER distribution observed in real networks and therefore suffer large estimation errors. Based on measurements of the BER distribution in 802.11 wireless networks, this paper proposes differentiated error estimation (DEE), a method that improves BER estimation accuracy through differentiation. Its main idea is to insert multiple levels of error-estimation bits with different estimation capabilities into each packet and to distribute them uniformly at random. The BER is then estimated via the theoretical relationship between BER and the parity-check failure probability. In addition, DEE exploits the non-uniform distribution of BER to optimize the capability of each estimation level, improving accuracy for the most frequently occurring BER values and thereby reducing the mean estimation error. DEE was evaluated on a testbed of seven nodes. Experimental results show that, compared with the recent error estimation code (EEC) scheme, DEE reduces the estimation error by about 44% on average, and by about 68% when the estimation redundancy is low. DEE also exhibits smaller estimation bias than EEC.
Evaluation of bit errors in different types demodulation discrete signals
Directory of Open Access Journals (Sweden)
V. M. Kychak
2015-12-01
Full Text Available Introduction. The introduction describes the main characteristics of bit errors and their sources in discrete channels, and surveys prior work on monitoring bit errors in discrete channels. The main purpose of the article is research and theoretical modeling of processes in discrete channels in order to control error measurement and to forecast the BER parameter as a function of the signal/noise ratio. Theoretical analysis. Several types of digital modulation are compared for effective use in information transmission systems, using the correlation function, the power spectral density and the distance between signals. It is shown that in this way the actual bit error rate (BER) can be controlled for each type of modulation. Important here is the dependence of the BER on the signal/noise ratio in the communication channel under test. Efficiency can be described by the bit error probability at the receiver output, which is determined by expression (4). Control of the BER parameter for different modulations of digital signals. This section gives examples of controlling the BER parameter for signals with different modulations. The simulation results show that, as the signal/noise ratio increases, the bit error probability decreases for all types of demodulation considered.
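The comparison described, bit error probability at the receiver output versus signal/noise ratio for different demodulation types, can be reproduced with the textbook AWGN expressions. A small helper using those standard formulas (not the article's own expression (4)):

```python
import math

def ber_awgn(ebn0_db, scheme="bpsk"):
    """Theoretical bit error probability over AWGN for common schemes."""
    g = 10 ** (ebn0_db / 10)                 # Eb/N0 as a linear ratio
    if scheme in ("bpsk", "qpsk"):           # coherent BPSK/QPSK (same Pb)
        return 0.5 * math.erfc(math.sqrt(g))
    if scheme == "dbpsk":                    # differentially detected BPSK
        return 0.5 * math.exp(-g)
    if scheme == "fsk":                      # coherent orthogonal BFSK
        return 0.5 * math.erfc(math.sqrt(g / 2))
    raise ValueError(scheme)

# Error probability falls with SNR for every scheme, at different rates
for snr_db in (4, 8, 12):
    print(snr_db, ber_awgn(snr_db, "bpsk"), ber_awgn(snr_db, "dbpsk"),
          ber_awgn(snr_db, "fsk"))
```

At a given Eb/N0 the coherent BPSK/QPSK curve lies below differential BPSK, which lies below orthogonal FSK, matching the usual ranking of these demodulation types.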
Differentia-based bit error rate estimation method in wireless sensor networks
Institute of Scientific and Technical Information of China (English)
裴祥喜; 崔炳德; 李珉; 周志敏
2014-01-01
The performance of interference detection in wireless sensor networks depends on the accuracy of bit error rate (BER) estimation; however, existing BER estimation methods are either too complicated to implement or of low precision. To solve this problem, a differentiated error estimation (DEE) method is proposed to enhance the precision of BER estimation. The main idea is for the sender to insert multi-level error-estimation bits with different error-estimation abilities, distributed uniformly at random, into each packet. The receiver then estimates the BER using the theoretical relation between the BER and the parity-check failure probability. Meanwhile, DEE optimizes the error-estimation ability of each level by exploiting the non-uniform distribution of BER, enhancing the estimation precision for the BER values that occur with higher probability and lowering the average estimation error. Experiments show that, compared with the error estimating coding (EEC) method, the average estimation error decreases by about 44%, and by as much as 68% when the redundancy is low.
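Both DEE abstracts rest on the closed-form relation between the bit error probability p and the failure probability of an n-bit parity check, P_fail = (1 − (1 − 2p)^n)/2, which can be inverted to estimate p from observed parity failures. A minimal sketch of that inversion (a single parity level, not the authors' multi-level code):

```python
def parity_fail_prob(p, n):
    """Probability that an n-bit group has an odd number of errors,
    i.e. that its parity check fails, for independent bit errors."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)

def estimate_ber(observed_fail_rate, n):
    """Invert the parity relation to recover the bit error rate."""
    inner = 1.0 - 2.0 * observed_fail_rate
    if inner <= 0.0:   # fail rate >= 0.5: BER indistinguishable from 0.5
        return 0.5
    return 0.5 * (1.0 - inner ** (1.0 / n))

# Round trip: a true BER of 1e-3 over 32-bit parity groups
f = parity_fail_prob(1e-3, 32)
print(f, estimate_ber(f, 32))   # recovers ~1e-3
```

The group size n sets the sensitivity range: small groups resolve high BERs, large groups resolve low BERs, which is why the authors insert multiple levels of estimation bits.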
Ultra low bit-rate speech coding
Ramasubramanian, V
2015-01-01
"Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit-rates of 1 Kbits/sec and less, particularly at the lower ends of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit-rates to be viable and provide a comprehensive overview of various techniques and systems in literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.
Rate Control for MPEG-4 Bit Stream
Institute of Scientific and Technical Information of China (English)
王振洲; 李桂苓
2003-01-01
For a very long time, video processing dealt exclusively with fixed-rate sequences of rectangular images. Interest has recently been moving toward a more flexible concept in which the subject of the processing and encoding operations is a set of visual elements organized in both time and space in a flexible and arbitrarily complex way. The moving picture experts group (MPEG-4) standard supports this concept, and its verification model (VM) encoder has adopted scalable rate control (SRC) as the rate control scheme, which operates in the spatial domain and is compatible with constant bit rate (CBR) and variable bit rate (VBR) coding. In this paper, a new rate control algorithm based on the DCT domain instead of the pixel domain is presented. Moreover, a macroblock-level rate control scheme that computes the quantization step for each macroblock has been adopted. The experimental results show that the new algorithm achieves much better results than the original one in both peak signal-to-noise ratio (PSNR) and coding bits, and that it is more flexible than the test model 5 (TM5) rate control algorithm.
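The macroblock-level step mentioned, deriving each macroblock's quantization step from a virtual-buffer feedback loop, follows the TM5 pattern that such schemes refine. A stripped-down sketch of that feedback loop (constants and bit counts are illustrative, not the paper's algorithm):

```python
def tm5_quant_steps(target_bits, mb_bits_actual, n_mb, reaction=None):
    """TM5-style feedback: virtual-buffer fullness -> quantization step.

    mb_bits_actual[i] is the number of bits actually produced by
    macroblock i; the buffer drains at the average target rate."""
    reaction = reaction or max(1, 2 * target_bits // 31)
    fullness = reaction // 2          # initial virtual-buffer fullness
    steps = []
    for used in mb_bits_actual:
        # quantizer scale 1..31, proportional to buffer fullness
        q = max(1, min(31, (31 * fullness) // reaction))
        steps.append(q)
        fullness += used - target_bits // n_mb
    return steps

# If early macroblocks overspend, later ones get coarser quantization
steps = tm5_quant_steps(target_bits=33000,
                        mb_bits_actual=[2000] * 10 + [500] * 10, n_mb=20)
print(steps)
```

A DCT-domain variant replaces the bits-produced feedback with a rate model driven by the DCT coefficient statistics, letting the quantizer be chosen before actual encoding.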
Reading boundless error-free bits using a single photon
Guha, Saikat; Shapiro, Jeffrey H.
2013-06-01
We address the problem of how efficiently information can be encoded into and read out reliably from a passive reflective surface that encodes classical data by modulating the amplitude and phase of incident light. We show that nature imposes no fundamental upper limit to the number of bits that can be read per expended probe photon and demonstrate the quantum-information-theoretic trade-offs between the photon efficiency (bits per photon) and the encoding efficiency (bits per pixel) of optical reading. We show that with a coherent-state (ideal laser) source, an on-off (amplitude-modulation) pixel encoding, and shot-noise-limited direct detection (an overly optimistic model for commercial CD and DVD drives), the highest photon efficiency achievable in principle is about 0.5 bits read per transmitted photon. We then show that a coherent-state probe can read unlimited bits per photon when the receiver is allowed to make joint (inseparable) measurements on the reflected light from a large block of phase-modulated memory pixels. Finally, we show an example of a spatially entangled nonclassical light probe and a receiver design—constructible using a single-photon source, beam splitters, and single-photon detectors—that can in principle read any number of error-free bits of information. The probe is a single photon prepared in a uniform coherent superposition of multiple orthogonal spatial modes, i.e., a W state. The code and joint-detection receiver complexity required by a coherent-state transmitter to achieve comparable photon efficiency performance is shown to be much higher in comparison to that required by the W-state transceiver, although this advantage rapidly disappears with increasing loss in the system.
Low bit rate near-transparent image coding
Zhu, Bin; Tewfik, Ahmed H.; Gerek, Oemer N.
1995-04-01
In this paper, we describe an improved version of our previous approach for low bit rate near-perceptually transparent image compression. The method exploits both frequency and spatial domain visual masking effects and uses a combination of Fourier and wavelet representations to encode different bands. The frequency domain masking model is based on the psychophysical masking experimental data of sinusoidal patterns while the spatial domain masking is computed with a modified version of Girod's model. A discrete cosine transform is used in conjunction with frequency domain masking to encode the low frequency subimages. The medium and high frequency subimages are encoded in the wavelet domain with spatial domain masking. The main improvement over our previous technique is that a better model is used to calculate the tolerable error level for the subimages in the wavelet domain, and a boundary control is used to prevent or reduce the ringing noise in the decoded image. This greatly improves the decoded image quality for the same coding bit rates. Experiments show the approach can achieve very high quality to nearly transparent compression at bit rates of 0.2 to 0.4 bits/pixel for the image Lena.
Gurkin, N. V.; Konyshev, V. A.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.
2015-01-01
We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit/s DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the admissible input signal power range for implementation of communication lines with lengths from 30-50 km up to a maximum length of 250 km.
Research and implementation of the burst-mode optical signal bit-error test
Huang, Qiu-yuan; Ma, Chao; Shi, Wei; Chen, Wei
2009-08-01
On the basis of the characteristics of the TDMA uplink optical signal in a PON system, this article puts forward an FPGA-based method for high-speed burst-mode optical bit-error-rate testing. The article proposes a new method of generating burst signal patterns, including user-defined and pseudo-random patterns; realizes slip synchronization and self-synchronization of error detection using a data-decomposition technique together with traditional code synchronization technology; completes high-speed burst-signal clock synchronization using rapid phase-locked-loop delay synchronization in the external circuit; and thereby accomplishes bit-error-rate testing of high-speed burst optical signals.
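The pseudo-random pattern side of such a BER tester is typically a short LFSR sequence. A minimal PRBS-7 generator (polynomial x⁷ + x⁶ + 1, a common choice in BER test equipment; the paper does not specify its polynomial):

```python
def prbs7(n_bits, seed=0x7F):
    """PRBS-7 pattern generator (x^7 + x^6 + 1), Fibonacci LFSR form.
    Any nonzero seed yields the same maximal-length sequence, period 127."""
    state = seed & 0x7F
    out = []
    for _ in range(n_bits):
        new = ((state >> 6) ^ (state >> 5)) & 1   # taps at stages 7 and 6
        state = ((state << 1) | new) & 0x7F
        out.append(new)
    return out

seq = prbs7(254)
print(seq[:16])
```

The receiver runs the same LFSR and, once synchronized (the "slip synchronization" step), counts mismatches between the regenerated and received bits to obtain the BER.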
Circuit and interconnect design for high bit-rate applications
Veenstra, H.
2006-01-01
This thesis presents circuit and interconnect design techniques and design flows that address the most difficult and ill-defined aspects of the design of ICs for high bit-rate applications: bottlenecks in interconnect design, circuit design and on-chip signal distribution for high bit-rate applications.
Institute of Scientific and Technical Information of China (English)
李菲; 吴毅; 侯再红
2012-01-01
The performance of a free-space optical communication (FSO) system fluctuates greatly under the influence of atmospheric turbulence. Evaluating system error performance from system and atmospheric parameters is therefore of practical interest. Based on an optical turbulence channel model and a photoelectric detection model, a mathematical simulation model of the error performance of an FSO system is established, and an expression for the bit error rate of an FSO system in turbulent atmosphere is proposed. Simulation results are compared with experimental data obtained under weak turbulence, and the model is used to characterize factors such as intensity fluctuation and background noise. The simulation results are consistent with the experimental data; intensity fluctuation is the chief cause of system performance fluctuation, and the optimal decision threshold must be adjusted according to the actual atmospheric conditions. The presented model enables efficient performance evaluation of FSO systems under turbulence and provides a reference for related theoretical research.
Multi-bit soft error tolerable L1 data cache based on characteristic of data value
Institute of Scientific and Technical Information of China (English)
WANG Dang-hui; LIU He-peng; CHEN Yi-ran
2015-01-01
Due to continuously decreasing feature sizes and increasing device density, on-chip caches have become susceptible to single-event upsets, which result in multi-bit soft errors. The increasing rate of multi-bit errors brings a high risk of data corruption and even application crashes. Traditionally, L1 D-caches have been protected from soft errors using simple parity to detect errors and recovering by reading correct data from the L2 cache, which induces a performance penalty. This work proposes to exploit redundancy based on the characteristics of data values. For a small data value, the replica is stored in the upper half of the word; the replica of a big data value is stored in a dedicated cache line, which sacrifices some capacity of the data cache. Experimental results show that the reliability of the L1 D-cache is improved by 65% at the cost of 1% in performance.
Institute of Scientific and Technical Information of China (English)
赵乐; 宋爱民; 刘剑; 薛斌; 郭兴阳
2015-01-01
Since traditional satellite communication bands (C, Ku, Ka) are gradually becoming saturated, the higher W band is studied. The W band offers wider bandwidth, supports higher data rates, and, under the same conditions, allows smaller antennas and larger antenna gain. Orthogonal frequency division multiplexing (OFDM) has high spectral efficiency, making it suitable for high-data-rate transmission, so it has good application prospects in the W band. Referring to the link budget of the "IKNOW" project and using the Rapp model of a solid-state power amplifier, the bit error rate (BER) of OFDM in W-band satellite communication is simulated and analyzed. The simulation results show that, when power-amplifier nonlinearity is taken into account, there is an optimal input power back-off point that minimizes the BER of the OFDM system, and the higher the transmission power, the lower the BER at the optimal back-off point.
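The Rapp solid-state amplifier model referenced here has a simple closed form for its AM/AM curve. A sketch showing how input back-off trades clipping distortion against drive level for an OFDM-like envelope (parameters illustrative, not the "IKNOW" link-budget values):

```python
import numpy as np

def rapp_am_am(v_in, v_sat=1.0, p=2.0):
    """Rapp AM/AM curve: linear at low drive, smooth saturation at v_sat;
    p sets the knee sharpness (p -> infinity approaches a hard clipper)."""
    return v_in / (1.0 + (np.abs(v_in) / v_sat) ** (2 * p)) ** (1.0 / (2 * p))

def envelope_distortion(backoff_db, n=1024, seed=0):
    """RMS distortion of an OFDM-like envelope after the Rapp PA,
    relative to the ideal linear output, at a given input back-off."""
    rng = np.random.default_rng(seed)
    symbols = (rng.choice([-1.0, 1.0], n)
               + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
    env = np.abs(np.fft.ifft(symbols)) * np.sqrt(n)   # unit average power
    scale = 10 ** (-backoff_db / 20)
    out = rapp_am_am(env * scale)
    return float(np.linalg.norm(out - env * scale)
                 / np.linalg.norm(env * scale))

# More back-off -> less clipping of OFDM peaks -> less distortion, but
# also less transmitted power: hence the optimal back-off point.
for bo in (0, 3, 6):
    print(bo, envelope_distortion(bo))
```

The BER optimum arises because distortion noise falls with back-off while the received SNR falls with it too; the crossover is the "best input power back-off point" in the abstract.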
Hao, Chen; Liyuan, Liu; Dongmei, Li; Chun, Zhang; Zhihua, Wang
2010-10-01
A 12-bit intrinsic accuracy digital-to-analog converter integrated into standard digital 0.18 μm CMOS technology is proposed. It is based on a current steering segmented 6+6 architecture and requires no calibration. By dividing one most significant bit unary source into 16 elements located in 16 separated regions of the array, the linear gradient errors and quadratic errors can be averaged and eliminated effectively. A novel static performance testing method is proposed. The measured differential nonlinearity and integral nonlinearity are 0.42 and 0.39 least significant bit, respectively. For 12-bit resolution, the converter reaches an update rate of 100 MS/s. The chip operates from a single 1.8 V voltage supply, and the core die area is 0.28 mm2.
Payment Error Rate Measurement (PERM)
U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...
Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor
2005-01-01
We determine the number of minimum weight words in a class of Euclidean geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.
Low-bit-rate subband image coding with matching pursuits
Rabiee, Hamid; Safavian, S. R.; Gardos, Thomas R.; Mirani, A. J.
1998-01-01
In this paper, a novel multiresolution algorithm for low bit-rate image compression is presented. High quality low bit-rate image compression is achieved by first decomposing the image into approximation and detail subimages with a shift-orthogonal multiresolution analysis. Then, at the coarsest resolution level, the coefficients of the transformation are encoded by an orthogonal matching pursuit algorithm with a wavelet packet dictionary. Our dictionary consists of convolutional splines of up to order two for the detail and approximation subbands. The intercorrelation between the various resolutions is then exploited by using the same bases from the dictionary to encode the coefficients of the finer resolution bands at the corresponding spatial locations. To further exploit the spatial correlation of the coefficients, the embedded zerotree wavelet (EZW) algorithm is used to identify the potential zero trees. The coefficients of the representation are then quantized and arithmetic encoded at each resolution, and packed into a scalable bit stream structure. Our new algorithm is highly bit-rate scalable, and performs better than the segmentation-based matching pursuit and EZW encoders at lower bit rates, based on subjective image quality and peak signal-to-noise ratio.
Comodulation masking release in bit-rate reduction systems
DEFF Research Database (Denmark)
Vestergaard, Martin David; Rasmussen, Karsten Bo; Poulsen, Torben
1999-01-01
It has been suggested that the level dependence of the upper masking slope be utilized in perceptual models in bit-rate reduction systems. However, comodulation masking release (CMR) phenomena lead to a reduction of the masking effect when a masker and a probe signal are amplitude modulated with the same frequency. In bit-rate reduction systems the masker would be the audio signal and the probe signal would represent the quantization noise. Masking curves have been determined for sinusoids and 1-Bark-wide noise maskers in order to investigate the risk of CMR when quantizing depths are fixed. A CMR of up to 10 dB was obtained at a distance of 6 Bark above the masker. The amount of CMR was found to depend on the presentation level of the masker; a higher masker level leads to a higher CMR effect. Hence, the risk of CMR affecting the subjective performance of bit-rate reduction systems cannot be ruled out.
CRC Look-up Table Optimization for Single-Bit Error Correction
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Many communication systems use the cyclic redundancy code (CRC) technique for protecting key data fields from transmission errors by enabling both single-bit error correction and multi-bit error detection. The look-up table design is very important for the error-correction implementation. This paper presents a CRC look-up table optimization method for single-bit error correction. The optimization method minimizes the address length of the pre-designed look-up table while satisfying certain restrictions. The circuit implementation is also presented to show the feasibility of the method in the application specific integrated circuit design. An application of the optimization method in the generic framing procedure protocol is implemented using field programmable gate arrays. The result shows that the memory address length has been minimized, while keeping a very simple circuit implementation.
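A minimal sketch of the look-up-table idea described above, under illustrative assumptions (a CRC-8 polynomial, 0x07, and a 16-bit data field; the paper's optimization of the table address length is not reproduced). Because the CRC is linear over GF(2), the syndrome of a received codeword with a single-bit error depends only on the error position, so a table from syndrome to position suffices to correct it:

```python
POLY = (1, 0, 0, 0, 0, 0, 1, 1, 1)  # x^8 + x^2 + x + 1 (CRC-8), MSB first
R = len(POLY) - 1                    # number of CRC bits
MSG_LEN = 16                         # illustrative data-field length

def crc_remainder(bits):
    """CRC of a bit sequence: remainder of long division over GF(2)."""
    work = list(bits) + [0] * R
    for i in range(len(bits)):
        if work[i]:
            for j, p in enumerate(POLY):
                work[i + j] ^= p
    return tuple(work[-R:])

def syndrome(codeword):
    """Recompute the CRC of the data field, XOR with the received CRC."""
    data, recv_crc = codeword[:MSG_LEN], codeword[MSG_LEN:]
    return tuple(a ^ b for a, b in zip(crc_remainder(data), recv_crc))

# Look-up table: syndrome of each single-bit error pattern -> bit position
TABLE = {}
for pos in range(MSG_LEN + R):
    e = [0] * (MSG_LEN + R)
    e[pos] = 1
    TABLE[syndrome(e)] = pos

def correct_single_bit(codeword):
    """Flip the erroneous bit if the syndrome matches a single-bit pattern;
    an unknown nonzero syndrome indicates a multi-bit error (detect only)."""
    s = syndrome(codeword)
    fixed = list(codeword)
    if s != (0,) * R and s in TABLE:
        fixed[TABLE[s]] ^= 1
    return fixed
```

In hardware the same mapping is a small ROM indexed by the syndrome, which is why minimizing the table address length matters.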
Techniques of Very Low Bit-Rate Speech Coding
Institute of Scientific and Technical Information of China (English)
CUI Huijuan; TANG Kun; ZHAO Ming; ZHANG Xin
2004-01-01
Techniques of very low bit-rate speech coding, at rates lower than 800 bps, are presented in this paper. The techniques of multi-frame, multi-sub-band, multi-model, and vector quantization are effective in decreasing the bit rate of vocoders based on a linear prediction model. These techniques give the vocoder not only high reconstructed-speech quality but also robustness. Vocoders applying these techniques can synthesize clear and intelligible speech with some naturalness. The mean DRT (Diagnostic Rhyme Test) score is 89.2% for an 800 bps vocoder and 86.3% for a 600 bps vocoder.
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
Comprehensive Error Rate Testing (CERT)
U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...
Telemetry System Parameters and Bit Error Performance of NRZ and DM PCM/FM
1976-03-29
(Figure: test setup with a Model 400 four-pole linear-phase filter, pseudo-random NRZ source, and EMR 721 telemetry receiver recovering the PCM sequence.) ... with variable bit rate. The NRZ and DM signals were pseudo-random sequences of 2,047 bits. The premodulation filter was a four-pole, linear-phase filter with adjustable bandwidth. RF transmitter deviation and RF attenuation were adjustable on the FM signal generator, which was operated at
Biometric Quantization through Detection Rate Optimized Bit Allocation
Chen, C.; Veldhuis, R. N. J.; Kevenaar, T. A. M.; Akkermans, A. H. M.
2009-12-01
Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has been focusing on the design of optimal quantization and coding for each single feature component, yet the binary string—concatenation of all coded feature components—is not optimal. In this paper, we present a detection rate optimized bit allocation (DROBA) principle, which assigns more bits to discriminative features and fewer bits to nondiscriminative features. We further propose a dynamic programming (DP) approach and a greedy search (GS) approach to achieve DROBA. Experiments of DROBA on the FVC2000 fingerprint database and the FRGC face database show good performances. As a universal method, DROBA is applicable to arbitrary biometric modalities, such as fingerprint texture, iris, signature, and face. DROBA will bring significant benefits not only to the template protection systems but also to the systems with fast matching requirements or constrained storage capability.
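The greedy search (GS) variant described above can be sketched as follows. This is an illustration under stated assumptions, not the paper's implementation: the overall detection rate is modeled as a product over independent features, and the per-feature detection-rate tables are hypothetical.

```python
def greedy_bit_allocation(det_rate, total_bits):
    """Greedy search for a DROBA-style bit allocation.

    det_rate[i][b] is the (estimated) detection rate of feature i when
    quantized with b bits; the overall detection rate is modeled as the
    product over features. Each step gives one more bit to the feature
    whose detection rate degrades least (largest ratio)."""
    n = len(det_rate)
    alloc = [0] * n
    for _ in range(total_bits):
        best_i, best_ratio = None, -1.0
        for i in range(n):
            b = alloc[i]
            if b + 1 < len(det_rate[i]):
                ratio = det_rate[i][b + 1] / det_rate[i][b]
                if ratio > best_ratio:
                    best_i, best_ratio = i, ratio
        if best_i is None:      # every feature is at its maximum depth
            break
        alloc[best_i] += 1
    return alloc

# hypothetical detection rates: feature 0 is discriminative (its rate
# decays slowly as bits are added), feature 1 is not
rates = [
    [1.0, 0.95, 0.90, 0.80],   # feature 0
    [1.0, 0.60, 0.30, 0.10],   # feature 1
]
alloc = greedy_bit_allocation(rates, 3)
```

As expected under this model, all three bits go to the discriminative feature, which is exactly the behavior DROBA formalizes.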
Detecting bit-flip errors in a logical qubit using stabilizer measurements.
Ristè, D; Poletto, S; Huang, M-Z; Bruno, A; Vesterinen, V; Saira, O-P; DiCarlo, L
2015-04-29
Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements.
Optical Switching and Bit Rates of 40 Gbit/s and above
DEFF Research Database (Denmark)
Ackaert, A.; Demester, P.; O'Mahony, M.;
2003-01-01
Optical switching in WDM networks introduces additional aspects to the choice of single channel bit rates compared to WDM transmission systems. The mutual impact of optical switching and bit rates of 40 Gbps and above is discussed.
Efficient rate control scheme for low bit rate H.264/AVC video coding
Institute of Scientific and Technical Information of China (English)
LI Zhi-cheng; ZHANG Yong-jun; LIU Tao; GU Wan-yi
2009-01-01
This article presents an efficient rate control scheme for H.264/AVC video coding in low bit rate environment. In the proposed scheme, an improved rate-distortion (RD) model by both analytical and empirical approaches is developed. It involves an enhanced mean absolute difference estimating method and a more rate-robust distortion model. Based on this RD model, an efficient macroblock-layer rate control scheme for H.264/AVC video coding is proposed. Experimental results show that this model encodes video sequences with higher peak signal-to-noise ratio gains and generates bit stream closer to the target rate.
Noise and measurement errors in a practical two-state quantum bit commitment protocol
Loura, Ricardo; Almeida, Álvaro J.; André, Paulo S.; Pinto, Armando N.; Mateus, Paulo; Paunković, Nikola
2014-05-01
We present a two-state practical quantum bit commitment protocol, the security of which is based on the current technological limitations, namely the nonexistence of either stable long-term quantum memories or nondemolition measurements. For an optical realization of the protocol, we model the errors, which occur due to the noise and equipment (source, fibers, and detectors) imperfections, accumulated during emission, transmission, and measurement of photons. The optical part is modeled as a combination of a depolarizing channel (white noise), unitary evolution (e.g., systematic rotation of the polarization axis of photons), and two other basis-dependent channels, namely the phase- and bit-flip channels. We analyze quantitatively the effects of noise using two common information-theoretic measures of probability distribution distinguishability: the fidelity and the relative entropy. In particular, we discuss the optimal cheating strategy and show that it is always advantageous for a cheating agent to add some amount of white noise—the particular effect not being present in standard quantum security protocols. We also analyze the protocol's security when the use of (im)perfect nondemolition measurements and noisy or bounded quantum memories is allowed. Finally, we discuss errors occurring due to a finite detector efficiency, dark counts, and imperfect single-photon sources, and we show that the effects are the same as those of standard quantum cryptography.
Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.
2015-04-01
Current approaches for building quantum computing devices focus on two-level quantum systems which nicely mimic the concept of a classical bit, albeit enhanced with additional quantum properties. However, rather than artificially limiting the number of states to two, the use of d -level quantum systems (qudits) could provide advantages for quantum information processing. Among other merits, it has recently been shown that multilevel quantum systems can offer increased stability to external disturbances. In this study we demonstrate that topological quantum memories built from qudits, also known as Abelian quantum double models, exhibit a substantially increased resilience to noise. That is, even when taking into account the multitude of errors possible for multilevel quantum systems, topological quantum error-correction codes employing qudits can sustain a larger error rate than their two-level counterparts. In particular, we find strong numerical evidence that the thresholds of these error-correction codes are given by the hashing bound. Considering the significantly increased error thresholds attained, this might well outweigh the added complexity of engineering and controlling higher-dimensional quantum systems.
Rate Distortion Analysis and Bit Allocation Scheme for Wavelet Lifting-Based Multiview Image Coding
Lasang, Pongsak; Kumwilaisak, Wuttipong
2009-12-01
This paper studies the distortion and the model-based bit allocation scheme of wavelet lifting-based multiview image coding. Redundancies among image views are removed by disparity-compensated wavelet lifting (DCWL). The distortion prediction of the low-pass and high-pass subbands of each image view from the DCWL process is analyzed. The derived distortion is used with different rate distortion models in the bit allocation of multiview images. Rate distortion models including power model, exponential model, and the proposed combining the power and exponential models are studied. The proposed rate distortion model exploits the accuracy of both power and exponential models in a wide range of target bit rates. Then, low-pass and high-pass subbands are compressed by SPIHT (Set Partitioning in Hierarchical Trees) with a bit allocation solution. We verify the derived distortion and the bit allocation with several sets of multiview images. The results show that the bit allocation solution based on the derived distortion and our bit allocation scheme provide closer results to those of the exhaustive search method in both allocated bits and peak-signal-to-noise ratio (PSNR). It also outperforms the uniform bit allocation and uniform bit allocation with normalized energy in the order of 1.7-2 and 0.3-1.4 dB, respectively.
Framed bit error rate testing for 100G ethernet equipment
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert;
2010-01-01
Internet users' behavioural patterns are migrating towards bandwidth-intensive applications, which require a corresponding capacity extension. The emerging 100 Gigabit Ethernet (GE) technology is a promising candidate for providing a ten-fold increase of today's available Internet transmission
Bit-Error Rate Monitor for Laser Communications
2010-03-08
(Figure 14: Schematics of the basic optical link system implemented in FY#1, including an EDFA, beam expander, and DC bias electronics.) ... Erbium-Doped Fiber Amplifier (EDFA), for amplification using standard Single Mode Fibers (SMF). The most important components purchased for this system are listed below ... Erbium-Doped Fiber Amplifier (EDFA) from IPG Photonics (Model EAR-5K-C). The amplifier operates with a minimum input power of -5 dBm, which was suitable for the low power output
High bit rate germanium single photon detectors for 1310nm
Seamons, J. A.; Carroll, M. S.
2008-04-01
There is increasing interest in the development of high speed, low noise and readily fieldable near infrared (NIR) single photon detectors. InGaAs/InP avalanche photodiodes (APD) operated in Geiger mode (GM) are a leading choice for NIR due to their preeminence in optical networking. After-pulsing is, however, a primary challenge to operating InGaAs/InP single photon detectors at high frequencies. After-pulsing is the effect of charge being released from traps that trigger false ("dark") counts. To overcome this problem, hold-off times between detection windows are used to allow the traps to discharge and suppress after-pulsing. The hold-off time represents, however, an upper limit on detection frequency, with degradation beginning at frequencies of ~100 kHz in InGaAs/InP. Alternatively, germanium (Ge) single photon avalanche photodiodes (SPAD) have been reported to have more than an order of magnitude smaller charge trap densities than InGaAs/InP SPADs, which allowed them to be successfully operated with passive quenching (i.e., no gated hold-off times necessary), which is not possible with InGaAs/InP SPADs, indicating a much weaker dark count dependence on hold-off time consistent with fewer charge traps. Despite these encouraging results suggesting a possible higher operating frequency limit for Ge SPADs, little has been reported on Ge SPAD performance at high frequencies, presumably because previous work with Ge SPADs has been discouraged by a strong demand to work at 1550 nm. NIR SPADs require cooling, which in the case of Ge SPADs dramatically reduces the quantum efficiency at 1550 nm. Recently, however, advantages to working at 1310 nm have been suggested, which, combined with a need to increase quantum bit rates for quantum key distribution (QKD), motivates examination of Ge detector performance at very high detection rates where InGaAs/InP does not perform as well. Presented in this paper are measurements of a commercially available Ge APD
Up to 20 Gbit/s bit-rate transparent integrated interferometric wavelength converter
DEFF Research Database (Denmark)
Jørgensen, Carsten; Danielsen, Søren Lykke; Hansen, Peter Bukhave;
1996-01-01
We present a compact and optimised multiquantum-well based, integrated all-active Michelson interferometer for 20 Gbit/s optical wavelength conversion. Bit-rate transparent operation is demonstrated with a conversion penalty well below 0.5 dB at bit-rates ranging from 622 Mbit/s to 20 Gbit/s.
Application of time-hopping UWB range-bit rate performance in the UWB sensor networks
Nascimento, J.R.V. do; Nikookar, H.
2008-01-01
In this paper, the achievable range-bit rate performance is evaluated for Time-Hopping (TH) UWB networks complying with the FCC outdoor emission limits in the presence of Multiple Access Interference (MAI). Application of TH-UWB range-bit rate performance is presented for UWB sensor networks. Result
Enhanced bit rate-distance product impulse radio ultra-wideband over fiber link
DEFF Research Database (Denmark)
Rodes Lopez, Roberto; Jensen, Jesper Bevensee; Caballero Jambrina, Antonio
2010-01-01
We report on a record distance and bit rate-wireless impulse radio (IR) ultra-wideband (UWB) link with combined transmission over a 20 km long fiber link. We are able to improve the compliance with the regulated frequency emission mask and achieve bit rate-distance products as high as 16 Gbit/s·m....
Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling
Directory of Open Access Journals (Sweden)
Ertürk Sarp
2007-01-01
This paper presents warped discrete cosine transform (WDCT)-based low bit-rate block coding using image downsampling. While the WDCT aims to improve the performance of the conventional DCT by frequency warping, it has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding, followed by upsampling after the decoding process, has been proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that superior performance can be achieved if the WDCT is used in conjunction with image downsampling-based block coding for low bit-rate applications.
Directory of Open Access Journals (Sweden)
Madeiro Francisco
2010-01-01
This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative density function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (R-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
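For the Rayleigh special case (Nakagami-m with m = 1), a closed-form BEP for BPSK is well known and can be checked against Monte Carlo simulation. This sketch only illustrates the kind of closed-form-versus-simulation comparison such papers make; it is not the paper's general Nakagami-m derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

def bpsk_ber_rayleigh_mc(snr_db, n=200_000):
    """Monte Carlo BER of coherent BPSK over Rayleigh fading
    (Nakagami-m with m = 1), average SNR given in dB."""
    snr = 10 ** (snr_db / 10)
    h = np.sqrt(rng.exponential(1.0, n))       # Rayleigh |h|, E[|h|^2] = 1
    noise = rng.normal(0.0, np.sqrt(1 / (2 * snr)), n)
    rx = h * 1.0 + noise                       # transmit symbol +1
    return np.mean(rx < 0)                     # decision error rate

def bpsk_ber_rayleigh_exact(snr_db):
    """Closed form: Pb = (1 - sqrt(g / (1 + g))) / 2 for average SNR g."""
    g = 10 ** (snr_db / 10)
    return 0.5 * (1 - np.sqrt(g / (1 + g)))

ber_sim = bpsk_ber_rayleigh_mc(10.0)
ber_exact = bpsk_ber_rayleigh_exact(10.0)
```

At 10 dB average SNR the two agree to within Monte Carlo noise, which is the kind of validation a closed-form BEP expression enables at far lower computational cost than simulation.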
Bit Rate Reduction of FS-1015 Speech Coder Using Fuzzy ARTMAP and KSOFM Neural Networks
Directory of Open Access Journals (Sweden)
Ali Eslamzadeh
2009-03-01
The speech spectrum is very sensitive to linear predictive coding (LPC) parameters, so small quantization errors may cause an unstable synthesis filter. Line spectral pairs (LSPs) are a more efficient representation than LPC parameters. On the other hand, artificial neural networks (ANNs) have been used successfully to improve the quality and reduce the computational complexity of speech coders. This work proposes an efficient technique to reduce the bit rate of the FS-1015 speech coder while improving its performance. LSP parameters are used instead of the LPC parameters. In addition, neural vector quantizers based on a Kohonen self-organizing feature map (KSOFM), with a modified supervised training algorithm, and on fuzzy ARTMAP are employed to reduce the bit rate. Using these neural vector quantizer models, the quality of the synthesized speech, in terms of mean opinion score (MOS), is improved by 0.13 and 0.26, respectively. The execution time of the proposed models, compared to the FS-1015 standard, is also reduced by 27% and 43%, respectively.
An op-amp gain error-compensated SC cyclic D/A converter converting from least significant bit
加藤, 卓; 松本, 寛樹
2011-01-01
In this paper, we propose a switched-capacitor (SC) cyclic digital-to-analog converter (DAC) that compensates for the gain error and offset voltage of the operational amplifier (op-amp). The DAC converts from the least significant bit (LSB). Even when the op-amp gain is poor, the compensated circuit can maintain resolution. Circuit operation is evaluated in SIMetrix. An error analysis is presented that shows an accuracy greater than 8 bits when the amplifier gain is 60 dB.
Towards the generation of random bits at terahertz rates based on a chaotic semiconductor laser
Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael
2010-06-01
Random bit generators (RBGs) are important in many aspects of statistical physics and crucial in Monte-Carlo simulations, stochastic modeling and quantum cryptography. The quality of a RBG is measured by the unpredictability of the bit string it produces and the speed at which the truly random bits can be generated. Deterministic algorithms generate pseudo-random numbers at high data rates as they are only limited by electronic hardware speed, but their unpredictability is limited by the very nature of their deterministic origin. It is widely accepted that the core of any true RBG must be an intrinsically non-deterministic physical process, e.g. measuring thermal noise from a resistor. Owing to low signal levels, such systems are highly susceptible to bias, introduced by amplification, and to small nonrandom external perturbations resulting in a limited generation rate, typically less than 100 Mbit/s. We present a physical random bit generator, based on a chaotic semiconductor laser, having delayed optical feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.
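The extraction step described above (take a high-order derivative of the digitized intensity, keep only the least significant bits) can be sketched as follows. The logistic-map source, 8-bit quantization, and the choices `order = 2`, `n_lsb = 3` are illustrative stand-ins for the chaotic laser and its parameters:

```python
import numpy as np

def lsb_random_bits(samples, n_lsb=3, order=2):
    """Extract bits from a digitized chaotic waveform by taking a
    high-order discrete derivative and keeping the n_lsb lowest bits
    of each derivative value (LSB-first in the output stream)."""
    x = np.asarray(samples, dtype=np.int64)
    d = np.diff(x, n=order)                  # discrete high-order derivative
    d = d & ((1 << n_lsb) - 1)               # keep only the low bits
    bits = ((d[:, None] >> np.arange(n_lsb)) & 1).ravel()
    return bits

# demo with a simulated chaotic source (logistic map, 8-bit quantized)
x, samples = 0.4, []
for _ in range(10000):
    x = 3.99 * x * (1 - x)
    samples.append(int(x * 255))
bits = lsb_random_bits(samples, n_lsb=3, order=2)
```

Differentiation whitens the slow, deterministic structure of the waveform, and discarding the high-order bits removes the remaining amplitude bias, which is why the scheme tolerates wide variation in source parameters.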
Timing-Error Detection Design Considerations in Subthreshold: An 8-bit Microprocessor in 65 nm CMOS
Directory of Open Access Journals (Sweden)
Lauri Koskinen
2012-06-01
This paper presents the first known timing-error detection (TED) microprocessor able to operate in subthreshold. Since the minimum energy point (MEP) of static CMOS logic is in subthreshold, there is a strong motivation to design ultra-low-power systems that can operate in this region. However, exponential dependencies in subthreshold, require systems with either excessively large safety margins or that utilize adaptive techniques. Typically, these techniques include replica paths, sensors, or TED. Each of these methods adds system complexity, area, and energy overhead. As a run-time technique, TED is the only method that accounts for both local and global variations. The microprocessor presented in this paper utilizes adaptable error-detection sequential (EDS) circuits that can adjust to process and environmental variations. The results demonstrate the feasibility of the microprocessor, as well as energy savings up to 28%, when using the TED method in subthreshold. The microprocessor is an 8-bit core, which is compatible with a commercial microcontroller. The microprocessor is fabricated in 65 nm CMOS, uses as low as 4.35 pJ/instruction, occupies an area of 50,000 μm^{2}, and operates down to 300 mV.
Room temperature single-photon detectors for high bit rate quantum key distribution
Energy Technology Data Exchange (ETDEWEB)
Comandar, L. C.; Patel, K. A. [Toshiba Research Europe Ltd., 208 Cambridge Science Park, Milton Road, Cambridge CB4 0GZ (United Kingdom); Engineering Department, Cambridge University, 9 J J Thomson Ave., Cambridge CB3 0FA (United Kingdom); Fröhlich, B., E-mail: bernd.frohlich@crl.toshiba.co.uk; Lucamarini, M.; Sharpe, A. W.; Dynes, J. F.; Yuan, Z. L.; Shields, A. J. [Toshiba Research Europe Ltd., 208 Cambridge Science Park, Milton Road, Cambridge CB4 0GZ (United Kingdom); Penty, R. V. [Engineering Department, Cambridge University, 9 J J Thomson Ave., Cambridge CB3 0FA (United Kingdom)
2014-01-13
We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.
Theoretical Study of Quantum Bit Rate in Free-Space Quantum Cryptography
Institute of Scientific and Technical Information of China (English)
MA Jing; ZHANG Guang-Yu; TAN Li-Ying
2006-01-01
The quantum bit rate is an important operating parameter in free-space quantum key distribution. We introduce the measuring factor and the sifting factor, and present the expressions of the quantum bit rate based on the ideal single-photon sources and the single-photon sources with Poisson distribution. The quantum bit rate is studied in the numerical simulation for the laser links between a ground station and a satellite in a low earth orbit. The results show that it is feasible to implement quantum key distribution between a ground station and a satellite in a low earth orbit.
Entropy rates of low-significance bits sampled from chaotic physical systems
Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N.
2016-10-01
We examine the entropy of low-significance bits in analog-to-digital measurements of chaotic dynamical systems. We find the partition of measurement space corresponding to low-significance bits has a corrugated structure. Using simulated measurements of a map and experimental data from a circuit, we identify two consequences of this corrugated partition. First, entropy rates for sequences of low-significance bits more closely approach the metric entropy of the chaotic system, because the corrugated partition better approximates a generating partition. Second, accurate estimation of the entropy rate using low-significance bits requires long block lengths as the corrugated partition introduces more long-term correlation, and using only short block lengths overestimates the entropy rate. This second phenomenon may explain recent reports of experimental systems producing binary sequences that pass statistical tests of randomness at rates that may be significantly beyond the metric entropy rate of the physical source.
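A naive plug-in estimate of the entropy rate from non-overlapping blocks illustrates the short-block overestimation noted above: for a perfectly alternating bit sequence (true entropy rate 0), block length 1 reports 1 bit/symbol, while block length 2 already reveals the deterministic structure. A minimal sketch, not the authors' estimator:

```python
from collections import Counter
from math import log2

def entropy_rate_estimate(bits, block_len):
    """Plug-in entropy-rate estimate (bits/symbol) from the empirical
    distribution of non-overlapping blocks of length block_len."""
    blocks = [tuple(bits[i:i + block_len])
              for i in range(0, len(bits) - block_len + 1, block_len)]
    total = len(blocks)
    counts = Counter(blocks)
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return h / block_len       # normalize block entropy to per-symbol rate

alternating = [0, 1] * 100
short_block = entropy_rate_estimate(alternating, 1)   # overestimates
long_block = entropy_rate_estimate(alternating, 2)    # captures correlation
```

For sequences with long-range correlation, such as the corrugated-partition bit streams discussed above, the estimate only converges from above as the block length grows, which is the effect the authors report.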
An Experimentally Validated SOA Model for High-Bit Rate System Applications
Institute of Scientific and Technical Information of China (English)
Hasan I. Saleheen
2003-01-01
A comprehensive model of the semiconductor optical amplifier (SOA) with experimental validation results is presented. The model accounts for the various physical behaviors of the device that must be captured for high-bit-rate system applications.
Context-Adaptive Arithmetic Coding Scheme for Lossless Bit Rate Reduction of MPEG Surround in USAC
Yoon, Sungyong; Pang, Hee-Suk; Sung, Koeng-Mo
We propose a new coding scheme for lossless bit rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of spatial parameters. Experiments show that it achieves a significant lossless bit reduction of 9.93% to 12.14% for the spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared with the original scheme. The proposed scheme, which is not currently included in USAC, can improve the coding efficiency of MPEG Surround in USAC, and the saved bits can be utilized by the other modules in USAC.
Re-use of Low Bandwidth Equipment for High Bit Rate Transmission Using Signal Slicing Technique
DEFF Research Database (Denmark)
Wagner, Christoph; Spolitis, S.; Vegas Olmos, Juan José;
Massive fiber-to-the-home network deployment requires never-ending equipment upgrades to operate at higher bandwidths. We show an effective signal slicing method that can reuse low-bandwidth opto-electronic components for optical communications at higher bit rates.
A novel dynamic frame rate control algorithm for H.264 low-bit-rate video coding
Institute of Scientific and Technical Information of China (English)
Yang Jing; Fang Xiangzhong
2007-01-01
The goal of this paper is to improve human visual perceptual quality as well as the coding efficiency of H.264 video under low-bit-rate conditions by adaptively adjusting the number of skipped frames. The frames to encode are selected according to the motion activity of each frame and the motion accumulation of successive frames. The motion activity analysis is based on the statistics of motion vectors and takes the characteristics of the H.264 coding standard into consideration. A prediction model of motion accumulation is proposed to reduce the complex computation of motion estimation. The dynamic encoding frame rate control algorithm is applied at both the frame level and the GOB (Group of Macroblocks) level. Simulations compare the performance of JM76 with the proposed frame-level and GOB-level schemes.
Achieving high bit rate logical stochastic resonance in a bistable system by adjusting parameters
Institute of Scientific and Technical Information of China (English)
杨定新; 谷丰收; 冯国金; 杨拥民
2015-01-01
The phenomenon of logical stochastic resonance (LSR) in a nonlinear bistable system has been demonstrated by numerical simulations and experiments. However, the bit rates of the logical signals are relatively low and not suitable for practical applications. First, we examine the responses of the bistable system with fixed parameters to logic input signals of different bit rates, showing that arbitrarily high bit rate LSR cannot be achieved in a bistable system with fixed parameters. Then, a normalized transform of the LSR bistable system is introduced through a variable substitution. Based on the transform, it is found that LSR for logic signals of arbitrarily high bit rate can be achieved in a bistable system by adjusting the system parameters, setting the bias value, and properly scaling the amplitudes of the logic input signals and the noise. Finally, the desired OR and AND logic outputs for high bit rate logic inputs are obtained by numerical simulations. The study improves the feasibility of LSR in practical engineering applications.
Achieving high bit rate logical stochastic resonance in a bistable system by adjusting parameters
Yang, Ding-Xin; Gu, Feng-Shou; Feng, Guo-Jin; Yang, Yong-Min; Ball, Andrew
2015-11-01
The phenomenon of logical stochastic resonance (LSR) in a nonlinear bistable system has been demonstrated by numerical simulations and experiments. However, the bit rates of the logical signals are relatively low and not suitable for practical applications. First, we examine the responses of the bistable system with fixed parameters to logic input signals of different bit rates, showing that arbitrarily high bit rate LSR cannot be achieved in a bistable system with fixed parameters. Then, a normalized transform of the LSR bistable system is introduced through a variable substitution. Based on the transform, it is found that LSR for logic signals of arbitrarily high bit rate can be achieved in a bistable system by adjusting the system parameters, setting the bias value, and properly scaling the amplitudes of the logic input signals and the noise. Finally, the desired OR and AND logic outputs for high bit rate logic inputs are obtained by numerical simulations. The study improves the feasibility of LSR in practical engineering applications. Project supported by the National Natural Science Foundation of China (Grant No. 51379526).
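The LSR mechanism the two records above describe can be sketched with an Euler-Maruyama integration of the standard bistable system; the bias, noise intensity, and logic-input mapping below are illustrative values, not parameters from the paper:

```python
import numpy as np

def lsr_gate(i1, i2, bias=0.2, D=0.02, dt=0.01, steps=20000, seed=1):
    """Euler-Maruyama integration of dx/dt = x - x^3 + bias + I + noise,
    where I encodes the two logic inputs (0/1 mapped to -0.5/+0.5).
    Each evaluation restarts from x = 0, so weak noise suffices here;
    the well the trajectory ends in is read out as the logic output."""
    rng = np.random.default_rng(seed)
    I = (i1 - 0.5) + (i2 - 0.5)
    x = 0.0
    for _ in range(steps):
        x += (x - x**3 + bias + I) * dt \
             + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    return 1 if x > 0 else 0

# With a positive bias the deeper well follows the OR truth table.
truth = {(a, b): lsr_gate(a, b) for a in (0, 1) for b in (0, 1)}
```

Flipping the sign of the bias makes the same system realize AND, which is the reconfigurability that makes LSR attractive.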
All-optical repetition rate multiplication of pseudorandom bit sequences based on cascaded TOADs
Sun, Zhenchao; Wang, Zhi; Wu, Chongqing; Wang, Fu; Li, Qiang
2016-03-01
A scheme for all-optical repetition rate multiplication of pseudorandom bit sequences (PRBS) is demonstrated with all-optical wavelength conversion and an optical 'OR' logic gate based on cascaded Tera-Hertz Optical Asymmetric Demultiplexers (TOADs). Its feasibility is verified by multiplication experiments from 500 Mb/s to 4 Gb/s for a 2^3-1 PRBS and from 1 Gb/s to 4 Gb/s for a 2^7-1 PRBS. The scheme can be employed for rate multiplication of much longer-cycle PRBS at bit rates above 40 Gb/s when the time delay, loss and dispersion of the optical delay line are all precisely managed. The upper limit of the bit rate is ultimately restricted by the recovery time of the semiconductor optical amplifier (SOA).
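For reference, a 2^7 - 1 sequence of the kind multiplied in the experiment can be produced by a seven-stage linear feedback shift register; a maximal-length period of 127 bits contains exactly 64 ones and 63 zeros. A minimal sketch (tap and output conventions vary between references):

```python
def prbs7(seed=0b1111111):
    """One full period of a 2^7 - 1 PRBS from the x^7 + x^6 + 1 LFSR."""
    state = seed & 0x7F
    out = []
    for _ in range(127):
        fb = ((state >> 6) ^ (state >> 5)) & 1   # feedback taps: stages 7 and 6
        out.append(state & 1)                    # output the last stage
        state = ((state << 1) | fb) & 0x7F
    return out

sequence = prbs7()
```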
Implementation of Convolution Encoder and Viterbi Decoder for Constraint Length 7 and Bit Rate 1/2
Directory of Open Access Journals (Sweden)
Mr. Sandesh Y.M
2013-11-01
Convolutional codes are non-block codes that can be designed for either error detection or correction. Convolutional coding has been used in communication systems including deep-space and wireless communication. At the receiver end, the original message sequence is recovered from the received data using a Viterbi decoder. The decoder implements the Viterbi algorithm, a maximum-likelihood algorithm: based on the minimum cumulative Hamming distance, it decides the optimal trellis path most likely followed at the encoder. In this paper we present a convolutional encoder and Viterbi decoder for constraint length 7 and code rate 1/2.
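The encoder half of such a system can be modeled in a few lines (the Viterbi decoder is omitted for brevity). The generator pair 171/133 octal is the standard choice for constraint length 7, rate 1/2; note that the ordering of the two output streams is a convention that differs between references:

```python
def conv_encode(bits, g1=0o171, g2=0o133):
    """Rate-1/2, constraint-length-7 convolutional encoder using the
    standard generator polynomials 171 and 133 (octal).  The 7-bit
    window holds the current bit (MSB) and the six previous bits."""
    state = 0
    out = []
    for b in bits:
        reg = (b << 6) | state                  # window u_t ... u_{t-6}
        out.append(bin(reg & g1).count("1") % 2)  # parity over g1 taps
        out.append(bin(reg & g2).count("1") % 2)  # parity over g2 taps
        state = reg >> 1                        # shift the window by one
    return out

# Two output bits per input bit: rate 1/2.
codeword = conv_encode([1, 0, 1, 1, 0, 1])
```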
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
Bit Error Probability (BEP) provides a fundamental performance measure for wireless diversity systems. This paper presents two new exact BEP expressions for Maximal Ratio Combining (MRC) diversity systems. One BEP expression takes a closed form, while the other is derived by treating the squared sum of Rayleigh random variables as an Erlang variable. Because the existing bounds are loose and cannot properly characterize the error performance of MRC diversity systems, this paper also presents a very tight bound. The numerical analysis shows that the newly derived BEP expressions coincide with existing expressions, and that the new approximation tightly bounds the exact BEP.
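For BPSK over i.i.d. Rayleigh branches there is a classical exact closed form (the textbook result found in Proakis' Digital Communications; not necessarily one of the new expressions derived in this paper), which a Monte Carlo MRC receiver can be checked against:

```python
import math
import numpy as np

def mrc_ber_analytic(L, snr):
    """Exact BPSK bit-error probability for L-branch MRC over i.i.d.
    Rayleigh fading with average per-branch SNR `snr` (linear scale)."""
    mu = math.sqrt(snr / (1.0 + snr))
    p = (1.0 - mu) / 2.0
    return p ** L * sum(math.comb(L - 1 + k, k) * ((1.0 + mu) / 2.0) ** k
                        for k in range(L))

def mrc_ber_montecarlo(L, snr, n=200_000, seed=7):
    """Monte Carlo MRC receiver: combine branches with conjugate weights."""
    rng = np.random.default_rng(seed)
    h = (rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))) / np.sqrt(2)
    w = (rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))) / np.sqrt(2)
    r = h * np.sqrt(snr) + w                      # all-ones BPSK, unit noise
    decision = np.sum(np.conj(h) * r, axis=1).real
    return float(np.mean(decision < 0))
```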
A long lifetime, low error rate RRAM design with self-repair module
Zhiqiang, You; Fei, Hu; Liming, Huang; Peng, Liu; Jishun, Kuang; Shiying, Li
2016-11-01
Resistive random access memory (RRAM) is one of the promising candidates for future universal memory. However, it suffers from serious error rate and endurance problems. Therefore, a technical solution is greatly demanded to enhance endurance and reduce the error rate. In this paper, we propose a reliable RRAM architecture that includes two reliability modules: an error correction code (ECC) module and a self-repair module. The ECC module is used to detect errors and decrease the error rate. The self-repair module, which is proposed for the first time for RRAM, obtains the locations of error bits and repairs worn-out cells with a repair voltage. Simulation results show that the proposed architecture achieves the lowest error rate and the longest lifetime compared to previous reliable designs. Project supported by the New Century Excellent Talents in University (No. NCET-12-0165) and the National Natural Science Foundation of China (Nos. 61472123, 61272396).
Entanglement enhanced bit rate over multiple uses of a lossy bosonic channel with memory
Lupo, C.; Mancini, S.
2010-03-01
We present a study of the achievable rates for classical information transmission via a lossy bosonic channel with memory, using homodyne detection. A comparison with the memoryless case shows that the presence of memory enhances the bit rate if information is encoded in collective states, i.e., states which are entangled over different uses of the channel.
Adaptive Bit Rate Video Streaming Through an RF/Free Space Optical Laser Link
Directory of Open Access Journals (Sweden)
A. Akbulut
2010-06-01
This paper presents a channel-adaptive video streaming scheme that adjusts the video bit rate according to channel conditions and transmits video through a hybrid RF/free-space optical (FSO) laser communication system. The design criteria of the FSO link for video transmission over a 2.9 km distance are given, and adaptive bit rate video streaming according to the varying channel state over this link is studied. It is shown that the proposed structure is suitable for uninterrupted transmission of video over the hybrid wireless network with reduced packet delays and losses, even when the received power decreases due to weather conditions.
An Improved Frame-Layer Bit Allocation Scheme for H.264/AVC Rate Control
Institute of Scientific and Technical Information of China (English)
LIN Gui-xu; ZHENG Shi-bao; ZHU Liang-jia
2009-01-01
In this paper, we aim to reduce the video quality degradation caused by high motion or scene changes. An improved frame-layer bit allocation scheme for H.264/AVC rate control is proposed. First, the current frame is pre-encoded in 16×16 modes with a fixed quantization parameter (QP). The frame coding complexity is then measured based on the resulting bits and the peak signal-to-noise ratio (PSNR) of the pre-coding stage. Finally, a bit budget is calculated for the current frame according to its coding complexity and the inter-frame PSNR fluctuation, combined with the buffer status. Simulation results show that, in comparison with the rate control scheme adopted in H.264, our method is more effective at suppressing the sharp PSNR drops caused by high motion and scene changes. The visual quality variations within a sequence are also relieved.
Institute of Scientific and Technical Information of China (English)
刘宏展; 纪越峰; 刘立人
2012-01-01
Based on the space optical communication link equation, signal-to-noise ratio (SNR) equations are derived in detail for an inter-satellite coherent optical communication receiving system (ISCOCRS) with different aberrations. Taking as an example a geosynchronous-orbit 2 Gb/s binary phase shift keying (2PSK) homodyne receiving system with a communication distance of 60000 km, the effects of the tilt, defocus, coma and astigmatism aberrations on the bit error ratio (BER) of the ISCOCRS are compared systematically through numerical simulation. The results show that when the aberrations act individually, tilt has the strongest effect on the BER and astigmatism the weakest; when the aberrations act together, some of them can partly correct one another, leading to a lower BER. Taking a BER no greater than 10^-6 as the criterion, adjusting the tilt can partly correct the coma when the normalized coma is no more than 1.00, and adjusting the defocus can partly correct the astigmatism when the normalized astigmatism is no more than 0.53. Therefore, the influence of these aberrations should not be overlooked when designing the ISCOCRS. These results provide a theoretical basis for the design of the ISCOCRS.
Error-rate performance analysis of opportunistic regenerative relaying
Tourki, Kamel
2011-09-01
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture, considering Rayleigh fading channels. © 2011 IEEE.
Dispersion Monitoring techniques in High Bit-rate Optical Communication Systems
Institute of Scientific and Technical Information of China (English)
SANG Xin-zhu; YU Chong-xiu; ZHANG Qi; WANG Xu
2004-01-01
For efficient dynamic dispersion compensation, it is essential to monitor dispersion accurately. The main existing dispersion monitoring techniques in high-bit-rate optical communication systems are presented, along with their operating principles and research progress. The advantages and disadvantages of these methods are analyzed and discussed.
Power consumption analysis of constant bit rate data transmission over 3G mobile wireless networks
DEFF Research Database (Denmark)
Wang, Le; Ukhanova, Ann; Belyaev, Evgeny
2011-01-01
This paper presents the analysis of the power consumption of data transmission with constant bit rate over 3G mobile wireless networks. Our work includes the description of the transition state machine in 3G networks, followed by the detailed energy consumption analysis and measurement results...
An Investigation on Advantages of Utilizing Adaptive Bit Rate for LEO Satellite Link Engineering
Directory of Open Access Journals (Sweden)
Mehdi Hosseini
2012-09-01
The paper investigates the advantages of using an adaptive bit rate in the communication link of a LEO satellite. It is assumed that there is a communication subsystem on board responsible for gathering the information sent by a number of ground user terminals. The subsystem, which operates based on a Store-and-Forward (SAF) scenario, contains two communication links: one for receiving data from user terminals (the store case), and the other for forwarding the stored data to an Earth station (the forward case). The current work aims to increase the volume of data forwarded to the Earth station. To this end, the forward-case bit rate is varied adaptively, and the improvement achieved is evaluated by analyzing the power budget under practical conditions. The results obtained for a sample LEO satellite show that using an adaptive bit rate instead of a fixed bit rate can increase the daily data exchange by up to about 100%.
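The power-budget reasoning behind an adaptive bit rate can be sketched as a simple link-budget calculation; every number passed in below is an illustrative figure, not a parameter of the paper's satellite:

```python
import math

def max_bit_rate(p_tx_dbm, g_tx_db, g_rx_db, losses_db,
                 freq_hz, dist_m, ebn0_req_db, n0_dbm_hz):
    """Maximum supportable bit rate from a simple link budget:
    Rb = Pr / (N0 * (Eb/N0)_req), evaluated in dB terms."""
    fspl_db = 20.0 * math.log10(4.0 * math.pi * dist_m * freq_hz / 3.0e8)
    pr_dbm = p_tx_dbm + g_tx_db + g_rx_db - losses_db - fspl_db
    rb_db = (pr_dbm - n0_dbm_hz) - ebn0_req_db    # 10*log10(Rb)
    return 10.0 ** (rb_db / 10.0)

# Illustrative numbers only (2.2 GHz carrier, 1000 km slant range).
rb = max_bit_rate(30.0, 10.0, 35.0, 3.0, 2.2e9, 1.0e6, 9.6, -174.0)
```

Doubling the slant range adds 6 dB of path loss and therefore quarters the supportable rate; an adaptive-rate forward link exploits exactly this headroom near zenith instead of transmitting at the fixed worst-case rate.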
Achievable Rates for Four-Dimensional Coded Modulation with a Bit-Wise Receiver
Alvarado, Alex
2013-01-01
We study achievable rates for four-dimensional (4D) constellations for spectrally efficient optical systems based on a (suboptimal) bit-wise receiver. We show that PM-QPSK outperforms the best 4D constellation designed for uncoded transmission by approximately 1 dB. Numerical results using LDPC codes validate the analysis.
A 14-bit 200-MS/s time-interleaved ADC with sample-time error calibration
Institute of Scientific and Technical Information of China (English)
Zhang Yiwen; Chen Chixiao; Yu Bei; Ye Fan; Ren Junyan
2012-01-01
Sample-time error between channels degrades the resolution of time-interleaved analog-to-digital converters (TIADCs). A calibration method implemented in mixed circuits with low complexity and fast convergence is proposed in this paper. The algorithm for detecting sample-time error is based on correlation and is applicable to wide-sense stationary input signals. The detected sample-time error is corrected by a voltage-controlled sampling switch. The experimental result of a 2-channel 200-MS/s 14-bit TIADC shows that the signal-to-noise and distortion ratio improves by 19.1 dB, and the spurious-free dynamic range improves by 34.6 dB for a 70.12-MHz input after calibration. The calibration convergence time is about 20000 sampling intervals.
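The spectral signature that such calibration removes can be reproduced in a few lines: a mistimed second channel creates an image tone at fs/2 - fin. The sample rate matches the abstract, but the input frequency and skew below are illustrative, not the reported measurement conditions:

```python
import numpy as np

def tiadc_samples(fin, fs, n, skew):
    """Two-way time-interleaved sampling of a unit sine wave; the second
    channel's sampling instants are offset by `skew` seconds."""
    t = np.arange(n) / fs
    t[1::2] += skew                     # channel-2 sample-time error
    return np.sin(2 * np.pi * fin * t)

fs, n = 200e6, 4096
fin = fs * 301 / n                      # coherent frequency, no leakage
win = np.hanning(n)
spec_skew = np.abs(np.fft.rfft(tiadc_samples(fin, fs, n, 10e-12) * win))
spec_ideal = np.abs(np.fft.rfft(tiadc_samples(fin, fs, n, 0.0) * win))
sig_bin = 301                           # fundamental
img_bin = n // 2 - 301                  # skew-induced image at fs/2 - fin
```

Even a 10 ps skew lifts the image bin far above the ideal spectrum, which is why picosecond-level detection and correction is needed for 14-bit resolution.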
High speed and adaptable error correction for megabit/s rate quantum key distribution.
Dixon, A R; Sato, H
2014-12-02
Quantum key distribution (QKD) is moving from its theoretical foundation of unconditional security toward real-world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often unable to support the high raw bit rates. In a complete implementation this creates a bottleneck that unnecessarily limits the final secure key rate of the system. Here we report details of an equally high-rate error correction scheme which is further adaptable to maximize the secure key rate under a range of different operating conditions. The error correction is implemented both on CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0 to 80 km.
Directory of Open Access Journals (Sweden)
S. Chris Prema
2015-01-01
A rate-request-sequenced bit loading reallocation algorithm is proposed. The spectral holes detected by spectrum sensing (SS) in cognitive radio (CR) are used by secondary users. The algorithm is applicable to Discrete Multitone (DMT) systems for secondary user reallocation. DMT systems support different modulations on different subchannels according to the signal-to-noise ratio (SNR). The maximum bits and power that can be allocated to each subband are determined by the channel state information (CSI) and the secondary user's modulation scheme. The spectral holes, or free subbands, are allocated to secondary users depending on the user rate request and the subchannel capacity. A comparison is made between random and sequenced rate requests of secondary users for subchannel allocation. Simulations show that the sequenced rate request achieves higher spectral efficiency with reduced complexity.
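The per-subchannel computation such DMT loading algorithms build on is b_i = floor(log2(1 + SNR_i / Gamma)), where Gamma is the SNR gap of the target modulation and coding; a minimal sketch with an illustrative 9.8 dB gap (a common uncoded-QAM figure, not a parameter from the paper):

```python
import math

def bit_loading(snrs_db, gap_db=9.8, max_bits=10):
    """Per-subchannel bit loading b_i = floor(log2(1 + SNR_i / Gamma)).
    gap_db is the SNR gap of the target modulation/coding (illustrative)."""
    gap = 10.0 ** (gap_db / 10.0)
    bits = []
    for snr_db in snrs_db:
        snr = 10.0 ** (snr_db / 10.0)
        bits.append(min(max_bits, int(math.log2(1.0 + snr / gap))))
    return bits
```

A secondary user's rate request is then served by summing b_i over the free subbands assigned to it, which is where the sequencing of requests affects spectral efficiency.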
A Contourlet-Based Embedded Image Coding Scheme on Low Bit-Rate
Song, Haohao; Yu, Songyu
Contourlet transform (CT) is a new image representation method, which can efficiently represent contours and textures in images. However, CT is an overcomplete transform with a redundancy factor of 4/3. If it is applied to image compression straightforwardly, the encoding bit rate may increase to meet a given distortion. This fact has hindered the development of CT-based image compression techniques with satisfactory performance. In this paper, we analyze the distribution of significant contourlet coefficients in different subbands and propose a new contourlet-based embedded image coding (CEIC) scheme for low bit rates. Well-known wavelet-based embedded image coding (WEIC) algorithms such as EZW, SPIHT and SPECK can be easily integrated into the proposed scheme by constructing a virtual low-frequency subband, modifying the coding framework of the WEIC algorithms according to the structure of contourlet coefficients, and adopting a high-efficiency significant-coefficient scanning scheme. The proposed CEIC scheme provides an embedded bit stream, which is desirable in heterogeneous networks. Our experiments demonstrate that the proposed scheme achieves better compression performance at low bit rates. Furthermore, thanks to the contourlet transform adopted in the proposed scheme, more contours and textures in the coded images are preserved, ensuring superior subjective quality.
Soury, Hamza
2012-06-01
This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed-form expression in terms of Fox's H-function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer-based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.
Influence of the FEC Channel Coding on Error Rates and Picture Quality in DVB Baseband Transmission
Directory of Open Access Journals (Sweden)
T. Kratochvil
2006-09-01
The paper deals with a component analysis of DTV (Digital Television) and DVB (Digital Video Broadcasting) baseband channel coding. The principles of the FEC (Forward Error Correction) error-protection codes used are briefly outlined, and the simulation model implemented in Matlab is presented. Results of the achieved bit and symbol error rates and the corresponding picture quality evaluation are presented, including the influence of the channel coding on transmitted RGB images and their noise rates related to MOS (Mean Opinion Score). The paper concludes with a comparison of the efficiency of the DVB channel codes.
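The DVB FEC chain itself combines Reed-Solomon and convolutional coding; as a minimal stand-in for the error-protection principle being simulated, a (7,4) Hamming code already shows how redundancy turns channel bit errors into corrected output:

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I4 | P], H = [P^T | I3].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Map 4 information bits to a 7-bit codeword."""
    return (np.asarray(data4) @ G) % 2

def decode(recv7):
    """Correct up to one flipped bit, then return the 4 data bits."""
    recv7 = np.asarray(recv7).copy()
    s = (H @ recv7) % 2
    if s.any():
        # The syndrome equals the column of H at the error position.
        err = int(np.argmax((H.T == s).all(axis=1)))
        recv7[err] ^= 1
    return recv7[:4]
```

Every single-bit channel error in a block is repaired exactly, which is the mechanism (at much larger scale and strength in RS and convolutional codes) behind the BER and MOS improvements the paper evaluates.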
Automatic network-adaptive ultra-low-bit-rate video coding
Chien, Wei-Jung; Lam, Tuyet-Trang; Abousleman, Glen P.; Karam, Lina J.
2006-05-01
This paper presents a software-only, real-time video coder/decoder (codec) for use with low-bandwidth channels where the bandwidth is unknown or varies with time. The codec incorporates a modified JPEG2000 core and interframe predictive coding, and can operate with network bandwidths of less than 1 kbits/second. The encoder and decoder establish two virtual connections over a single IP-based communications link. The first connection is UDP/IP guaranteed throughput, which is used to transmit the compressed video stream in real time, while the second is TCP/IP guaranteed delivery, which is used for two-way control and compression parameter updating. The TCP/IP link serves as a virtual feedback channel and enables the decoder to instruct the encoder to throttle back the transmission bit rate in response to the measured packet loss ratio. It also enables either side to initiate on-the-fly parameter updates such as bit rate, frame rate, frame size, and correlation parameter, among others. The codec also incorporates frame-rate throttling whereby the number of frames decoded is adjusted based upon the available processing resources. Thus, the proposed codec is capable of automatically adjusting the transmission bit rate and decoding frame rate to adapt to any network scenario. Video coding results for a variety of network bandwidths and configurations are presented to illustrate the vast capabilities of the proposed video coding system.
Energy Technology Data Exchange (ETDEWEB)
Doiron, H.H.; Deane, J.D.
1982-09-01
Effects of hydraulic cleaning parameter variations on rate of penetration response of 7 7/8 inch diameter soft formation insert bits have been measured in laboratory drilling tests. Tests were conducted in Mancos Shale rock samples at 700 psi and 4000 psi simulated overbalance pressure conditions using a 9.1 pound per gallon bentonite-barite water base drilling fluid. Bit hydraulic horsepower was varied from 0.72 to 9.5 HHP/in² using two or three nozzles in sizes ranging from 9/32 to 14/32 inches in diameter. Some improvements in ROP at constant bit hydraulic horsepower and impact force levels were obtained with two nozzle configurations vs. three nozzle configurations, but improvements were not consistently out of the range of normal test to test variations. Reduction in drilling costs due to the measured response of ROP to improved hydraulic cleaning is compared to increased operating costs required to provide additional hydraulics. Results indicate that bit hydraulic horsepower levels in excess of popular rules of thumb are cost effective in slow drilling due to high overbalance pressure.
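The bit hydraulic horsepower figures quoted above follow from the standard oilfield formula HHP = ΔP·Q/1714 (ΔP in psi, Q in gal/min), normalized by bit face area to give HHP/in² (HSI). The pressure drop and flow rate below are illustrative inputs, not measurements from the reported tests:

```python
import math

def bit_hydraulics(delta_p_psi, flow_gpm, bit_diameter_in):
    """Bit hydraulic horsepower (oilfield formula HHP = dP * Q / 1714)
    and HSI, the horsepower per square inch of bit face area."""
    hhp = delta_p_psi * flow_gpm / 1714.0
    area_sq_in = math.pi * bit_diameter_in ** 2 / 4.0
    return hhp, hhp / area_sq_in

# Illustrative bit pressure drop and flow rate for a 7 7/8 in bit.
hhp, hsi = bit_hydraulics(1200.0, 350.0, 7.875)
```

These illustrative numbers land near 5 HHP/in², inside the 0.72 to 9.5 HHP/in² range spanned by the tests.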
Multiple Bit Error Tolerant Galois Field Architectures Over GF (2m
Directory of Open Access Journals (Sweden)
Mahesh Poolakkaparambil
2012-06-01
Radiation-induced transient faults such as single event upsets (SEU) and multiple event upsets (MEU) in memories are well researched. As a result of technology scaling, logic blocks are also vulnerable to malfunction when deployed in radiation-prone environments. However, the current literature lacks efforts to mitigate such issues in digital logic circuits exposed to natural radiation or subjected to malicious attacks by an eavesdropper using highly energized particles. This can lead to catastrophe in critical applications such as widely used cryptographic hardware. In this paper, novel dynamic error correction architectures based on BCH codes are proposed for correcting multiple errors, making the circuits robust against radiation-induced faults irrespective of the location of the errors. As a benchmark test case, a finite field multiplier circuit is considered as the functional block that may be the target of major attacks. The proposed scheme can also handle stuck-at faults, which are a major cause of failure affecting the overall yield of a nano-CMOS integrated chip. The experimental results show that the proposed dynamic error detection and correction architecture yields a 50% reduction in critical path delay by dynamically bypassing the error correction logic when no error is present. The area overhead for the larger multiplier is within 150%, which is 33% lower than TMR and comparable to the 130% overhead of single-error-correcting Hamming and LDPC based techniques.
Kartiwa, Iwa; Jung, Sang-Min; Hong, Moon-Ki; Han, Sang-Kook
2013-06-01
We experimentally demonstrate millimeter-wave signal generation by the optical carrier suppression (OCS) method using a single-drive Mach-Zehnder modulator as a seed light source for a 20 Gb/s WDM-OFDM-PON over 20-km single-fiber loopback transmission based on cost-effective RSOA modulation. A practical discrete-rate adaptive bit loading algorithm was employed in this colorless ONU system to maximize the achievable bit rate for an average bit error rate (BER) below 2 × 10^-3.
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate JPEG formatted color images may allow for more compressed images while maintaining equivalent quality at a smaller file size or bitrate. For RGB, an image is decomposed into three color bands--red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually, and objectively, as recorded in the computed PSNR values.
3D video bit rate adaptation decision taking using ambient illumination context
Directory of Open Access Journals (Sweden)
G. Nur Yilmaz
2014-09-01
3-Dimensional (3D) video adaptation decision taking is an open field in which few researchers have carried out investigations compared to 3D video display, coding, etc. Moreover, utilizing ambient illumination as an environmental context for 3D video adaptation decision taking has not been studied in the literature to date. In this paper, a user perception model, based on determining the perception characteristics of a user viewing 3D video content under a particular ambient illumination condition, is proposed. Using the proposed model, a 3D video bit rate adaptation decision taking technique is developed to determine the adapted bit rate for the 3D video content so as to maintain 3D video quality perception as the ambient illumination condition changes. Experimental results demonstrate that the proposed technique is capable of exploiting changes in ambient illumination level to use network resources more efficiently without sacrificing 3D video quality perception.
Improved DCT-based image coding and decoding methods for low-bit-rate applications
Jung, Sung-Hwan; Mitra, Sanjit K.
1994-05-01
The discrete cosine transform (DCT) is well known for highly efficient coding performance, and it is widely used in many image compression applications. However, in low-bit-rate coding it produces undesirable block artifacts that are visually unpleasant. In addition, in many applications, faster compression and easier VLSI implementation of DCT coefficients are also important issues. The removal of the block artifacts and faster DCT computation are therefore of practical interest. In this paper, we outline a modified DCT computation scheme that provides a simple, efficient solution to the reduction of the block artifacts while achieving faster computation. We also derive a similar solution for the efficient computation of the inverse DCT. We have applied the new approach to the low-bit-rate coding and decoding of images. Initial simulation results on real images have verified the improved performance obtained using the proposed method over the standard JPEG method.
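The DCT-II core referred to above can be built directly from its orthonormal basis matrix; a minimal sketch of the standard transform (not the paper's modified fast computation scheme):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (row k is frequency k)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)             # DC row normalization
    return C * np.sqrt(2.0 / n)

D = dct_matrix()
block = np.full((8, 8), 128.0)          # a flat 8x8 image block
coeffs = D @ block @ D.T                # separable 2-D DCT of the block
```

A flat block compacts all its energy into the single DC coefficient, the energy-compaction property block coders exploit; coarse quantization of the remaining coefficients at low bit rates is what produces the block artifacts the paper addresses.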
Very Low Bit-Rate Video Coding Using Motion Compensated 3-D Wavelet Transform
Institute of Scientific and Technical Information of China (English)
Anonymous
1999-01-01
A new motion-compensated 3-D wavelet transform (MC-3DWT) video coding scheme is presented in this paper. The new coding scheme achieves good performance in average PSNR, compression ratio and visual quality of reconstructions compared with existing 3-D wavelet transform (3DWT) coding methods and the motion-compensated 2-D wavelet transform (MC-WT) coding method. The new MC-3DWT coding scheme is suitable for very low bit-rate video coding.
Extending the lifetime of a quantum bit with error correction in superconducting circuits
Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.
2016-08-01
Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The 'break-even' point of QEC, at which the lifetime of a qubit exceeds the lifetime of the constituents of the system, has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0>f and |1>f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.
Consideration of Direct Bit-Rate Measuring Method based on Extracting Envelope Signal
Otani, Akihito; Tsuda, Yukio; Igawa, Koji; Shida, Katsunori
We previously developed an optical sampling oscilloscope (EDT-OSO) based on an envelope detection triggering method. The EDT-OSO can stably measure eye-diagram waveforms of signals exceeding 100 Gbps without an external high-frequency clock signal. However, far-end waveform measurements over a long distance could not be realized, because the EDT-OSO requires the 10-MHz time bases of the EDT-OSO and of the light-under-test (LUT) generator to be linked for synchronization. To overcome this drawback, we developed a direct bit-rate measuring method that virtually synchronizes both 10-MHz time bases, and simultaneously a self-synchronized EDT-OSO (SSEDT-OSO) based on this method. We confirmed, by evaluating the standard deviation, that the bit-rate measurement repeatability of the SSEDT-OSO was from 10-9 to 10-8, and that the SSEDT-OSO could measure an eye-diagram without linking the 10-MHz time bases. This paper explains the basic principle for measuring the bit rate of the LUT directly. Furthermore, we describe the configuration of the SSEDT-OSO and evaluation results.
On the average capacity and bit error probability of wireless communication systems
Yilmaz, Ferkan
2011-12-01
Analyses of the average binary error probability and the average capacity of wireless communication systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probability and the average capacity of single- and multiple-link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper can be easily calculated and is applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.
Error-associated behaviors and error rates for robotic geology
Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin
2004-01-01
This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest. Errors can occur at any of the cognitive levels.
Yilmaz, Ferkan
2014-04-01
The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed form - by means of the MGF of the signal-to-noise ratio. However, as presented in [1], specifically indicated in [2], and to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
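As an illustration of the MGF approach (a minimal sketch of the general idea, not the paper's Wojnar-form expressions): for BPSK over a single Rayleigh-fading link, the average BEP follows from a single finite integral of the SNR's MGF, which can be checked against the classical closed form.

```python
import numpy as np

def mgf_rayleigh_snr(s, gbar):
    # MGF of the exponentially distributed SNR of a Rayleigh-fading link:
    # M(s) = 1 / (1 - s * gbar), where gbar is the average SNR.
    return 1.0 / (1.0 - s * gbar)

def avg_bep_bpsk(gbar, n=2000):
    # MGF-based average BEP of BPSK:
    #   Pb = (1/pi) * Integral_0^{pi/2} M(-1/sin^2 theta) d theta,
    # evaluated here with a midpoint rule on n points.
    theta = (np.arange(n) + 0.5) * (np.pi / 2) / n
    return np.mean(mgf_rayleigh_snr(-1.0 / np.sin(theta) ** 2, gbar)) / 2.0
```

The single-fold integral agrees with the well-known closed form 0.5*(1 - sqrt(gbar/(1+gbar))) for Rayleigh fading, which is a useful sanity check on the numerics.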
Alpha coding of arbitrarily shaped objects for low-bit-rate MPEG-4
Hadar, Ofer; Folkman, Hagai
2001-11-01
This paper presents a new scheme for compact shape coding which can reduce the needed bandwidth for low bit rate MPEG-4 applications. Our scheme is based on a coarse representation of the alpha plane with a block size resolution of 8x8 pixels. This arrangement saves bandwidth and reduces the algorithm complexity (number of computations), as compared to the Content-based Arithmetic Encoding (CAE) algorithm. In our algorithm, we encode the alpha plane of a macroblock with only 4 bits, while we can further reduce the number of encoding bits by using the Huffman code. The encoded blocks are only contour macroblocks; transparent macroblocks are considered as background macroblocks, while opaque macroblocks are considered as object macroblocks. We show that the bandwidth saving in representing the alpha plane can reach a factor of 9.5. Such a scheme is appropriate for mobile applications where there is a lack of both bandwidth and processing power. We also speculate that our scheme will be compatible with the MPEG-4 standard.
Mitra, Sunanda; Yang, Shu Y.
1999-01-01
An adaptive vector quantizer (VQ) using a clustering technique known as adaptive fuzzy leader clustering (AFLC), similar in concept to deterministic annealing for VQ codebook design, has been developed. This vector quantizer, AFLC-VQ, has been designed to vector quantize wavelet-decomposed subimages with optimal bit allocation. The high-resolution subimages at each level have been statistically analyzed to conform to generalized Gaussian probability distributions by selecting the optimal number of filter taps. The adaptive characteristics of AFLC-VQ result from AFLC, an algorithm that uses self-organizing neural networks with fuzzy membership values of the input samples for upgrading the cluster centroids based on well known optimization criteria. By generating codebooks containing codewords of varying bits, AFLC-VQ is capable of compressing large color/monochrome medical images at extremely low bit rates (0.1 bpp and less) and yet yielding high fidelity reconstructed images. The quality of the reconstructed images formed by AFLC-VQ has been compared with JPEG and EZW, the standard and the well known wavelet based compression technique (using scalar quantization), respectively, in terms of statistical performance criteria as well as visual perception. AFLC-VQ exhibits much better performance than the above techniques. JPEG and EZW were chosen as comparative benchmarks since these have been used in radiographic image compression. The superior performance of AFLC-VQ over LBG-VQ has been reported in earlier papers.
Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure
2013-09-01
High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest using bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
A forward error correction technique using a high-speed, high-rate single chip codec
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
1989-01-01
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.
Power consumption analysis of constant bit rate video transmission over 3G networks
DEFF Research Database (Denmark)
Ukhanova, Ann; Belyaev, Evgeny; Wang, Le
2012-01-01
This paper presents an analysis of the power consumption of video data transmission with constant bit rate over 3G mobile wireless networks. The work includes the description of the radio resource control transition state machine in 3G networks, followed by a detailed power consumption analysis for the 3GPP transition state machine that allows power consumption on a mobile device to be decreased, taking signaling traffic, buffer size and latency restrictions into account. Furthermore, we discuss the gain in power consumption vs. PSNR for transmitted video and show the possibility of performing power consumption management based on the requirements for the video quality.
Monitoring Error Rates In Illumina Sequencing
Manley, Leigh J.; Ma, Duanduan; Levine, Stuart S.
2016-01-01
Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted. PMID:27672352
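The underlying metric is simple to reproduce. The sketch below (an assumption-laden toy, not the published PPR tool: it takes reads already aligned to a reference with no indels) computes, for each cycle, the percentage of reads with zero mismatches up to and including that cycle:

```python
import numpy as np

def ppr_curve(reads, reference):
    """Percent-perfect-reads curve: share of reads matching the reference
    exactly through each cycle (illustrative sketch)."""
    R = np.array([list(r) for r in reads])   # reads as a cycle-by-read matrix
    ref = np.array(list(reference))
    mismatch = R != ref                      # per-cycle mismatch flags
    perfect_through = np.cumsum(mismatch, axis=1) == 0  # still perfect at cycle c
    return perfect_through.mean(axis=0) * 100.0
```

A read with an early error drags the curve down for all later cycles, which is what makes the plot useful for spotting cycle-dependent quality loss.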
An Improved Rate Matching Method for DVB Systems Through Pilot Bit Insertion
Directory of Open Access Journals (Sweden)
Seyed Mohammad-Sajad Sadough
2012-09-01
Classically, obtaining different coding rates in turbo codes is achieved through the well-known puncturing procedure. However, puncturing is a critical procedure, since the way the encoded sequence is punctured directly influences the decoding performance. In this work, we propose to mix the data sequence at the turbo encoder input inside the Digital Video Broadcasting (DVB) standard with some pilot (perfectly known) bits. By using variable pilot insertion rates, we achieve different coding rates with more flexibility. The proposed scheme is able to use a less complex mother code compared to that used in a conventional punctured turbo code. We also analyze the effect of different types of pilot insertion, such as random and periodic schemes. Simulation results provided in the context of DVB show that in addition to providing flexible encoder design and reducing the encoder complexity, pilot insertion can slightly improve the performance of turbo decoders, compared to a conventional punctured turbo code.
Bit Rate Maximising Per-Tone Equalisation with Adaptive Implementation for DMT-Based Systems
Directory of Open Access Journals (Sweden)
Suchada Sitjongsataporn
2009-01-01
We present a bit rate maximising per-tone equalisation (BM-PTEQ) cost function that is based on an exact subchannel SNR as a function of the per-tone equaliser in discrete multitone (DMT) systems. We then introduce the proposed BM-PTEQ criterion, whose solution is shown to inherit the methodology of the existing bit rate maximising time-domain equalisation (BM-TEQ). By solving a nonlinear BM-PTEQ cost function, an adaptive BM-PTEQ approach based on a recursive Levenberg-Marquardt (RLM) algorithm is presented with the adaptive inverse square-root (iQR) algorithm for DMT-based systems. Simulation results confirm that the performance of the proposed adaptive iQR RLM-based BM-PTEQ converges close to the performance of the proposed BM-PTEQ. Moreover, the performance of both these proposed BM-PTEQ algorithms is improved as compared with the BM-TEQ.
Speech Compression of Thai Dialects with Low-Bit-Rate Speech Coders
Directory of Open Access Journals (Sweden)
Suphattharachai Chomphan
2012-01-01
Problem statement: In modern speech communication at low bit rate, speech coding deteriorates the characteristics of the coded speech significantly. Considering the dialects in Thai, the coding quality of the four main dialects spoken by Thai people residing in the four core regions, including the central, north, northeast and south regions, has not been studied. Approach: This study presents a comparative study of the coding quality of the four main Thai dialects using different low-bit-rate speech coders, including the Conjugate Structure Algebraic Code Excited Linear Predictive (CS-ACELP) coder and the Multi-Pulse based Code Excited Linear Predictive (MP-CELP) coder. Objective and subjective tests have been conducted to evaluate the coding quality of the four main dialects. Results: From the experimental results, both tests show that the coding quality of the North dialect is highest, while the coding quality of the Northeast dialect is lowest. Moreover, the coding quality of male speech is mostly higher than that of female speech. Conclusion: From the study, it can clearly be seen that the coding qualities of the Thai dialects are different.
Error rate performance of Hybrid QAM-FSK in OFDM systems exhibiting low PAPR
Institute of Scientific and Technical Information of China (English)
LATIF Asma; GOHAR Nasir D.
2009-01-01
Multicarrier transmission systems like orthogonal frequency division multiplexing (OFDM) support high data rates and generally require no equalization at the receiver, making them simple and efficient. This paper studies the design and performance analysis of a hybrid modulation system derived from multi-frequency and MQAM signals, employed in OFDM. This modulation scheme has better bit error rate (BER) performance and exhibits low PAPR. The proposed hybrid modulator reduces PAPR while keeping the OFDM transceiver design simple, as it requires either no side information or very little (only one bit) to be sent, and is efficient for an arbitrary number of subcarriers. The results of the implementations are compared with those of a conventional OFDM system.
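A minimal baseline for such BER studies (an illustrative sketch, not the hybrid QAM-FSK scheme itself) is BPSK-modulated OFDM over AWGN, whose bit error rate should match the single-carrier BPSK theory Q(sqrt(2 Eb/N0)); the subcarrier count and symbol count below are arbitrary choices:

```python
import numpy as np

def ofdm_bpsk_ber(ebn0_db, n_sub=64, n_sym=2000, seed=0):
    """Monte Carlo BER of BPSK-OFDM over an AWGN channel (sketch)."""
    rng = np.random.default_rng(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, (n_sym, n_sub))
    x = 2.0 * bits - 1.0                            # BPSK symbol per subcarrier
    tx = np.fft.ifft(x, axis=1) * np.sqrt(n_sub)    # unit average power per sample
    sigma = np.sqrt(1.0 / (2.0 * ebn0))             # noise std per real dim (Eb = 1)
    noise = sigma * (rng.standard_normal(tx.shape)
                     + 1j * rng.standard_normal(tx.shape))
    y = np.fft.fft(tx + noise, axis=1) / np.sqrt(n_sub)
    return np.mean((y.real > 0) != bits)            # hard decision per subcarrier
```

At Eb/N0 = 6 dB the theoretical BER is Q(2.82), roughly 2.4e-3, and the simulation should land near that value; over a fading channel the same harness can be extended with per-subcarrier channel gains.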
Logical error rate in the Pauli twirling approximation.
Katabarwa, Amara; Geller, Michael R
2015-09-30
Evaluating the performance of error correction protocols is necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes.
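The twirl itself is easy to reproduce. The sketch below (assuming a single-qubit amplitude-damping channel as the worked example, not the paper's 9-qubit circuit) computes the Pauli-channel probabilities obtained by twirling a channel given in Kraus form over the Pauli basis:

```python
import numpy as np

def pauli_twirl_probs(kraus):
    """Pauli-twirl a single-qubit channel given by its Kraus operators.
    Returns the probabilities (p_I, p_X, p_Y, p_Z) of the resulting
    Pauli channel: p_P = sum_k |tr(P^dag K_k) / 2|^2."""
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    return tuple(
        sum(abs(np.trace(P.conj().T @ K) / 2.0) ** 2 for K in kraus)
        for P in (I, X, Y, Z)
    )

# Example: amplitude damping with decay probability gamma (assumed test channel).
gamma = 0.1
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
pI, pX, pY, pZ = pauli_twirl_probs([K0, K1])
```

For amplitude damping the twirl gives p_X = p_Y = gamma/4 and the four probabilities sum to one, which is a quick correctness check on any PTA implementation.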
A very low bit rate video coder based on vector quantization.
Corte-Real, L; Alves, A P
1996-01-01
Describes a video coder based on a hybrid DPCM-vector quantization algorithm that is suited for bit rates ranging from 8-16 kb/s. The proposed approach involves segmenting difference images into variable-size and variable-shape blocks and performing segmentation and motion compensation simultaneously. The purpose of obtaining motion vectors for variable-size and variable-shape blocks is to improve the quality of motion estimation, particularly in those areas where the edges of moving objects are situated. For the larger blocks, decimation takes place in order to simplify vector quantization. For very active blocks, which are always of small dimension, a specific vector quantizer has been applied, the fuzzy classified vector quantizer (FCVQ). The coding algorithm described displays good performance in the compression of test sequences at the rates of 8 and 16 kb/s; the signal-to-noise ratios obtained are good in both cases. The complexity of the coder implementation is comparable to that of conventional hybrid coders, while the decoder is much simpler in this proposal.
A SPEAKER ADAPTABLE VERY LOW BIT RATE SPEECHCODER BASED ON HMM
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2000-01-01
This paper presents a speaker-adaptable very low bit rate speech coder based on HMM (Hidden Markov Model) which includes the dynamic features, i.e., delta and delta-delta parameters of speech. The performance of this speech coder has been improved by using the dynamic features generated by an algorithm for speech parameter generation from HMM, because the generated speech parameter vectors reflect not only the means of the static and dynamic feature vectors but also their covariances. The encoder part is equivalent to an HMM-based phoneme recognizer and transmits phoneme indexes, state durations, pitch information and speaker characteristics adaptation vectors to the decoder. The decoder receives those messages and concatenates the phoneme HMM sequence according to the phoneme indexes. Then the decoder generates a sequence of mel-cepstral coefficient vectors using the HMM-based speech parameter generation technique. Finally, the decoder synthesizes speech by directly exciting the MLSA (Mel Log Spectrum Approximation) filter with the generated mel-cepstral coefficient vectors, according to the pitch information.
DEFF Research Database (Denmark)
Vaa, Michael; Mikkelsen, Benny; Jepsen, Kim Stokholm;
1996-01-01
A novel bit-rate flexible and very power efficient all-optical demultiplexer using differential optical control of a monolithically integrated Michelson interferometer with MQW SOAs is demonstrated at 40 to 10 Gbit/s. Gain-switched DFB lasers provide ultra-stable data and control signals.
Error rate information in attention allocation pilot models
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
Adaptive Power and Bit Allocation in Multicarrier Systems
Institute of Scientific and Technical Information of China (English)
HUO Yong-qing; PENG Qi-cong; SHAO Huai-zong
2007-01-01
We present two adaptive power and bit allocation algorithms for multicarrier systems in a frequency selective fading environment. One algorithm allocates bits so as to maximize the channel capacity; the other allocates bits so as to minimize the bit-error rate (BER). Both algorithms allocate power so as to minimize the BER. Results show that the proposed algorithms are more effective than Fischer's algorithm at low average signal-to-noise ratio (SNR). This indicates that our algorithms can achieve high spectral efficiency and high communication reliability during bad channel states. Results also show the bit and power allocations of each algorithm and the effect of the number of subcarriers on the BER performance.
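Adaptive bit allocation of this kind is often sketched as classic greedy bit loading in the Hughes-Hartogs style (shown below as an illustration under assumed QAM incremental-power costs, not necessarily the authors' exact algorithms): each successive bit goes to the subcarrier where it needs the least extra power.

```python
import numpy as np

def greedy_bit_loading(gains, total_bits, gap_db=8.8):
    """Greedy bit loading: place bits one at a time on the subcarrier with the
    smallest incremental power cost. `gains` are subchannel power gains; the
    SNR gap (gap_db) sets the QAM power needed for a target BER (sketch)."""
    gap = 10.0 ** (gap_db / 10.0)
    bits = np.zeros(len(gains), dtype=int)
    power = np.zeros(len(gains))
    for _ in range(total_bits):
        # extra power for one more bit on carrier i: gap * 2^bits_i / gain_i
        cost = gap * (2.0 ** (bits + 1) - 2.0 ** bits) / gains
        i = int(np.argmin(cost))
        bits[i] += 1
        power[i] += cost[i]
    return bits, power
```

With strongly unequal gains the loop piles bits onto the best subcarriers first, which is exactly the waterfilling-like behavior the abstract describes.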
Total Dose Effects on Error Rates in Linear Bipolar Systems
Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent
2007-01-01
The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.
Forecasting the Euro exchange rate using vector error correction models
Aarle, B. van; Bos, M.; Hlouskova, J.
2000-01-01
Forecasting the Euro Exchange Rate Using Vector Error Correction Models. — This paper presents an exchange rate model for the Euro exchange rates of four major currencies, namely the US dollar, the British pound, the Japanese yen and the Swiss franc. The model is based on the monetary approach of ex
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2011-06-01
In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
A global rate-distortion optimized approach for H.26L low bit rate robust video over the Internet
Institute of Scientific and Technical Information of China (English)
Yang Hua; Yu Songyu; Yang Songan
2005-01-01
In recent years, applications of video communication over the Internet have expanded more and more, so the demand for reliable transmission of compressed video in a packet loss environment is ever increasing. Rate-distortion optimized mode selection is a fundamental problem of video communication over packet-switched networks, but the classical R-D method only considers quantization distortion in the source and hence cannot achieve global optimality. Here we introduce a new global R-D optimal macroblock coding mode decision scheme for the new H.26L video compression standard. Based on the Bernoulli and Gilbert models of Internet packet loss, this R-D mode decision approach can result in better error robustness than the classical method. Furthermore, our experimental results also demonstrate its superior adaptive error resilience and feasibility.
Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael
2010-01-01
We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upsets (MBUs) are also discussed.
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
Pegueroles, Josep R.; Alins, Juan J.; de la Cruz, Luis J.; Mata, Jorge
2001-07-01
MPEG family codecs generate variable-bit-rate (VBR) compressed video with significant multiple-time-scale bit rate variability. Smoothing techniques remove the periodic fluctuations generated by the codification modes. However, global efficiency concerning network resource allocation remains low due to scene-time-scale variability. RCBR techniques provide suitable means of achieving higher efficiency. Among all the RCBR techniques described in the literature, the 2RCBR mechanism seems especially suitable for video-on-demand. The method takes advantage of the knowledge of the stored video to calculate the renegotiation intervals, and of the client buffer memory to perform work-ahead buffering techniques. 2RCBR achieves 100% bandwidth global efficiency with only two renegotiation levels. The algorithm is based on the study of the second derivative of the cumulative video sequence to find sharp-sloped inflection points that indicate changes in scene complexity. Due to its nature, 2RCBR is well suited to delivering MPEG2 scalable sequences over the network, because it can assure a constant bit rate to the base MPEG2 layer and use the higher-rate intervals to deliver the enhanced MPEG2 layer. However, slight changes in the algorithm parameters must be introduced to attain optimal behavior. This is verified by means of simulations on MPEG2 video patterns.
Makouei, Somayeh; Koozekanani, Z. D.
2014-12-01
In this paper, with a sophisticated modification of the modal-field distribution and a new design procedure, a single-mode fiber with ultra-low bending loss and pseudo-symmetric high bit rates for uplink and downlink, appropriate for fiber-to-the-home (FTTH) operation, is presented. The bending-loss reduction and dispersion management are done by means of a genetic algorithm. The remarkable feature of this methodology is the design of a bend-insensitive fiber without reduction of the core radius and MFD. Simulation results show a bending loss of 1.27×10-2 dB/turn at 1.55 μm for a 5 mm curvature radius. The MFD and Aeff are 9.03 μm and 59.11 μm2. Moreover, the upstream and downstream bit rates are approximately 2.38 Gbit/s-km and 3.05 Gbit/s-km.
Moretti, M.; Janssen, G.J.M.
2000-01-01
The transmission modulation system minimizes the wasted 'out of band' power. The digital data (1) to be transmitted is fed via a pulse response filter (2) to a mixer (4) where it modulates a carrier wave (4). The digital data is also fed via a delay circuit (5) and identical filter (6) to a second m
Individual Differences and Rating Errors in First Impressions of Psychopathy
Directory of Open Access Journals (Sweden)
Christopher T. A. Gillen
2016-10-01
The current study is the first to investigate whether individual differences in personality are related to improved first impression accuracy when appraising psychopathy in female offenders from thin slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and as neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.
Celandroni, Nedo; Ferro, Erina; Mihal, Vlado; Potortì, Francesco
1992-01-01
This report describes the FODA system working at variable coding and bit rates (FODA/IBEA-TDMA). FODA/IBEA is the natural evolution of the FODA-TDMA satellite access scheme working at a fixed rate of 2 Mbit/s with data 1/2-coded or uncoded. FODA-TDMA was used in the European SATINE-II experiment [8]. We remind here that the term FODA/IBEA system comprises both the FODA/IBEA-TDMA (1) satellite access scheme and the hardware prototype realised by the Marconi R.C. (U.K.). Both of them come fro...
The 95% confidence intervals of error rates and discriminant coefficients
Directory of Open Access Journals (Sweden)
Shuichi Shinmura
2015-02-01
Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms by Fisher's LDF and a quadratic discriminant function (QDF). Our four years of research was inferior to the decision-tree logic developed by the medical doctor. After this experience, we discriminated many data sets and found four problems of discriminant analysis. A Revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
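The k-fold idea in this abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the Revised IP-OLDF method itself: it uses a hypothetical midpoint-threshold "discriminant" on synthetic one-dimensional data and derives a normal-approximation 95% confidence interval for the error rate from the per-fold estimates.

```python
import random
import statistics

def kfold_error_rates(xs, ys, k=10):
    """Hold out each fold in turn; 'train' a midpoint-threshold
    classifier on the rest and record the held-out error rate."""
    idx = list(range(len(xs)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    rates = []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        m0 = statistics.mean(xs[i] for i in train if ys[i] == 0)
        m1 = statistics.mean(xs[i] for i in train if ys[i] == 1)
        thr = (m0 + m1) / 2  # hypothetical one-dimensional discriminant
        errors = sum((xs[i] > thr) != ys[i] for i in fold)
        rates.append(errors / len(fold))
    return rates

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(2, 1) for _ in range(200)]
ys = [0] * 200 + [1] * 200
rates = kfold_error_rates(xs, ys)
mean = statistics.mean(rates)
# normal-approximation 95% CI of the error rate across folds
half = 1.96 * statistics.stdev(rates) / len(rates) ** 0.5
print(round(mean - half, 3), round(mean + half, 3))
```

The spread of the per-fold rates is what supplies the standard error that the paper notes is missing from classical discriminant analysis.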
Controlling the Type I Error Rate in Stepwise Regression Analysis.
Pohlmann, John T.
Three procedures used to control Type I error rate in stepwise regression analysis are forward selection, backward elimination, and true stepwise. In the forward selection method, a model of the dependent variable is formed by choosing the single best predictor; then the second predictor which makes the strongest contribution to the prediction of…
Assessment of salivary flow rate: biologic variation and measure error.
Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.
2004-01-01
OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated measurem
DNA barcoding: error rates based on comprehensive sampling.
Directory of Open Access Journals (Sweden)
Christopher P Meyer
2005-12-01
DNA barcoding has attracted attention with promises to aid in species identification and discovery; however, few well-sampled datasets are available to test its performance. We provide the first examination of barcoding performance in a comprehensively sampled, diverse group (cypraeid marine gastropods, or cowries). We utilize previous methods for testing performance and employ a novel phylogenetic approach to calculate intraspecific variation and interspecific divergence. Error rates are estimated for (1) identifying samples against a well-characterized phylogeny, and (2) assisting in species discovery for partially known groups. We find that the lowest overall error for species identification is 4%. In contrast, barcoding performs poorly in incompletely sampled groups. Here, species delineation relies on the use of thresholds, set to differentiate between intraspecific variation and interspecific divergence. Whereas proponents envision a "barcoding gap" between the two, we find substantial overlap, leading to minimal error rates of approximately 17% in cowries. Moreover, error rates double if only traditionally recognized species are analyzed. Thus, DNA barcoding holds promise for identification in taxonomically well-understood and thoroughly sampled clades. However, the use of thresholds does not bode well for delineating closely related species in taxonomically understudied groups. The promise of barcoding will be realized only if based on solid taxonomic foundations.
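The threshold trade-off described here is easy to illustrate. The sketch below uses hypothetical distance distributions, not the cowrie data: any fixed "barcoding gap" threshold trades false splits of one species against false lumping of two, and overlapping distributions force a nonzero minimum error.

```python
import random

def threshold_error(intra, inter, t):
    """Fraction of comparisons misclassified by a fixed threshold t:
    intraspecific distances above t falsely split one species;
    interspecific distances below t falsely lump two species."""
    false_split = sum(d > t for d in intra) / len(intra)
    false_lump = sum(d < t for d in inter) / len(inter)
    return (false_split + false_lump) / 2

random.seed(1)
# hypothetical overlapping distance distributions (substitutions per site)
intra = [abs(random.gauss(0.01, 0.01)) for _ in range(1000)]
inter = [abs(random.gauss(0.08, 0.03)) for _ in range(1000)]
# sweep thresholds; the overlap keeps the best achievable error above zero
best = min(threshold_error(intra, inter, t / 1000) for t in range(1, 200))
print(round(best, 4))
```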
Gao, Ya; Sun, Junqiang; Sima, Chaotan
2016-10-01
We propose an all-optical approach for simultaneous high bit-rate return-to-zero (RZ) to non-return-to-zero (NRZ) format and LP01 to LP11 mode conversion using a weakly tilted apodized few-mode fiber Bragg grating (TA-FM-FBG) with a specific linear spectral response. The grating apodization profile is designed by utilizing an efficient inverse scattering algorithm, and the maximum refractive index modulation is adjusted based on the grating tilt angle, according to Coupled-Mode Theory. The temporal performance and operation bandwidth of the converter are discussed. The approach provides a potentially favorable device for connecting various communication systems.
DEFF Research Database (Denmark)
Diez, S.; Mecozzi, A.; Mørk, Jesper
1999-01-01
We investigate the saturation properties of four-wave mixing of short optical pulses in a semiconductor optical amplifier. By varying the gain of the optical amplifier, we find a strong dependence of both conversion efficiency and signal-to-background ratio on pulse width and bit rate. In particular, the signal-to-background ratio can be optimized for a specific amplifier gain. This behavior, which is coherently described in experiment and theory, is attributed to the dynamics of the amplified spontaneous emission, which is the main source of noise in a semiconductor optical amplifier.
All-optical wavelength conversion at bit rates above 10 Gb/s using semiconductor optical amplifiers
DEFF Research Database (Denmark)
Jørgensen, Carsten; Danielsen, Søren Lykke; Stubkjær, Kristian
1997-01-01
This work assesses the prospects for high-speed all-optical wavelength conversion using the simple optical interaction with the gain in semiconductor optical amplifiers (SOAs) via the interband carrier recombination. Operation and design guidelines for conversion speeds above 10 Gb/s are described, and the various tradeoffs are discussed. Experiments at bit rates up to 40 Gb/s are presented for both cross-gain modulation (XGM) and cross-phase modulation (XPM) in SOAs, demonstrating the high-speed capability of these techniques.
Energy Technology Data Exchange (ETDEWEB)
Breuze, G.; Fanet, H.; Serre, J. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Electronique et d'Instrumentation Nucleaire]; Colas, D.; Garnero, E.; Hamet, T. [Electricite de France (EDF), 77 - Ecuelles (France)]
1993-12-31
Fiber-optic data transmission from numerous multiplexed sensors is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady-state gamma-ray exposure is studied as a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber.
Individual Differences and Rating Errors in First Impressions of Psychopathy
Christopher T. A. Gillen; Henriette Bergstrøm; Forth, Adelle E.
2016-01-01
The current study is the first to investigate whether individual differences in personality are related to improved first impression accuracy when appraising psychopathy in female offenders from thin-slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-...
Pulse shaping for all-optical signal processing of ultra-high bit rate serial data signals
DEFF Research Database (Denmark)
Palushani, Evarist
The following thesis concerns pulse shaping and optical waveform manipulation for all-optical signal processing of ultra-high bit rate serial data signals, including generation of optical pulses in the femtosecond regime, serial-to-parallel conversion and terabaud coherent optical time division multiplexing ... between dispersed OTDM data and linearly chirped pump pulses. This resulted in spectral compression, enabling the OTDM tributaries to be converted directly onto a dense wavelength division multiplexing (DWDM) grid. The serial-to-parallel conversion was successfully demonstrated for up to 640-GBd OTDM ... record-high serial data rates on a single-wavelength channel. The experimental results demonstrate 5.1- and 10.2-Tbit/s OTDM data signals achieved by 16-ary quadrature amplitude modulation (16-QAM), polarization multiplexing and symbol rates as high as 640 GBd and 1.28 TBd. These signals were transmitted...
Directory of Open Access Journals (Sweden)
Balakrishna Konda
2012-11-01
The traditional serial-serial multiplier addresses high data sampling rates. It effectively processes the entire partial-product matrix in n data-sampling cycles for an n×n multiplication, instead of the 2n cycles of conventional multipliers. The partial products are formed from two serial inputs, one starting from the LSB and the other from the MSB. Using this feed sequence and accumulation technique, only n cycles are needed to complete the partial products. A high bit-sampling rate is achieved by replacing the conventional full adders and 5:3 counters; here an asynchronous 1's counter is presented, whose critical path is limited to an AND gate and D flip-flops. Accumulation is an integral part of the serial multiplier design: the 1's counter counts the number of ones each column produces at the end of the nth iteration. The implemented multipliers consist of a serial-serial data accumulator module and a carry-save adder that occupies less silicon area than a full carry-save adder. In this paper we implement an 8-bit 2's-complement multiplier using the Baugh-Wooley algorithm and an 8×8 serial-serial unsigned multiplication architecture.
Two-Bit Bit Flipping Decoding of LDPC Codes
Nguyen, Dung Viet; Marcellin, Michael W
2011-01-01
In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows the potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity.
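As a point of reference for the class of algorithms this paper improves on, here is a minimal serial bit-flipping decoder over the BSC: each iteration flips the single bit involved in the most unsatisfied parity checks. The toy (7,4) Hamming parity-check matrix stands in for an LDPC code, and the two-bit "strength" refinement proposed in the paper is omitted.

```python
def bit_flip_decode(H, y, max_iters=100):
    """Serial bit flipping: while the syndrome is nonzero, flip the one
    bit that participates in the most unsatisfied parity checks."""
    m, n = len(H), len(y)
    x = list(y)
    for _ in range(max_iters):
        syn = [sum(H[c][v] & x[v] for v in range(n)) % 2 for c in range(m)]
        if not any(syn):
            return x  # reached a valid codeword
        unsat = [sum(syn[c] for c in range(m) if H[c][v]) for v in range(n)]
        x[unsat.index(max(unsat))] ^= 1  # first maximum breaks ties
    return x

# toy (7,4) Hamming parity-check matrix standing in for an LDPC code
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 1, 0, 0, 0, 0]  # all-zero codeword with one BSC error
print(bit_flip_decode(H, received))  # → [0, 0, 0, 0, 0, 0, 0]
```

This baseline corrects any single error here; the paper's contribution is extra per-node state that guarantees correction of more errors at comparable complexity.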
CREME96 and Related Error Rate Prediction Methods
Adams, James H., Jr.
2012-01-01
Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and
Yilmaz, Ferkan
2012-07-01
Analyses of the average binary error probabilities (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels have been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated, and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of the authors' newly derived results. © 1972-2012 IEEE.
Bit-padding information guided channel hopping
Yang, Yuli
2011-02-01
In the context of multiple-input multiple-output (MIMO) communications, we propose a bit-padding information guided channel hopping (BP-IGCH) scheme which, building on the IGCH concept, breaks the limitation that the number of transmit antennas has to be a power of two. The proposed scheme prescribes different bit lengths to be mapped onto the indices of the transmit antennas and then uses a padding technique to avoid error propagation. Numerical results and comparisons, on both the capacity and the bit error rate performances, are provided and show the advantage of the proposed scheme. The BP-IGCH scheme not only offers lower complexity to realize the design flexibility, but also achieves better performance. © 2011 IEEE.
Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access
Zafar, Ammar
2012-12-29
In this paper, we present an optimal resource allocation (ORA) scheme for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints are considered on the system. We consider the cases of both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with direct link only and uniform power allocation (UPA) in terms of minimizing the SER for all three cases of different constraints. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).
Hikita, M; Takubo, C; Asai, K
2000-01-01
New surface acoustic wave (SAW) convolver structures with high conversion efficiency and self-temperature compensation characteristics have been developed. Strong piezoelectric substrates, regardless of temperature coefficients of delay (TCD), can be used in these convolvers. New demodulation techniques using the developed SAW convolver for high bit rate and wideband spread spectrum code division multiple access (CDMA) communications have also been developed. I- and Q-channel demodulation data can be derived directly from binary phase shift keying (BPSK) or quadri-phase shift keying (QPSK) CDMA signals. In an experiment using a 128° YX-LiNbO3 substrate, CDMA signals of 9 Mbps (megabits per second) with 60 Mcps (megachips per second) spread by 13-chip Barker code and 11 Mbps with 140 Mcps spread by 25-chip Shiba's code were clearly demodulated, demonstrating the effectiveness of these techniques for use in future CDMA communications.
Energy Technology Data Exchange (ETDEWEB)
TerraTek
2007-06-30
A deep drilling research program titled 'An Industry/DOE Program to Develop and Benchmark Advanced Diamond Product Drill Bits and HP/HT Drilling Fluids to Significantly Improve Rates of Penetration' was conducted at TerraTek's Drilling and Completions Laboratory. Drilling tests were run to simulate deep drilling by using high bore pressures and high confining and overburden stresses. The purpose of this testing was to gain insight into practices that would improve rates of penetration and mechanical specific energy while drilling under high pressure conditions. Thirty-seven test series were run utilizing a variety of drilling parameters which allowed analysis of the performance of drill bits and drilling fluids. Five different drill bit types or styles were tested: four-bladed polycrystalline diamond compact (PDC), 7-bladed PDC in regular and long profile, roller-cone, and impregnated. There were three different rock types used to simulate deep formations: Mancos shale, Carthage marble, and Crab Orchard sandstone. The testing also analyzed various drilling fluids and the extent to which they improved drilling. The PDC drill bits provided the best performance overall. The impregnated and tungsten carbide insert roller-cone drill bits performed poorly under the conditions chosen. The cesium formate drilling fluid outperformed all other drilling muds when drilling in the Carthage marble and Mancos shale with PDC drill bits. The oil base drilling fluid with manganese tetroxide weighting material provided the best performance when drilling the Crab Orchard sandstone.
Queiroz, Wamberto J. L.; Lopes, Waslon T. A.; Madeiro, Francisco; Alencar, Marcelo S.
2010-12-01
This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative density function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
Forensic watermarking and bit-rate conversion of partially encrypted AAC bitstreams
Lemma, Aweke; Katzenbeisser, Stefan; Celik, Mehmet U.; Kirbiz, S.
2008-02-01
Electronic Music Distribution (EMD) is undergoing two fundamental shifts. The delivery over wired broadband networks to personal computers is being replaced by delivery over heterogeneous wired and wireless networks, e.g. 3G and Wi-Fi, to a range of devices such as mobile phones, game consoles and in-car players. Moreover, restrictive DRM models bound to a limited set of devices are being replaced by flexible standards-based DRM schemes and increasingly forensic tracking technologies based on watermarking. Success of these EMD services will partially depend on scalable, low-complexity and bandwidth-efficient content protection systems. In this context, we propose a new partial encryption scheme for Advanced Audio Coding (AAC) compressed audio which is particularly suitable for emerging EMD applications. The scheme encrypts only the scale-factor information in the AAC bitstream with an additive one-time-pad. This allows intermediate network nodes to transcode the bitstream to lower data rates without accessing the decryption keys, by increasing the scale-factor values and re-quantizing the corresponding spectral coefficients. Furthermore, the decryption key for each user is customized such that the decryption process imprints the audio with a unique forensic tracking watermark. This constitutes a secure, low-complexity watermark embedding process at the destination node, i.e. the player. As opposed to server-side embedding methods, the proposed scheme lowers the computational burden on servers and allows for network-level bandwidth saving measures such as multicasting and caching.
Gilchrist, N. H. C.
A draft of a new recommendation on low bit-rate digital audio coding for broadcasting is in preparation within CCIR Study Group 10. As part of this work, subjective tests are being conducted to determine the preferred coding systems to be used in the various applications, and at which bit rates they should be used. The BBC has been contributing to the work by conducting preliminary listening tests to select critical program material, and by preparing recordings using this material for use by the CCIR's testing centers.
Directory of Open Access Journals (Sweden)
Kee-Chaing Chua
2005-02-01
An approximate analytical formulation of the resource allocation problem for handling variable bit rate multiclass services in a cellular round-robin carrier-hopping multirate multicarrier direct-sequence code-division multiple-access (MC-DS-CDMA) system is presented. In this paper, all grade-of-service (GoS) or quality-of-service (QoS) requirements at the connection level, packet level, and link layer are satisfied simultaneously in the system, instead of being satisfied at the connection level or at the link layer only. The analytical formulation shows how the GoS/QoS in the different layers are intertwined across the layers. A novelty of this paper is that the outages in the subcarriers are minimized by spreading the subcarriers' signal-to-interference ratio evenly among all the subcarriers by using a dynamic round-robin carrier-hopping allocation scheme. A complete sharing (CS) scheme with guard capacity is used for the resource sharing policy at the connection level based on the mean rates of the connections. Numerical results illustrate that significant gain in the system utilization is achieved through the joint coupling of connection/packet levels and link layer.
CLOSED-FORM ERROR RATES OF STBC SYSTEMS AND ITS PERFORMANCE ANALYSIS
Institute of Scientific and Technical Information of China (English)
Hu Xianbin; Gao Yuanyuan; Yi Xiaoxin
2006-01-01
The closed-form solutions for the error rates of Space-Time Block Code (STBC) Multiple Phase Shift Keying (MPSK) systems are derived in this paper. With the characteristic-function-based method and the partial-integration-based method, respectively, exact expressions of the error rates are obtained for (2,1) STBC with and without channel estimation error. Simulations show that the practical error rates accord with the theoretical ones, so the closed-form error rates are accurate references for STBC performance evaluation. With the error of pilot-assisted channel estimation, the performance of a (2,1) STBC system is degraded by about 3 dB.
Yang, Aiying; Li, Xiangming; Jiang, Tao
2012-04-23
A combination of overlapping pulse position modulation and pulse width modulation at the transmitter, and a grouped bit-flipping algorithm for low-density parity-check decoding at the receiver, are proposed for a visible light emitting diode (LED) indoor communication system in this paper. The results demonstrate that, with the same photodetector, the bit rate can be increased and the performance of the communication system improved by the proposed scheme. Compared with the standard bit-flipping algorithm, the grouped bit-flipping algorithm achieves more than 2.0 dB of coding gain at a bit error rate of 10⁻⁵. By optimizing the encoding of the overlapping pulse position modulation and pulse width modulation symbols, the performance can be further improved. It is reasonably expected that the bit rate can be upgraded to 400 Mbit/s with a single available LED, so a transmission rate beyond 1 Gbit/s is foreseen with RGB LEDs.
Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation
Swift, G.
2002-01-01
JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128 Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.
Energy Technology Data Exchange (ETDEWEB)
Alan Black; Arnis Judzis
2003-10-01
This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 and ending September 2003. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark 'best in class' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands, Texas to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commence planning for Phase 1 testing--recommendations for bits and fluids.
Experimental demonstration of topological error correction.
Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei
2012-02-22
Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.
A FAST BIT-LOADING ALGORITHM FOR HIGH SPEED POWER LINE COMMUNICATIONS
Institute of Scientific and Technical Information of China (English)
Zhang Shengqing; Zhao Li; Zou Cairong
2012-01-01
Adaptive bit-loading is a key technology in high-speed power line communications with the Orthogonal Frequency Division Multiplexing (OFDM) modulation technology. Given that the transmitting power spectrum is limited in high-speed power line communications, this paper explores adaptive bit-loading algorithms that maximize the number of transmitted bits while the transmitting power spectral density and bit error rate do not exceed their upper limits. Based on the characteristics of the power line channel, it first obtains the optimal bit-loading algorithm, and then provides an improved algorithm to reduce the computational complexity. Based on the analysis and simulation, it offers a non-iterative bit allocation algorithm, and finally the simulation shows that this new algorithm greatly reduces the computational complexity while the actual bit allocation results are close to optimal.
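The greedy principle behind such bit-loading can be sketched as follows. This is the classic Hughes-Hartogs-style allocation, not the paper's non-iterative algorithm: each step gives one more bit to the subcarrier whose next bit costs the least extra power, under a total power budget standing in for the PSD limit; the SNR-gap constant for the target BER is a standard M-QAM approximation.

```python
import math

def greedy_bit_loading(gains, p_max, target_ber=1e-3, max_bits=10):
    """Give one bit at a time to the subcarrier whose next bit needs the
    least extra power, until the power budget is exhausted."""
    gap = -math.log(5 * target_ber) / 1.5  # M-QAM SNR-gap approximation
    bits = [0] * len(gains)

    def extra_power(i):  # cost of raising subcarrier i by one bit
        return gap * (2 ** (bits[i] + 1) - 2 ** bits[i]) / gains[i]

    used = 0.0
    while True:
        cands = [i for i in range(len(gains)) if bits[i] < max_bits]
        if not cands:
            return bits
        i = min(cands, key=extra_power)
        if used + extra_power(i) > p_max:
            return bits
        used += extra_power(i)
        bits[i] += 1

# four subcarriers with decreasing channel gains, shared power budget
bits = greedy_bit_loading([1.0, 0.5, 0.1, 0.01], p_max=100.0)
print(bits)  # → [4, 2, 0, 0]
```

Strong subcarriers soak up most of the bits; the iterative search over increments is exactly the complexity the paper's non-iterative allocation avoids.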
Mohammed, Usama S
2010-01-01
This paper proposes a new scheme for efficient rate allocation in conjunction with reducing the peak-to-average power ratio (PAPR) in orthogonal frequency-division multiplexing (OFDM) systems. A modification of the set partitioning in hierarchical trees (SPIHT) image coder is proposed to generate four different groups of bit-stream according to their significance. The significant bits, the sign bits, the set bits and the refinement bits are transmitted in four different groups. The proposed method for reducing the PAPR applies unequal error protection (UEP) twice, using Reed-Solomon (RS) codes, in conjunction with bit-rate allocation and selective interleaving to provide minimum PAPR. The output bit-stream from the source coder (SPIHT) starts with the most significant types of bits (first group of bits). The optimal UEP of the four groups is proposed based on the channel distortion. The proposed structure provides significant improvement in bit error rate (BER) performance. Per...
Simultaneous control of error rates in fMRI data analysis.
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-12-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
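The voxel-wise likelihood-ratio idea can be illustrated with the simplest case, a known-variance Gaussian test of zero versus nonzero mean (the benchmark k = 8 and unit variance are illustrative assumptions, not the paper's settings):

```python
import math

def likelihood_ratio_flag(samples, k=8.0):
    """Known-variance Gaussian sketch: likelihood ratio of the
    best-fitting nonzero mean against mean zero for one voxel's
    samples; flag the voxel only when LR >= k."""
    n = len(samples)
    xbar = sum(samples) / n
    log_lr = n * xbar * xbar / 2  # unit variance assumed
    return log_lr >= math.log(k)  # compare in log domain

active = [1.0] * 20                            # strong, consistent signal
null = [0.05 * (-1) ** i for i in range(20)]   # sample mean exactly zero
print(likelihood_ratio_flag(active), likelihood_ratio_flag(null))  # → True False
```

As the paper argues, tightening k drives both the chance of flagging a null voxel and the chance of missing a strongly active one toward zero as the sample size grows, rather than trading one error rate against the other.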
Roy, Urmimala; Register, Leonard F; Banerjee, Sanjay K
2016-01-01
Spin-transfer-torque random access memory (STT-RAM) is a promising candidate for the next generation of random-access memory due to improved scalability, read-write speeds and endurance. However, the write pulse duration must be long enough to ensure a low write error rate (WER), the probability that a bit will remain unswitched after the write pulse is turned off, in the presence of stochastic thermal effects. WERs on the scale of 10^-9 or lower are desired. Within a macrospin approximation, WERs can be calculated analytically using the Fokker-Planck method to this point and beyond. However, dynamic micromagnetic effects within the bit can affect the switching process and lead to faster switching. Such micromagnetic effects can be addressed via numerical solution of the stochastic Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. However, determining WERs approaching 10^-9 would require well over 10^9 such independent simulations, which is infeasible. In this work, we explore calculation of WER using "rare event en...
Directory of Open Access Journals (Sweden)
Laurent Girin
2010-01-01
Full Text Available This paper presents a model-based method for coding the LSF parameters of LPC speech coders on a “long-term” basis, that is, beyond the usual 20–30 ms frame duration. The objective is to provide efficient LSF quantization for a speech coder with large delay but very- to ultra-low bit-rate (i.e., below 1 kb/s). To do this, speech is first segmented into voiced/unvoiced segments. A Discrete Cosine model of the time trajectory of the LSF vectors is then applied to each segment to capture the LSF interframe correlation over the whole segment. Bi-directional transformation from the model coefficients to a reduced set of LSF vectors enables both efficient “sparse” coding (using here multistage vector quantizers) and the generation of interpolated LSF vectors at the decoder. The proposed method provides up to 50% gain in bit-rate over frame-by-frame quantization while preserving signal quality, and competes favorably with 2D-transform coding for the lower range of tested bit rates. Moreover, the implicit time-interpolation nature of the long-term coding process gives this technique high potential for use in speech synthesis systems.
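The core modeling step — projecting a segment's parameter trajectory onto a low-order Discrete Cosine model — can be sketched as follows. The synthetic 40-frame trajectory and model order 6 are illustrative choices, not the paper's configuration:

```python
import math

def dct_basis(n, order):
    """First `order` DCT-II basis vectors sampled at the n frame instants."""
    return [[math.cos(math.pi * k * (t + 0.5) / n) for k in range(order)]
            for t in range(n)]

def dct_fit(traj, order):
    """Least-squares projection of a trajectory onto a low-order DCT model."""
    n = len(traj)
    basis = dct_basis(n, order)
    coeffs = []
    for k in range(order):
        # DCT basis vectors are orthogonal, so LS reduces to per-coefficient projection
        norm = sum(basis[t][k] ** 2 for t in range(n))
        coeffs.append(sum(traj[t] * basis[t][k] for t in range(n)) / norm)
    recon = [sum(c * basis[t][k] for k, c in enumerate(coeffs)) for t in range(n)]
    return coeffs, recon

# Synthetic smooth "LSF trajectory" over a 40-frame voiced segment
n = 40
traj = [0.3 + 0.05 * math.cos(math.pi * 1 * (t + 0.5) / n)
            + 0.02 * math.cos(math.pi * 3 * (t + 0.5) / n) for t in range(n)]
coeffs, recon = dct_fit(traj, 6)
mse = sum((a - b) ** 2 for a, b in zip(traj, recon)) / n
# 6 model coefficients stand in for 40 frame values; for a smooth segment
# the reconstruction error is negligible, which is what enables "sparse" coding.
```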
Impact of translational error-induced and error-free misfolding on the rate of protein evolution.
Yang, Jian-Rong; Zhuang, Shi-Mei; Zhang, Jianzhi
2010-10-19
What determines the rate of protein evolution is a fundamental question in biology. Recent genomic studies revealed a surprisingly strong anticorrelation between the expression level of a protein and its rate of sequence evolution. This observation is currently explained by the translational robustness hypothesis in which the toxicity of translational error-induced protein misfolding selects for higher translational robustness of more abundant proteins, which constrains sequence evolution. However, the impact of error-free protein misfolding has not been evaluated. We estimate that a non-negligible fraction of misfolded proteins are error free and demonstrate by a molecular-level evolutionary simulation that selection against protein misfolding results in a greater reduction of error-free misfolding than error-induced misfolding. Thus, an overarching protein-misfolding-avoidance hypothesis that includes both sources of misfolding is superior to the translational robustness hypothesis. We show that misfolding-minimizing amino acids are preferentially used in highly abundant yeast proteins and that these residues are evolutionarily more conserved than other residues of the same proteins. These findings provide unambiguous support to the role of protein-misfolding-avoidance in determining the rate of protein sequence evolution.
Error Rates in Users of Automatic Face Recognition Software.
White, David; Dunn, James D; Schmid, Alexandra C; Kemp, Richard I
2015-01-01
In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.
Medication Error Reporting Rate and its Barriers and Facilitators among Nurses
Directory of Open Access Journals (Sweden)
Snor Bayazidi
2012-11-01
Full Text Available Introduction: Medication errors are among the most prevalent medical errors leading to morbidity and mortality. Effective prevention of this type of error depends on the presence of a well-organized reporting system. The purpose of this study was to explore the medication error reporting rate and its barriers and facilitators among nurses in teaching hospitals of Urmia University of Medical Sciences (Iran). Methods: In a descriptive study in 2011, 733 nurses working in Urmia teaching hospitals were included. Data was collected using a questionnaire based on the Haddon matrix. The questionnaire consisted of three items about medication error reporting rate, eight items on barriers to reporting, and seven items on facilitators of reporting. The collected data was analyzed by descriptive statistics in SPSS 14. Results: The rate of reporting medication errors among nurses was far lower than the rate of medication errors they had made. Nurses perceived that the most important barriers to reporting medication errors were blaming individuals instead of the system, consequences of reporting errors, and fear of reprimand and punishment. Some facilitating factors were also determined. Conclusion: Overall, the rate of medication errors was found to be much higher than what had been reported by nurses. Therefore, it is suggested to train nurses and hospital administrators on the facilitators of and barriers to error reporting in order to enhance patient safety.
Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU
Haque, Imran S
2009-01-01
Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our su...
On the Error Rate Analysis of Dual-Hop Amplify-and-Forward Relaying in Generalized-K Fading Channels
Directory of Open Access Journals (Sweden)
George P. Efthymoglou
2010-01-01
Full Text Available We present novel and easy-to-evaluate expressions for the error rate performance of cooperative dual-hop relaying with maximal ratio combining operating over independent generalized-K fading channels. For this system, it is hard to obtain a closed-form expression for the moment generating function (MGF) of the end-to-end signal-to-noise ratio (SNR) at the destination, even for the case of a single dual-hop relay link. Therefore, we employ two different upper-bound approximations for the output SNR, of which one is based on the minimum SNR of the two hops for each dual-hop relay link and the other is based on the geometric mean of the SNRs of the two hops. Lower bounds for the symbol and bit error rates for a variety of digital modulations can then be evaluated using the MGF-based approach. The final expressions are useful in the performance evaluation of amplify-and-forward relaying in a generalized composite radio environment.
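The two SNR upper bounds named in this abstract are easy to check numerically. The sketch below assumes the standard amplify-and-forward end-to-end SNR expression g1·g2/(g1+g2+1) and, for illustration only, exponentially distributed hop SNRs (the paper itself treats generalized-K fading):

```python
import random

def af_end_to_end_snr(g1, g2):
    """End-to-end SNR of a dual-hop amplify-and-forward link with hop SNRs g1, g2."""
    return g1 * g2 / (g1 + g2 + 1.0)

def min_snr_bound(g1, g2):
    """Upper bound: the end-to-end SNR never exceeds the weaker hop."""
    return min(g1, g2)

def geometric_mean_bound(g1, g2):
    """Upper bound: the end-to-end SNR never exceeds the geometric mean of the hops."""
    return (g1 * g2) ** 0.5

rng = random.Random(1)
samples = [(rng.expovariate(0.5), rng.expovariate(0.5)) for _ in range(10_000)]
violations = sum(af_end_to_end_snr(a, b) > min(min_snr_bound(a, b),
                                               geometric_mean_bound(a, b))
                 for a, b in samples)
# violations == 0: both SNR upper bounds hold for every draw, so the error
# rates derived from them via the MGF approach are lower bounds, as stated.
```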
Beneficial Effects of Population Bottlenecks in an RNA Virus Evolving at Increased Error Rate
Cases-González, Clara E.; Arribas, María; Domingo, Esteban; Lázaro, Ester
2008-01-01
RNA viruses replicate their genomes with a very high error rate and constitute highly heterogeneous mutant distributions similar to the molecular quasispecies introduced to explain the evolution of prebiotic replicators. The genetic information included in a quasispecies can only be faithfully transmitted below a critical error rate. When the error threshold is crossed, the population structure disorganizes, and it is substituted by a randomly distributed mutant spectrum. For viral quasispeci...
Birjandi, Parviz; Siyyari, Masood
2016-01-01
This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…
National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?
Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.
2010-01-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…
Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles.
Traverse, Charles C; Ochman, Howard
2016-03-22
Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10^-5 per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10^-5 per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10^-5 per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella.
Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt
2015-12-01
Dog heart rate (HR) is characterized by respiratory sinus arrhythmia, which makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction. We conclude that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
Error Resilient Video Compression Using Behavior Models
Directory of Open Access Journals (Sweden)
Jacco R. Taal
2004-03-01
Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe video encoder and model the encoders behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Error rates in forensic DNA analysis: Definition, numbers, impact and communication
Kloosterman, A.; Sjerps, M.; Quak, A.
2014-01-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and pub
Video coding bit allocation algorithm over wireless transmission channel
Institute of Scientific and Technical Information of China (English)
ZHANG Wei; ZHOU Yuan-hua
2006-01-01
For two-way video communications over wireless channels using the automatic repeat request (ARQ) retransmission scheme, the TMN8 rate control scheme is not effective in minimizing the number of frames skipped and cannot guarantee video quality during the retransmission of error packets. This paper presents a joint source-channel bit allocation scheme that allocates target bits according to encoder buffer fullness and an estimate of the channel condition derived from retransmission information. The results obtained from implementing our scheme in an H.263+ coder over a wireless channel model show that the proposed scheme encodes video sequences with lower and steadier buffer delay, fewer frames skipped, and higher average PSNR compared to TMN8.
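The flavor of such a buffer- and retransmission-aware bit budget can be sketched as below. The drain factor of 0.1 and all the numbers are hypothetical; this is not the paper's actual allocation formula:

```python
def target_bits(channel_rate, frame_rate, buffer_fullness,
                target_buffer, retx_bits):
    """
    Hypothetical per-frame bit budget for joint source-channel allocation:
    start from the channel's per-frame share, reserve bits for ARQ
    retransmissions of error packets, then drain an over-full encoder buffer.
    """
    budget = channel_rate / frame_rate - retx_bits
    if buffer_fullness > target_buffer:
        budget -= 0.1 * (buffer_fullness - target_buffer)   # gentle drain
    return max(budget, 0.0)

# 64 kbit/s channel at 10 fps: nominal 6400 bits per frame
clean = target_bits(64_000, 10, 0, 10_000, 0)
# Same channel during retransmissions with an over-full buffer:
congested = target_bits(64_000, 10, 20_000, 10_000, 2_000)
# congested < clean: spending fewer bits now keeps buffer delay steady
# and avoids having to skip whole frames later.
```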
Guesmi, Latifa; Menif, Mourad
2016-08-01
In the context of carrying a wide variety of modulation formats and data rates for home networks, the study covers the radio-over-fiber (RoF) technology, where the need for an alternative way of management, automated fault diagnosis, and formats identification is expressed. Also, RoF signals in an optical link are impaired by various linear and nonlinear effects including chromatic dispersion, polarization mode dispersion, amplified spontaneous emission noise, and so on. Hence, for this purpose, we investigated the sampling method based on asynchronous delay-tap sampling in conjunction with a cross-correlation function for the joint bit rate/modulation format identification and optical performance monitoring. Three modulation formats with different data rates are used to demonstrate the validity of this technique, where the identification accuracy and the monitoring ranges reached high values.
Adjustable Nyquist-rate System for Single-Bit Sigma-Delta ADC with Alternative FIR Architecture
Frick, Vincent; Dadouche, Foudil; Berviller, Hervé
2016-09-01
This paper presents a new smart and compact system dedicated to controlling the output sampling frequency of an analogue-to-digital converter (ADC) based on a single-bit sigma-delta (ΣΔ) modulator. The system dramatically improves the spectral analysis capabilities of power network analysers (power meters) by adjusting the ADC's sampling frequency to the input signal's fundamental frequency with an accuracy of a few parts per million. The trade-off between straightforwardness and performance that motivated the choice of the ADC's architecture is discussed first. In particular, design considerations are given for an ultra-steep direct-form FIR filter optimised in terms of size and operating speed. Thanks to a compact description in standard VHDL, the architecture of the proposed system is particularly suitable for application-specific integrated circuit (ASIC) implementation in low-power, low-cost power meter applications. Field programmable gate array (FPGA) prototyping and experimental results validate the adjustable sampling frequency concept. They also show that the system performs better in terms of implementation and power capabilities than dedicated IP resources.
Switching field distribution of exchange coupled ferri-/ferromagnetic composite bit patterned media
Oezelt, Harald; Fischbacher, Johann; Matthes, Patrick; Kirk, Eugenie; Wohlhüter, Phillip; Heyderman, Laura Jane; Albrecht, Manfred; Schrefl, Thomas
2016-01-01
We investigate the switching field distribution and the resulting bit error rate of exchange coupled ferri-/ferromagnetic bilayer island arrays by micromagnetic simulations. Using islands with varying microstructure and anisotropic properties, the intrinsic switching field distribution is computed. The dipolar contribution to the switching field distribution is obtained separately by using a model of a hexagonal island array resembling 1.4 Tb/in^2 bit patterned media. Both contributions are computed for different thicknesses of the soft exchange coupled ferrimagnet and also for ferromagnetic single-phase FePt islands. A bit patterned medium with a bilayer structure of FeGd(5 nm)/FePt(5 nm) shows a bit error rate of 10^-4 with a write field of 1.2 T.
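The reported bit error rate can be related to the switching field distribution (SFD) by a simple tail-probability model: assuming a Gaussian SFD, a bit fails to write when its switching field exceeds the applied write field. The field values below are assumptions chosen only to reproduce the order of magnitude quoted in the abstract:

```python
import math

def write_ber(write_field, mean_switching_field, sfd_sigma):
    """
    Probability that an island fails to switch, P(H_sw > H_write), for a
    Gaussian switching field distribution. Illustrative numbers only.
    """
    z = (write_field - mean_switching_field) / (sfd_sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# Assumed: mean switching field 0.8 T, total SFD width sigma = 0.107 T,
# write field 1.2 T -- i.e., a write margin of roughly 3.7 sigma.
ber = write_ber(1.2, 0.8, 0.107)
# ber comes out on the order of 1e-4; narrowing the SFD (the effect of the
# soft exchange-coupled ferrimagnetic layer) lowers the BER exponentially.
```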
An error criterion for determining sampling rates in closed-loop control systems
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
Graphical algorithms and threshold error rates for the 2d colour code
Wang, D S; Hill, C D; Hollenberg, L C L
2009-01-01
Recent work on fault-tolerant quantum computation making use of topological error correction shows great potential, with the 2d surface code possessing a threshold error rate approaching 1% (NJoP 9:199, 2007), (arXiv:0905.0531). However, the 2d surface code requires the use of a complex state distillation procedure to achieve universal quantum computation. The colour code of (PRL 97:180501, 2006) is a related scheme partially solving the problem, providing a means to perform all Clifford group gates transversally. We review the colour code and its error correcting methodology, discussing one approximate technique based on graph matching. We derive an analytic lower bound to the threshold error rate of 6.25% under error-free syndrome extraction, while numerical simulations indicate it may be as high as 13.3%. Inclusion of faulty syndrome extraction circuits drops the threshold to approximately 0.1%.
Conjunction error rates on a continuous recognition memory test: little evidence for recollection.
Jones, Todd C; Atchley, Paul
2002-03-01
Two experiments examined conjunction memory errors on a continuous recognition task where the lag between parent words (e.g., blackmail, jailbird) and later conjunction lures (blackbird) was manipulated. In Experiment 1, contrary to expectations, the conjunction error rate was highest at the shortest lag (1 word) and decreased as the lag increased. In Experiment 2 the conjunction error rate increased significantly from a 0- to a 1-word lag, then decreased slightly from a 1- to a 5-word lag. The results provide mixed support for simple familiarity and dual-process accounts of recognition. Paradoxically, searching for an item in memory does not appear to be a good encoding task.
Analysis and Methodology Study of Bit Error Performance of FSO System
Institute of Scientific and Technical Information of China (English)
贾科军; 赵延刚; 陈辉; 薛建彬; 王惠琴
2012-01-01
Bit error rate (BER) is an important evaluation index of free-space optical communication (FSO), and how to correctly obtain BER statistics is rather important. We study a technique for analyzing the BER of FSO systems based on Monte Carlo simulation using Matlab. The FSO system and the principle of Monte Carlo simulation are introduced. The method of obtaining BER statistics is researched, and the method of generating the information source, the channel model, and the calculation of the signal-to-noise ratio (SNR) parameter for simulation are given in detail. Moreover, part of the core Matlab program is presented. Modeling and simulation based on low-density parity-check (LDPC) codes and pulse position modulation (PPM) are implemented. The analysis results under different conditions of weather and signal-to-noise ratio indicate that this method is accurate and practical.
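The Monte Carlo procedure described here is language-agnostic. A minimal Python analogue of the Matlab approach — antipodal (BPSK-like) signalling over AWGN, without the LDPC/PPM stages — might look like this:

```python
import math
import random

def monte_carlo_ber(snr_db, n_bits=200_000, seed=0):
    """Monte Carlo BER estimate for antipodal signalling over an AWGN channel."""
    rng = random.Random(seed)
    snr = 10.0 ** (snr_db / 10.0)
    sigma = 1.0 / math.sqrt(2.0 * snr)      # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)              # information source
        tx = 1.0 if bit else -1.0            # antipodal mapping
        rx = tx + rng.gauss(0.0, sigma)      # AWGN channel
        errors += int((rx > 0.0) != bool(bit))   # threshold detector
    return errors / n_bits                   # BER statistic

snr_db = 6.0
theory = 0.5 * math.erfc(math.sqrt(10.0 ** (snr_db / 10.0)))   # Q(sqrt(2*SNR))
sim = monte_carlo_ber(snr_db)
# sim tracks theory to within Monte Carlo noise (a few percent with 2e5 bits);
# counting enough error events is what makes the statistic trustworthy.
```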
Energy Technology Data Exchange (ETDEWEB)
Siy, P.F.; Carter, J.T.; D'Addario, L.R.; Loeber, D.A.
1991-08-01
The MITRE Corporation has performed in-flux radiation testing of the Texas Instruments TMS320C30 32-bit floating point digital signal processor in both total dose and dose rate radiation environments. This test effort has provided data relating to the applicability of the TMS320C30 in systems with total dose and/or dose rate survivability requirements. In order to accomplish these tests, the MITRE Corporation developed custom hardware and software for in-flux radiation testing. This paper summarizes the effort by providing an overview of the TMS320C30, MITRE's test methodology, test facilities, statistical analysis, and full coverage of the test results. (Author)
Foreign Exchange Rate Futures Trends: Foreign Exchange Risk or Systematic Forecasting Errors?
Directory of Open Access Journals (Sweden)
Marcelo Cunha Medeiros
2006-12-01
Full Text Available The forward exchange rate is widely used in international finance whenever analysis of the expected depreciation is needed. It is also used to identify currency risk premium. The difference between the spot rate and the forward rate is supposed to be a predictor of future movements of the spot rate. This prediction is hardly precise. The fact that the forward rate is a biased predictor of the future change in the spot rate can be attributed to a currency risk premium. The bias can also be attributed to systematic errors about the future depreciation of the currency. This paper analyzes the nature of the risk premium and of the prediction errors in using the forward rate. It looks into the efficiency and rationality of the futures market in Brazil from April 1995 to December 1998, a period of controlled exchange rates.
The effect of sampling on estimates of lexical specificity and error rates.
Rowland, Caroline F; Fletcher, Sarah L
2006-11-01
Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
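The sampling effect described in this abstract is easy to reproduce: with a fixed underlying error rate, small samples of a rarely produced construction both miss errors entirely and wildly overestimate them. The 5% rate and sample sizes below are illustrative, not the study's data:

```python
import random

TRUE_ERROR_RATE = 0.05          # assumed: 1 error per 20 uses of a construction
rng = random.Random(2)

def sampled_error_rate(n_tokens):
    """Error-rate estimate from a sample containing n_tokens of the construction."""
    errors = sum(rng.random() < TRUE_ERROR_RATE for _ in range(n_tokens))
    return errors / n_tokens

small = [sampled_error_rate(10) for _ in range(1000)]     # rare construction
large = [sampled_error_rate(1000) for _ in range(1000)]   # frequent construction
spread_small = max(small) - min(small)
spread_large = max(large) - min(large)
missed = sum(e == 0.0 for e in small) / len(small)
# Small samples swing wildly around the true 5% rate, and a majority of them
# contain no error at all -- under- and overestimation are both likely.
```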
On zero-rate error exponent for BSC with noisy feedback
Burnashev, Marat V
2008-01-01
A binary symmetric channel is used for information transmission. There is also another noisy binary symmetric channel (the feedback channel), and the transmitter observes without delay all outputs of the forward channel via that feedback channel. The transmission of a non-exponential number of messages (i.e., transmission rate zero) is considered. The achievable decoding error exponent for such a combination of channels is investigated. It is shown that if the crossover probability of the feedback channel is less than a certain positive value, then the achievable error exponent is better than the corresponding error exponent of the no-feedback channel. The transmission method described and the corresponding lower bound for the error exponent can be strengthened, and also extended to positive transmission rates.
Zollanvari, Amin; Genton, Marc G
2013-08-01
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
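The optimistic bias of the resubstitution estimator that this line of work characterizes can be demonstrated in a toy version of the same model (Gaussian classes with a common, known identity covariance). Dimensions, sample sizes, and class means below are arbitrary illustrative choices:

```python
import math
import random

def lda_resubstitution_error(x0, x1):
    """Plug-in LDA (known identity covariance): nearest sample-mean rule,
    evaluated on the very data used to estimate the means (resubstitution)."""
    m0 = [sum(col) / len(x0) for col in zip(*x0)]
    m1 = [sum(col) / len(x1) for col in zip(*x1)]
    def classify(x):
        d0 = sum((a - b) ** 2 for a, b in zip(x, m0))
        d1 = sum((a - b) ** 2 for a, b in zip(x, m1))
        return 0 if d0 < d1 else 1
    errs = sum(classify(x) != 0 for x in x0) + sum(classify(x) != 1 for x in x1)
    return errs / (len(x0) + len(x1))

def one_run(seed, dim=5, n=30):
    rng = random.Random(seed)
    x0 = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
    x1 = [[rng.gauss(1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    return lda_resubstitution_error(x0, x1)

avg_resub = sum(one_run(s) for s in range(50)) / 50
true_error = 0.5 * math.erfc(math.sqrt(5) / (2 * math.sqrt(2)))  # Bayes error here
# avg_resub < true_error: resubstitution is optimistically biased, which is
# why its first moments (and smoothed, bias-corrected variants) matter.
```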
39 fJ/bit On-Chip Identification of Wireless Sensors Based on Manufacturing Variation
Directory of Open Access Journals (Sweden)
Jonathan F. Bolus
2014-09-01
A 39 fJ/bit IC identification system based on FET mismatch is presented and implemented in a 130 nm CMOS process. ID bits are generated from the ΔVT between identically drawn NMOS devices due to manufacturing variation, and the ID cell structure allows the reliability of each ID bit to be characterized by characterizing ΔVT. An addressing scheme is also presented that allows reliable on-chip identification of ICs in the presence of unreliable ID bits. An example implementation is presented that can address 1000 unique ICs, composed of 31 ID bits and having an error rate of less than 10⁻⁶, with up to 21 unreliable bits.
Leung, Debbie; Matthews, William; Ozols, Maris; Roy, Aidan
2010-01-01
It is known that the number of different classical messages which can be communicated with a single use of a classical channel with zero probability of decoding error can sometimes be increased by using entanglement shared between sender and receiver. It has been an open question to determine whether entanglement can ever offer an advantage in terms of the zero-error communication rates achievable in the limit of many channel uses. In this paper we show, by explicit examples, that entanglement can indeed increase asymptotic zero-error capacity. Interestingly, in our examples the quantum protocols are based on the root systems of the exceptional Lie groups E7 and E8.
Difference of soft error rates in SOI SRAM induced by various high energy ion species
Energy Technology Data Exchange (ETDEWEB)
Abo, Satoshi, E-mail: abo@cqst.osaka-u.ac.jp [Center for Quantum Science and Technology Under Extreme Conditions, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531 (Japan); Masuda, Naoyuki; Wakaya, Fujio; Lohner, Tivadar [Center for Quantum Science and Technology Under Extreme Conditions, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531 (Japan); Onoda, Shinobu; Makino, Takahiro; Hirao, Toshio; Ohshima, Takeshi [Semiconductor Analysis and Radiation Effects Group, Environment and Industrial Materials Research Division, Quantum Beam Science Directorate, Japan Atomic Energy Agency, 1233 Watanuki-machi, Takasaki, Gunma 370-1292 (Japan); Iwamatsu, Toshiaki; Oda, Hidekazu [Advanced Device Technology Department, Production and Technology Unit, Devices and Analysis Technology Division, Renesas Electronics Corporation, 751, Horiguchi, Hitachinaka, Ibaraki 312-8504 (Japan); Takai, Mikio [Center for Quantum Science and Technology Under Extreme Conditions, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531 (Japan)
2012-02-15
Soft error rates in silicon-on-insulator (SOI) static random access memories (SRAMs) with a technology node of 90 nm have been investigated using beryllium and carbon ion probes. The soft error rates induced by the beryllium and carbon probes started to increase at probe energies of 5.0 and 8.5 MeV, respectively, at which the probes just penetrated the over-layer, and saturated at energies of 7.0 and 9.0 MeV and above, at which the charge generated in the SOI body exceeded the critical charge. The soft error rates in the SOI SRAMs under various ion probes were also compared with the charge generated in the SOI body. The soft error rates induced by hydrogen and helium ion probes were 1-2 orders of magnitude lower than those induced by beryllium, carbon and oxygen ion probes. The soft error rates depend not only on the charge generated in the SOI body but also on the incident ion species.
Quantifying the Impact of Single Bit Flips on Floating Point Arithmetic
Energy Technology Data Exchange (ETDEWEB)
Elliott, James J [ORNL; Mueller, Frank [North Carolina State University; Stoyanov, Miroslav K [ORNL; Webster, Clayton G [ORNL
2013-08-01
In high-end computing, the collective surface area, smaller fabrication sizes, and increasing density of components have led to an increase in the number of observed bit flips. If mechanisms are not in place to detect them, such flips produce silent errors, i.e., the code returns a result that deviates from the desired solution by more than the allowed tolerance, and the discrepancy cannot be distinguished from the standard numerical error associated with the algorithm. These phenomena are believed to occur more frequently in DRAM, but logic gates, arithmetic units, and other circuits are also susceptible to bit flips. Previous work has focused on algorithmic techniques for detecting and correcting bit flips in specific data structures; however, these techniques suffer from a lack of generality and often cannot be implemented in heterogeneous computing environments. Our work takes a novel approach to this problem. We focus on quantifying the impact of a single bit flip on specific floating-point operations. We analyze the error induced by flipping specific bits in the most widely used IEEE floating-point representation in an architecture-agnostic manner, i.e., without requiring proprietary information such as bit flip rates or vendor-specific circuit designs. We initially study dot products of vectors and demonstrate that not all bit flips create a large error and, more importantly, that the expected value of the relative magnitude of the error is very sensitive to the bit pattern of the binary representation of the exponent, which strongly depends on scaling. Our results are derived analytically and then verified experimentally with Monte Carlo sampling of random vectors. Furthermore, we consider the natural resilience properties of solvers based on fixed-point iteration and demonstrate how the resilience of the Jacobi method for linear equations can be significantly improved by rescaling the associated matrix.
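The paper's central observation — that the size of the induced error depends dramatically on which bit of the IEEE-754 representation is flipped — is easy to reproduce. A minimal Python sketch (illustrative only; the function names are ours, and the paper's own analysis is analytic rather than simulation-based):

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of an IEEE-754 double (bit 0 = mantissa LSB, bit 63 = sign)."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

def relative_error(x: float, bit: int) -> float:
    """Relative magnitude of the error induced by a single bit flip in x."""
    return abs(flip_bit(x, bit) - x) / abs(x)

# A low-order mantissa flip barely perturbs the value, while a flip in the
# exponent field (bits 52-62) changes the value by whole powers of two.
low = relative_error(1.5, 0)    # ~1e-16
exp = relative_error(1.5, 53)   # 1.5 -> 0.375, relative error 0.75
```

A sign-bit flip (bit 63) simply negates the value; it is the exponent bits whose flips dominate the expected error, which is why the paper finds the error distribution so sensitive to the scaling of the data.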
Positional information, in bits.
Dubuis, Julien O; Tkacik, Gasper; Wieschaus, Eric F; Gregor, Thomas; Bialek, William
2013-10-08
Cells in a developing embryo have no direct way of "measuring" their physical position. Through a variety of processes, however, the expression levels of multiple genes come to be correlated with position, and these expression levels thus form a code for "positional information." We show how to measure this information, in bits, using the gap genes in the Drosophila embryo as an example. Individual genes carry nearly two bits of information, twice as much as would be expected if the expression patterns consisted only of on/off domains separated by sharp boundaries. Taken together, four gap genes carry enough information to define a cell's location with an error bar of ~1 along the anterior/posterior axis of the embryo. This precision is nearly enough for each cell to have a unique identity, which is the maximum information the system can use, and is nearly constant along the length of the embryo. We argue that this constancy is a signature of optimality in the transmission of information from primary morphogen inputs to the output of the gap gene network.
Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors
Energy Technology Data Exchange (ETDEWEB)
Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)
2011-02-15
Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.
A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors
Zhang, Lei
2014-06-01
Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors already exist, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon the wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.
Estimation of the minimum mRNA splicing error rate in vertebrates.
Skandalis, A
2016-01-01
The majority of protein-coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information, thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by alternative splicing, which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias from alternative splicing, we characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1, in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci, at approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of replication errors by RNA polymerase II in splice consensus sequences combined with spliceosome errors in correctly pairing exons.
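The ~0.1% per-intron floor implies a simple per-transcript error budget. Under an independence assumption (ours, for illustration only), a gene with many introns accumulates aberrant transcripts roughly geometrically:

```python
def aberrant_transcript_fraction(per_intron_error=0.001, introns=8):
    """Expected fraction of transcripts with at least one splicing error,
    assuming independent errors at each intron (illustrative assumption)
    and the ~0.1%-per-intron floor estimated in the paper."""
    return 1.0 - (1.0 - per_intron_error) ** introns
```

For a typical 8-intron vertebrate gene this gives just under 0.8% aberrant transcripts; for a 50-intron gene the fraction approaches 5%.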
Error rates in forensic DNA analysis: definition, numbers, impact and communication.
Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid
2014-09-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind in this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case; here, case-specific probabilities of undetected errors are needed.
A low-power 10-bit 250-KSPS cyclic ADC with offset and mismatch correction*
Institute of Scientific and Technical Information of China (English)
Zhao Hongliang; Zhao Yiqiang; Geng Junfeng; Li Peng; Zhang Zhisheng
2011-01-01
A low-power 10-bit 250-KSPS (kilosamples per second) cyclic analog-to-digital converter (ADC) is presented. The ADC's offset errors are successfully cancelled out through the proper choice of a capacitor switching sequence. The improved redundant signed digit algorithm used in the ADC can tolerate high levels of comparator offset error and switched-capacitor mismatch error. With this structure, the ADC has the advantages of simple circuit configuration, small chip area and low power dissipation. The cyclic ADC, manufactured in the Chartered 0.35 μm 2P4M process, shows a 58.5 dB signal-to-noise-and-distortion ratio and an effective number of bits of 9.4 at a 250 KSPS sample rate. It dissipates 0.72 mW from a 3.3 V power supply and occupies an area of 0.42 × 0.68 mm².
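The offset tolerance of the redundant signed digit (RSD) approach comes from the extra digit value: with digits in {-1, 0, +1}, the comparator thresholds can be off by up to about Vref/4 without corrupting the conversion. A behavioral sketch of a generic 1.5-bit/stage RSD cyclic conversion (our simplified model for illustration, not the chip's actual switched-capacitor circuit):

```python
def rsd_cyclic_adc(vin, vref=1.0, bits=10, offset=0.0):
    """Behavioral model of a 1.5-bit/stage RSD cyclic ADC.  Two comparators
    near +/- vref/4 choose a digit d in {-1, 0, +1}; the stage residue is
    2*v - d*vref.  The redundancy absorbs comparator offsets up to ~vref/4."""
    v, code = vin, 0
    for _ in range(bits):
        if v > vref / 4 + offset:       # 'offset' models comparator error
            d = 1
        elif v < -vref / 4 + offset:
            d = -1
        else:
            d = 0
        code = 2 * code + d             # accumulate signed digits
        v = 2 * v - d * vref            # residue for the next cycle
    return code                         # vin ~= code * vref / 2**bits
```

Even with a comparator offset of 0.2·Vref, the reconstructed value stays within one LSB of the ideal result, which is the tolerance property the abstract refers to.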
Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies
Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.
2010-01-01
We use ray tracing software to model various levels of spacecraft shielding complexity, together with energy deposition pulse height analysis, to study how shielding affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.
Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah
2016-01-01
Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in non-model organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.
Bányai, László; Patthy, László
2016-08-01
A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.
Tissue pattern recognition error rates and tumor heterogeneity in gastric cancer.
Potts, Steven J; Huff, Sarah E; Lange, Holger; Zakharov, Vladislav; Eberhard, David A; Krueger, Joseph S; Hicks, David G; Young, George David; Johnson, Trevor; Whitney-Miller, Christa L
2013-01-01
The anatomic pathology discipline is slowly moving toward a digital workflow, where pathologists will evaluate whole-slide images on a computer monitor rather than glass slides through a microscope. One of the driving factors in this workflow is computer-assisted scoring, which depends on appropriate selection of regions of interest. With advances in tissue pattern recognition techniques, a more precise region of the tissue can be evaluated, no longer bound by the pathologist's patience in manually outlining target tissue areas. Pathologists use entire tissues from which to determine a score in a region of interest when making manual immunohistochemistry assessments. Tissue pattern recognition theoretically offers this same advantage; however, error rates exist in any tissue pattern recognition program, and these error rates contribute to errors in the overall score. To provide a real-world example of tissue pattern recognition, 11 HER2-stained upper gastrointestinal malignancies with high heterogeneity were evaluated. HER2 scoring of gastric cancer was chosen due to its increasing importance in gastrointestinal disease. A method is introduced for quantifying the error rates of tissue pattern recognition. The trade-off between fully sampling the tumor with a given tissue pattern recognition error rate versus randomly sampling a limited number of fields of view with higher target accuracy was modeled with a Monte Carlo simulation. Under most scenarios, stereological methods of sampling limited fields of view outperformed whole-slide tissue pattern recognition approaches for accurate immunohistochemistry analysis. The importance of educating pathologists in the use of statistical sampling is discussed, along with the emerging role of hybrid whole-tissue imaging and stereological approaches.
Wright, Timothy J; Boot, Walter R; Morgan, Chelsea S
2013-09-01
Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB.
Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero
2016-10-01
In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation that enables bit error correction in scenarios of nonspectral flatness between the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. The proposed hybrid ICI mitigation technique exploits the advantages of signal equalization at both levels: the physical level, for any digital and analog pulse shaping, and the bit-data level, with its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a non-data-aided equalizer applied to each optical subcarrier and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers, regardless of prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit error rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase-shift keying Nyquist wavelength-division multiplexing system. After ICI mitigation, back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.
Accurate and fast methods to estimate the population mutation rate from error prone sequences
Directory of Open Access Journals (Sweden)
Miyamoto Michael M
2009-08-01
Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other unfinished products. This study is premised on the simple idea that a random sequence error, due to a chance accident during data collection or recording, will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite-sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design along with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
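The singleton-discarding strategy yields a simple modification of the classic Watterson estimator: under the infinite-sites model the expected number of singletons is θ itself, so dropping them just changes the normalizing constant from a_n to a_n - 1. A sketch (our simplified, unfolded-spectrum reading of the approach, not the authors' code):

```python
def watterson_theta(segregating_sites, n):
    """Classic Watterson estimator: theta_W = S / a_n, a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, n))
    return segregating_sites / a_n

def watterson_theta_no_singletons(shared_sites, n):
    """Error-robust variant: count only shared polymorphisms (each variant
    base seen in >= 2 sequences).  Since E[singletons] = theta, the
    estimator rescales by (a_n - 1) instead of a_n."""
    a_n = sum(1.0 / i for i in range(1, n))
    return shared_sites / (a_n - 1.0)
```

Random sequencing errors inflate only the singleton class, so the second estimator remains unbiased in their presence, at the cost of somewhat higher variance.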
Liu, Yun; Zhao, Shanghong; Gong, Zizheng; Zhao, Jing; Dong, Chen; Li, Xuan
2016-04-10
Displacement damage (DD) induced bit error ratio (BER) performance degradation in on-off keying (OOK), pulse position modulation (PPM), differential phase-shift keying (DPSK), and homodyne binary phase-shift keying (BPSK) based systems was simulated and discussed in this paper under 1 MeV neutron irradiation to a total fluence of 1×10¹² n/cm². Degradation of the main optoelectronic devices included in the communication systems was analyzed on the basis of existing experimental data. The system BER degradation was subsequently simulated, and the variation of BER with neutron irradiation location was also obtained. The results show that DD in the Er-doped fiber amplifier (EDFA) is the dominant cause of system degradation, and that a BPSK-based system performs better against DD than the other three systems. To improve the radiation hardness of communication systems against DD, protection and enhancement of the EDFA are required, and a homodyne BPSK modulation scheme is the preferred choice.
Zamarreno-Ramos, Carlos; Kulkarni, Raghavendra; Silva-Martinez, Jose; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2013-10-01
This paper presents a low-power, fast ON/OFF-switchable, voltage-mode implementation of a driver/receiver pair intended for high-speed bit-serial Low Voltage Differential Signaling (LVDS) Address Event Representation (AER) chip grids, where short (e.g., 32-bit) sparse data packages are transmitted. Voltage-mode drivers intrinsically require half the power of their current-mode counterparts and do not require common-mode voltage control. However, fast ON/OFF switching requires a special high-speed voltage regulator which must be kept ON during data pauses, and hence its power consumption must be minimized, resulting in tight design constraints. A proof-of-concept test prototype chip has been designed and fabricated in low-cost standard 0.35 μm CMOS. At ±500 mV voltage swing with a 500 Mbps serial bit rate and 32-bit events, current consumption scales from 15.9 mA (7.7 mA for the driver and 8.2 mA for the receiver) at a 10 Mevent/s rate to 406 μA (343 μA for the driver and 62.5 μA for the receiver) for an event rate below 10 Kevent/s, therefore achieving a rate-dependent power saving of up to 40 times while keeping switching times at 1.5 ns. The maximum achievable event rate was 13.7 Meps at a 638 Mbps serial bit rate. Additionally, the differential voltage swing is tunable, allowing further power reductions.
Study on Cell Error Rate of a Satellite ATM System Based on CDMA
Institute of Scientific and Technical Information of China (English)
赵彤宇; 张乃通
2003-01-01
In this paper, the cell error rate (CER) of a CDMA-based satellite ATM system is analyzed. Two fading models, i.e., the partial fading model and the total fading model, are presented according to multi-path propagation fading and the shadow effect. Based on the total fading model, the relation of CER vs. the number of subscribers at various elevations under 2D-RAKE receiving and non-diversity receiving is obtained. The impact of pseudo-noise (PN) code length on the cell error rate is also considered. It is found that maximum-likelihood combination of the multi-path signal does not improve system performance when multiple access interference (MAI) is small; on the contrary, the performance may even be worse.
Smadi, Mahmoud A.
2012-12-06
In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system with imperfect channel phase recovery is considered. The results presented demonstrate the system performance under realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation with a 95% confidence interval. As the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Our results are therefore expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
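The basic machinery of such a simulation — Monte Carlo error counting plus a 95% confidence interval on the estimate — can be sketched as follows (idealized coherent BPSK over AWGN only; the paper's contribution is the extension to Nakagami-m fading with imperfect phase recovery, which this sketch omits):

```python
import math
import random

def bpsk_ber_theory(ebn0_db):
    """Exact BER of coherent BPSK over AWGN: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))

def simulate_bpsk_ber(ebn0_db, num_bits=200_000, seed=1):
    """Monte Carlo BER estimate with a normal-approximation 95% CI."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2 * 10 ** (ebn0_db / 10)))  # noise std for Eb = 1
    errors = sum(1 for _ in range(num_bits)
                 if 1.0 + rng.gauss(0.0, sigma) < 0.0)   # send +1, decide by sign
    p = errors / num_bits
    half = 1.96 * math.sqrt(p * (1 - p) / num_bits)      # 95% half-width
    return p, (p - half, p + half)
```

As the abstract notes, increasing the number of runs N tightens the estimate: the half-width of the confidence interval shrinks as 1/sqrt(N).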
LaPorte, Gerald M; Stephens, Joseph C; Beuchel, Amanda K
2010-01-01
The examination of printing defects, or imperfections, found on printed or copied documents has been recognized as a generally accepted approach for linking questioned documents to a common source. This research paper will highlight the results from two mutually exclusive studies. The first involved the examination and characterization of printing defects found in a controlled production run of 500,000 envelopes bearing text and images. It was concluded that printing defects are random occurrences and that morphological differences can be used to identify variations within the same production batch. The second part incorporated a blind study to assess the error rate of associating randomly selected envelopes from different retail locations to a known source. The examination was based on the comparison of printing defects in the security patterns found in some envelopes. The results demonstrated that it is possible to associate envelopes to a common origin with a 0% error rate.
Chen, Jian; Dutton, Zachary; Lazarus, Richard; Guha, Saikat
2011-01-01
The quantum states of two laser pulses---coherent states---are never mutually orthogonal, making perfect discrimination impossible. Even so, coherent states can achieve the ultimate quantum limit for capacity of a classical channel, the Holevo capacity. Attaining this requires the receiver to make joint-detection measurements on long codeword blocks, optical implementations of which remain unknown. We report the first experimental demonstration of a joint-detection receiver, demodulating quaternary pulse-position-modulation (PPM) codewords at a word error rate of up to 40% (2.2 dB) below that attained with direct-detection, the largest error-rate improvement over the standard quantum limit reported to date. This is accomplished with a conditional nulling receiver, which uses optimized-amplitude coherent pulse nulling, single photon detection and quantum feedforward. We further show how this translates into coding complexity improvements for practical PPM systems, such as in deep-space communication. We antici...
A minimum-error, energy-constrained neural code is an instantaneous-rate code.
Johnson, Erik C; Jones, Douglas L; Ratnam, Rama
2016-04-01
Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al. Frontiers in Computational Neuroscience, 9, 61 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals.
Protected Polycrystalline Diamond Compact Bits For Hard Rock Drilling
Energy Technology Data Exchange (ETDEWEB)
Robert Lee Cardenas
2000-10-31
Two bits were designed. One bit was fabricated and tested at Terra-Tek's Drilling Research Laboratory. Fabrication of the second bit was not completed due to complications in fabrication and in meeting scheduled test dates at the test facility. A conical bit was tested in Carthage Marble (compressive strength 14,500 psi) and Sierra White Granite (compressive strength 28,200 psi). During the testing, hydraulic horsepower, bit weight, and rotation rate were varied for the Conical Bit, a Varel Tricone Bit, and a Varel PDC bit. The Conical Bit did cut rock at a reasonable rate in both rocks. Beneficial effects from the near- and through-cutter water nozzles were not evident in the marble and were not conclusive in the granite, due to test conditions in both cases. At atmospheric drilling, the Conical Bit's penetration rate was as good as that of the standard PDC bit and better than that of the Tricone Bit. Torque requirements for the Conical Bit were higher than those of the standard bits. Spudding the conical bit into the rock required some care to avoid overloading the nose cutters. The nose design should be evaluated to improve the bit's spudding characteristics.
Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm
Directory of Open Access Journals (Sweden)
F Hermens
2014-08-01
In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly measured offset discrimination thresholds as the index of performance, and it is therefore unclear whether other measures of performance provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with the threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.
Testing Error Correcting Codes by Multicanonical Sampling of Rare Events
Iba, Yukito; Hukushima, Koji
2007-01-01
The idea of rare event sampling is applied to the estimation of the performance of error-correcting codes. The essence of the idea is importance sampling of the pattern of noises in the channel by Multicanonical Monte Carlo, which enables efficient estimation of tails of the distribution of bit error rate. The idea is successfully tested with a convolutional code.
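The core idea of noise-pattern importance sampling can be illustrated with a toy version: estimate the tiny word-error probability of a 3-bit repetition code over a binary symmetric channel by sampling bit flips at an inflated rate and reweighting each sample with the likelihood ratio. This is a minimal sketch of plain importance sampling, not the multicanonical Monte Carlo of the paper, and the flip probability and sample count are illustrative.

```python
import random

random.seed(1)

def rep3_error_prob_is(p, q=0.5, n_samples=200_000):
    """Importance-sample the word-error probability of a 3-bit
    repetition code over a BSC with flip probability p, by drawing
    flips at the inflated rate q and reweighting with the
    likelihood ratio (p/q)^k * ((1-p)/(1-q))^(3-k)."""
    total = 0.0
    for _ in range(n_samples):
        k = sum(random.random() < q for _ in range(3))
        if k >= 2:  # majority vote fails
            total += (p / q) ** k * ((1 - p) / (1 - q)) ** (3 - k)
    return total / n_samples

p = 0.01
est = rep3_error_prob_is(p)
exact = 3 * p**2 * (1 - p) + p**3   # analytic word-error probability
```

With naive sampling, events this rare would need millions of draws; the reweighted estimator concentrates samples on the error-causing noise patterns, which is the same motivation as the multicanonical scheme.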
Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl
2007-01-01
The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback over the satellite link, via a cross-layer approach, is also simulated. The Inmarsat BGAN system at 256 kbit/s is used as the test case. This system operates at low loss rates, guaranteeing a packet loss rate of not more than 10^-3. For high-end applications such as 'reporter-in-the-field' live broadcast, it is crucial to obtain high quality without increasing delay.
Improved Error Thresholds for Measurement-Free Error Correction
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
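The bit-flip code named above has a simple classical skeleton: encode one logical bit into three physical bits, read two parity syndromes, and flip the bit they point to. The sketch below is only that classical majority-vote version, not the coherent, measurement-free circuit studied in the paper.

```python
def encode(bit):
    """Triple-redundancy encoding of one logical bit."""
    return [bit, bit, bit]

def syndrome(code):
    """Two parity checks: (c0 xor c1, c1 xor c2)."""
    return (code[0] ^ code[1], code[1] ^ code[2])

def correct(code):
    """Flip the single bit identified by the syndrome, if any."""
    flipped = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(code))
    out = list(code)
    if flipped is not None:
        out[flipped] ^= 1
    return out

def decode(code):
    """Majority vote over the three physical bits."""
    return 1 if sum(code) >= 2 else 0

# Any single bit-flip error is corrected:
results = []
for b in (0, 1):
    for i in range(3):
        c = encode(b)
        c[i] ^= 1                      # inject one bit-flip error
        results.append(decode(correct(c)) == b)
```

The quantum versions replace the parity reads with ancilla-mediated syndrome extraction; the measurement-free variant of the paper routes that syndrome information coherently into the correction step instead of measuring it.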
Influenza infection rates, measurement errors and the interpretation of paired serology.
Directory of Open Access Journals (Sweden)
Simon Cauchemez
Serological studies are the gold standard method to estimate influenza infection attack rates (ARs) in human populations. In a common protocol, blood samples are collected before and after the epidemic in a cohort of individuals, and a rise in haemagglutination-inhibition (HI) antibody titers during the epidemic is considered a marker of infection. Because of inherent measurement errors, a 2-fold rise is usually considered insufficient evidence for infection, and seroconversion is therefore typically defined as a 4-fold rise or more. Here, we revisit this widely accepted 70-year-old criterion. We develop a Markov chain Monte Carlo data augmentation model to quantify measurement errors and reconstruct the distribution of latent true serological status in a Vietnamese 3-year serological cohort in which replicate measurements were available. We estimate that the 1-sided probability of a 2-fold error is 9.3% (95% Credible Interval, CI: 3.3%, 17.6%) when antibody titer is below 10, but is 20.2% (95% CI: 15.9%, 24.0%) otherwise. After correction for measurement errors, we find that the proportion of individuals with 2-fold rises in antibody titers was too large to be explained by measurement errors alone. Estimates of ARs vary greatly depending on whether those individuals are included in the definition of the infected population. A simulation study shows that our method is unbiased. The 4-fold rise case definition is relevant when aiming at a specific diagnostic for individual cases, but the justification is less obvious when the objective is to estimate ARs. In particular, it may lead to large underestimates of ARs. Determining which biological phenomenon contributes most to 2-fold rises in antibody titers is essential to assess bias with the traditional case definition and offer improved estimates of influenza ARs.
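The classical 4-fold-rise case definition discussed above is easy to state in code; the paired titers below are hypothetical examples on the usual 2-fold dilution scale, and the function implements only the traditional rule, not the paper's measurement-error model.

```python
def fold_rise(pre, post):
    """Ratio of post- to pre-epidemic HI titer."""
    return post / pre

def seroconverted(pre, post, min_fold=4):
    """Traditional case definition: a 4-fold or greater titer rise."""
    return fold_rise(pre, post) >= min_fold

# Hypothetical paired titers (pre-epidemic, post-epidemic)
pairs = [(10, 10), (10, 20), (10, 40), (20, 160)]
flags = [seroconverted(pre, post) for pre, post in pairs]
attack_rate = sum(flags) / len(flags)
```

Note how the 2-fold riser (10, 20) is excluded by this rule; the paper's point is that excluding all such individuals can substantially underestimate the attack rate.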
Error baseline rates of five sample preparation methods used to characterize RNA virus populations
Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods. PMID:28182717
Learning High-Dimensional Markov Forest Distributions: Analysis of Error Rates
Tan, Vincent Y F; Willsky, Alan S
2010-01-01
The problem of learning forest-structured discrete graphical models from i.i.d. samples is considered. An algorithm based on pruning of the Chow-Liu tree through adaptive thresholding is proposed. It is shown that this algorithm is both structurally consistent and risk consistent and the error probability of structure learning decays faster than any polynomial in the number of samples under fixed model size. For the high-dimensional scenario where the size of the model d and the number of edges k scale with the number of samples n, sufficient conditions on (n,d,k) are given for the algorithm to satisfy structural and risk consistencies. In addition, the extremal structures for learning are identified; we prove that the independent (resp. tree) model is the hardest (resp. easiest) to learn using the proposed algorithm in terms of error rates for structure learning.
On the symmetric α-stable distribution with application to symbol error rate calculations
Soury, Hamza
2016-12-24
The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single input single output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for a generalized fading distribution, such as the extended generalized-k distribution. Later, simpler expressions of these error rates are deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
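As a numerical sanity check on results of this kind, symmetric α-stable noise can be generated with the Chambers-Mallows-Stuck method and used to Monte-Carlo the error rate of binary antipodal signaling. This is a generic simulation sketch with arbitrary amplitude and sample count, not the closed-form Fox H analysis of the paper; for α = 2 the stable law reduces to a Gaussian, which gives a known reference value.

```python
import math
import random

random.seed(7)

def sym_alpha_stable(alpha):
    """Chambers-Mallows-Stuck sampler for a standard symmetric
    alpha-stable variate (beta = 0, unit scale)."""
    u = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)  # Cauchy case
    x = math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
    return x * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha)

def bpsk_ser(alpha, amplitude, n=200_000):
    """Monte-Carlo symbol error rate of +/-A antipodal signaling
    in additive symmetric alpha-stable noise."""
    errors = sum(1 for _ in range(n)
                 if amplitude + sym_alpha_stable(alpha) < 0.0)
    return errors / n

# alpha = 2 gives Gaussian noise of variance 2, so the rate
# should match Q(A / sqrt(2)) = erfc(A / 2) / 2.
ser = bpsk_ser(2.0, 2.0)
analytic = 0.5 * math.erfc(2.0 / 2.0)
```

For heavy-tailed cases (α < 2) no such elementary closed form exists, which is exactly where the paper's Fox H expressions become useful.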
Stability Comparison of Recordable Optical Discs—A Study of Error Rates in Harsh Conditions
Slattery, Oliver; Lu, Richang; Zheng, Jian; Byers, Fred; Tang, Xiao
2004-01-01
The reliability and longevity of any storage medium is a key issue for archivists and preservationists as well as for the creators of important information. This is particularly true in the case of digital media such as DVD and CD where a sufficient number of errors may render the disc unreadable. This paper describes an initial stability study of commercially available recordable DVD and CD media using accelerated aging tests under conditions of increased temperature and humidity. The effect of prolonged exposure to direct light is also investigated and shown to have an effect on the error rates of the media. Initial results show that high quality optical media have very stable characteristics and may be suitable for long-term storage applications. However, results also indicate that significant differences exist in the stability of recordable optical media from different manufacturers. PMID:27366630
Institute of Scientific and Technical Information of China (English)
SUN Liuquan; ZHENG Zhongguo
1999-01-01
A central limit theorem for the integrated square error (ISE) of the kernel hazard rate estimators is obtained based on left-truncated and right-censored data. An asymptotic representation of the mean integrated square error (MISE) for the kernel hazard rate estimators is also presented.
Forward error correction based on algebraic-geometric theory
A Alzubi, Jafar; M Chen, Thomas
2014-01-01
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression of the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
KEAMANAN CITRA DENGAN WATERMARKING MENGGUNAKAN PENGEMBANGAN ALGORITMA LEAST SIGNIFICANT BIT
Directory of Open Access Journals (Sweden)
Kurniawan Kurniawan
2015-01-01
Image security is the process of protecting digital images. One method of securing a digital image is watermarking with the Least Significant Bit (LSB) algorithm. The main concept of image security using the LSB algorithm is to replace bit values of the image at specific locations so that a pattern is created; the pattern produced by replacing the bit values is called the watermark. Watermarking a digital image with the plain LSB algorithm is conceptually simple, so the embedded information is easily lost under attacks such as added noise or compression. A modification, a development of the LSB algorithm, is therefore needed to reduce the distortion of the watermark information under those attacks. This research is divided into six processes: color extraction from the cover image, busy-area search, watermark embedding, measurement of the embedding accuracy, watermark extraction, and measurement of the extraction accuracy. Color extraction obtains the blue color component of the cover image. The watermark information is embedded in a busy area, found by searching for the region of the cover image with the largest number of edges. The watermark image is then embedded into the cover image using several developments of the LSB algorithm, producing a watermarked image whose quality is assessed by computing the Peak Signal-to-Noise Ratio. Before the watermarked image is extracted, it is tested by adding noise and by compressing it to JPG format. The accuracy of the extraction result is assessed by computing the Bit Error Rate.
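The basic LSB embed/extract cycle and the Bit Error Rate check can be sketched in a few lines. The pixel values here are a made-up list standing in for a blue-channel image, the "attack" is a crude value shift, and the busy-area search of the paper is omitted; this is the plain LSB baseline, not the developed algorithm.

```python
def embed_lsb(pixels, bits):
    """Write watermark bits into the least significant bit
    of successive pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, n_bits):
    """Read the watermark back from the LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

def bit_error_rate(sent, received):
    """Fraction of watermark bits recovered incorrectly."""
    return sum(a != b for a, b in zip(sent, received)) / len(sent)

cover = [137, 52, 200, 88, 19, 255, 64, 173]   # hypothetical pixels
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, mark)

ber_clean = bit_error_rate(mark, extract_lsb(stego, len(mark)))
# Crude attack: shift every other pixel by one intensity level,
# which flips the LSB of the affected pixels
attacked = [min(255, p + 1) if i % 2 else p
            for i, p in enumerate(stego)]
ber_attacked = bit_error_rate(mark, extract_lsb(attacked, len(mark)))
```

The round trip is lossless (BER 0), while even a one-level intensity shift destroys half the watermark bits, which is exactly the fragility the paper's modifications aim to reduce.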
Stinger Enhanced Drill Bits For EGS
Energy Technology Data Exchange (ETDEWEB)
Durrand, Christopher J. [Novatek International, Inc., Provo, UT (United States); Skeem, Marcus R. [Novatek International, Inc., Provo, UT (United States); Crockett, Ron B. [Novatek International, Inc., Provo, UT (United States); Hall, David R. [Novatek International, Inc., Provo, UT (United States)
2013-04-29
The project objectives were to design, engineer, test, and commercialize a drill bit suitable for drilling in hard rock and high temperature environments (10,000 meters) likely to be encountered in drilling enhanced geothermal wells. The goal is to provide a drill bit that can aid in achieving a penetration rate three times that of conventional drilling. Novatek has sought to leverage its polycrystalline diamond technology and a new conical cutter shape, known as the Stinger®, for this purpose. Novatek has developed a fixed-bladed bit, known as the JackBit®, populated with both shear cutters and Stingers, that is currently being tested by major drilling companies for geothermal and oil and gas applications. The JackBit concept comprises a fixed-bladed bit with a center indenter, referred to as the Jack. The JackBit has been extensively tested in the lab and in the field, and has been transferred to a major bit manufacturer and oil service company. Except for the attached published reports, all other information is confidential.
Examining rating quality in writing assessment: rater agreement, error, and accuracy.
Wind, Stefanie A; Engelhard, George
2012-01-01
The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.
Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise
Soury, Hamza
2015-06-01
The Laplacian noise has received much attention in recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions of the conditional and the average probability of error are obtained in terms of the Fox H function. Simplifications for some special cases of fading are presented, and the resulting formulas often end up being expressed in terms of well-known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.
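For the simplest constellation the Laplacian case is easy to verify numerically: for antipodal (BPSK) signaling of amplitude A in Laplace noise of scale b, the error probability is exp(-A/b)/2, which a short Monte-Carlo run reproduces. The sketch below uses no fading and arbitrary illustrative parameters, unlike the EGK-averaged MPSK analysis of the paper.

```python
import math
import random

random.seed(3)

A, b = 2.0, 1.0     # signal amplitude and Laplace scale (illustrative)
n = 200_000

# A Laplace(0, b) variate is a random sign times an Exponential(1/b);
# a BPSK error occurs when +A plus noise falls below zero.
errors = sum(1 for _ in range(n)
             if A + random.choice([-1, 1]) * random.expovariate(1.0 / b) < 0)
ser_mc = errors / n
ser_exact = 0.5 * math.exp(-A / b)   # P(noise < -A) for Laplace noise
```

The exponential (rather than Gaussian) tail of this expression is what makes the Laplacian-noise error floor behave so differently from the AWGN case.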
Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise
DEFF Research Database (Denmark)
Christensen, Lars P.B.
2005-01-01
Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined more closely, as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection can be achieved by a Multiple-Input Multiple-Output (MIMO) whitening filter followed by a traditional BCJR algorithm. The Gauss-Markov noise model provides a reasonable approximation for co-channel interference, making it an interesting single-user detector for many multiuser communication systems.
Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J
2014-10-01
The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed before implementation of structured reports. Calculated error rates included the average number of errors per report, the average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33 to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5 to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9 to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.
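Comparisons like the 33% to 26% drop reported above are typically assessed with a two-proportion z-test. A minimal stdlib sketch follows; the sample sizes are assumed equal to the 644 reports of this study purely for illustration, so the resulting p-value will not match the published one, whose denominators differ.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed counts: 33% vs 26% of 644 reports each (illustrative only)
z = two_proportion_z(round(0.33 * 644), 644, round(0.26 * 644), 644)
p_two_sided = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
```

With these assumed denominators the difference is comfortably significant at the 5% level, consistent in direction with the paper's reported p = 0.024.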
A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems
Directory of Open Access Journals (Sweden)
Dong Shi-Wei
2007-01-01
A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistaged bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
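The conventional greedy bit-loading baseline that the proposed algorithm is compared against can be sketched with a heap: at each step, add one bit to the subchannel whose incremental power is smallest, using the usual SNR-gap approximation dP(b -> b+1) = gap * 2^b / gain. The channel gains and gap value below are illustrative, and this is the baseline only, not the multistage heuristic of the paper.

```python
import heapq

def greedy_bit_loading(gains, target_bits, gap=1.0):
    """Allocate target_bits one at a time, always to the subchannel
    with the smallest incremental power under the gap approximation."""
    bits = [0] * len(gains)
    # Heap of (incremental power for the next bit, subchannel index)
    heap = [(gap / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    total_power = 0.0
    for _ in range(target_bits):
        dp, i = heapq.heappop(heap)
        total_power += dp
        bits[i] += 1
        heapq.heappush(heap, (gap * 2 ** bits[i] / gains[i], i))
    return bits, total_power

gains = [4.0, 1.0, 0.25]          # hypothetical subchannel SNRs
bits, power = greedy_bit_loading(gains, target_bits=6)
```

Each allocation costs one heap operation, so loading B bits over N subchannels is O(B log N); the paper's multiple-bits and parallel loading stages exist precisely to avoid paying this one-bit-at-a-time cost for large B.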
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays cause changes in both the delay bias and random errors, with possibly strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using that size of window, at which the absolute values of these errors are equal and opposite, thus cancelling each other, allowing minimally biased measurement of neural coding.
An optical ultrafast random bit generator
Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael
2010-01-01
The generation of random bit sequences based on non-deterministic physical mechanisms is of paramount importance for cryptography and secure communications. High data rates also require extremely fast generation rates and robustness to external perturbations. Physical generators based on stochastic noise sources have been limited in bandwidth to ~100 Mbit/s generation rates. We present a physical random bit generator, based on a chaotic semiconductor laser, having time-delayed self-feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.
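The extraction step described, taking a high-order discrete derivative of the digitized intensity and keeping only its least significant bits, is simple to sketch. Here a logistic-map sequence stands in for the digitized chaotic laser intensity; this is purely illustrative of the bit-extraction arithmetic and makes no claim about the optics.

```python
def nth_difference(samples, order):
    """Discrete n-th derivative via repeated first differences."""
    for _ in range(order):
        samples = [b - a for a, b in zip(samples, samples[1:])]
    return samples

def lsb_bits(values, n_lsb):
    """Keep the n_lsb least significant bits of each value."""
    out = []
    for v in values:
        out.extend((v >> k) & 1 for k in range(n_lsb))
    return out

# Stand-in chaotic source: logistic map digitized to 8 bits
x, samples = 0.37, []
for _ in range(5000):
    x = 3.99 * x * (1.0 - x)
    samples.append(int(x * 255))

bits = lsb_bits(nth_difference(samples, order=3), n_lsb=4)
ones_fraction = sum(bits) / len(bits)
```

Differencing suppresses the slowly varying (predictable) part of the waveform, and discarding the high bits removes the remaining amplitude structure, leaving a stream whose ones/zeros balance is close to 1/2; real generators then confirm this with standard statistical test suites.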
The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.
Fadaee, Shannon B; Migliaccio, Americo A
2016-04-01
The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation.
Data-driven region-of-interest selection without inflating Type I error rate.
Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard
2017-01-01
In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies.
Modified Golden Codes for Improved Error Rates Through Low Complex Sphere Decoder
Directory of Open Access Journals (Sweden)
K.Thilagam
2013-05-01
In recent years, golden codes have proven to exhibit superior performance in a wireless MIMO (Multiple Input Multiple Output) scenario compared with any other code. However, a serious limitation associated with them is their increased decoding complexity. This paper attempts to resolve this challenge through a suitable modification of the golden code such that a less complex sphere decoder can be used without much compromise in the error rates. A minimum polynomial equation is introduced to obtain a reduced golden ratio (RGR) number for the golden code, which demands only a low-complexity decoding procedure. One attractive approach used in this paper is that the effective channel matrix is exploited to perform single-symbol-wise decoding instead of decoding grouped symbols, using a sphere decoder with a tree-search algorithm. It is observed that a low decoding complexity of O(q^1.5) is obtained, against O(q^2.5) for the conventional method. Simulation analysis shows that, in addition to the reduced decoding complexity, improved error rates are also obtained.
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
Partly linear regression models are useful in practice, but little has been done in the literature to adapt them to real data that are dependent and conditionally heteroscedastic. In this paper, the estimators of the regression components are constructed via local polynomial fitting and their large-sample properties are explored. Under certain mild regularity conditions, conditions are obtained to ensure that the estimators of the nonparametric component and its derivatives are consistent up to convergence rates that are optimal in the i.i.d. case, and that the estimator of the parametric component is root-n consistent with the same rate as for a parametric model. The technique adopted in the proof differs from that used by Hamilton and Truong under i.i.d. samples and corrects errors in that reference.
Celik, Cihangir
Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to an SEU, a Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), or Single Event Burnout (SEB) has a permanent effect on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano
Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah
2016-07-01
Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality, and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignments, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates be considered prior to building sequencing libraries, that reporting genotyping error rates become standard practice, and that the effects of genotyping errors on inference be evaluated in restriction-enzyme-based SNP studies.
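The core of estimating genotyping error from known dyads is counting Mendelian incompatibilities: loci where mother and offspring share no allele. A minimal sketch (the data layout and function names are invented for illustration, not taken from the study):

```python
def incompatible(mother, offspring):
    """A biallelic genotype is a 2-tuple of alleles; a locus is Mendelian-
    incompatible when mother and offspring share no allele."""
    return not (set(mother) & set(offspring))

def error_rate_lower_bound(dyads):
    """Fraction of loci showing a Mendelian incompatibility across
    mother-offspring dyads. Each dyad is (mother_loci, offspring_loci).
    This is a lower bound on the genotyping error rate, because errors
    that stay Mendelian-compatible go undetected."""
    loci = [(m, o) for dyad in dyads for m, o in zip(*dyad)]
    bad = sum(incompatible(m, o) for m, o in loci)
    return bad / len(loci)
```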
Bit corruption correlation and autocorrelation in a stochastic binary nano-bit system
Sa-nguansin, Suchittra
2014-10-01
The corruption process of a binary nano-bit model resulting from an interaction with N stochastically-independent Brownian agents (BAs) is studied with the help of Monte-Carlo simulations and analytic continuum theory to investigate the data corruption process through the measurement of the spatial two-point correlation and the autocorrelation of bit corruption at the origin. By taking into account a more realistic correlation between bits, this work will contribute to the understanding of the soft error or the corruption of data stored in nano-scale devices.
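A toy Monte-Carlo version of this setup can be sketched as random walkers flipping bits on a ring, followed by a circular two-point correlation of the corruption state (all parameters are arbitrary illustrative choices, not the paper's model):

```python
import random

def simulate_corruption(n_bits=256, n_agents=8, steps=2000, seed=1):
    """Bits start at 0; each Brownian agent random-walks on a ring of bit
    cells and flips every cell it visits (a toy corruption model)."""
    random.seed(seed)
    bits = [0] * n_bits
    pos = [random.randrange(n_bits) for _ in range(n_agents)]
    for _ in range(steps):
        for a in range(n_agents):
            pos[a] = (pos[a] + random.choice((-1, 1))) % n_bits
            bits[pos[a]] ^= 1
    return bits

def two_point_correlation(bits, d):
    """Circular two-point correlation of the corruption state at lag d."""
    n = len(bits)
    mean = sum(bits) / n
    a = [b - mean for b in bits]
    return sum(a[i] * a[(i + d) % n] for i in range(n)) / n
```

By the Cauchy-Schwarz inequality the lag-0 value (the variance) bounds the magnitude of the correlation at every other lag.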
Numerical optimization of writer and media for bit patterned magnetic recording
Kovacs, A.; Oezelt, H.; Schabes, M. E.; Schrefl, T.
2016-07-01
In this work, we present a micromagnetic study of the performance potential of bit-patterned (BP) magnetic recording media via joint optimization of the design of the media and of the magnetic write heads. Because the design space is large and complex, we developed a novel computational framework suitable for parallel implementation on compute clusters. Our technique combines advanced global optimization algorithms and finite-element micromagnetic solvers. Targeting data bit densities of 4 Tb/in^2, we optimize designs for centered, staggered, and shingled BP writing. The magnetization dynamics of the switching of the exchange-coupled composite BP islands of the media is treated micromagnetically. Our simulation framework takes into account not only the dynamics of on-track errors but also the thermally induced adjacent-track erasure. With co-optimized write heads, the results show superior performance of shingled BP magnetic recording, where we identify two particular designs achieving write bit-error rates of 1.5 × 10^-8 and 8.4 × 10^-8, respectively. A detailed description of the key design features of these designs is provided and contrasted with centered and staggered BP designs, which yielded write bit-error rates of only 2.8 × 10^-3 (centered design) and 1.7 × 10^-2 (staggered design) even under optimized conditions.
Ahmed, Qasim Zeeshan
2015-02-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; designing an SER detector for cooperative communications therefore becomes an optimization problem. Evolutionary algorithms are capable of finding the global minimum, so particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. Simulation results show that the SER performance of the proposed detectors is within 2 dB of the ML detector, and a significant improvement in SER performance is observed over the MMSE detector. The computational complexity of the proposed detector is much lower than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with the number of relays.
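A minimal 1-D particle swarm optimizer of the kind the paper exploits can be sketched as follows. The inertia and acceleration constants are common textbook values, not the paper's settings, and the multimodal Rastrigin test function stands in for the non-linear SER surface:

```python
import math, random

def pso_minimize(f, lo, hi, n_particles=30, iters=200, seed=7):
    """Minimal 1-D particle swarm optimization: inertia plus cognitive and
    social pulls toward the personal and global bests."""
    random.seed(seed)
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest, pval = x[:], [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (0.7 * v[i]
                    + 1.5 * random.random() * (pbest[i] - x[i])
                    + 1.5 * random.random() * (gbest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)  # clamp to the search box
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval
```

On a surface with many local minima, gradient-free swarms like this trade per-step efficiency for a much better chance of escaping the wrong basin.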
Four-Dimensional Coded Modulation with Bit-wise Decoders for Future Optical Communications
Alvarado, Alex
2014-01-01
Coded modulation (CM) is the combination of forward error correction (FEC) and multilevel constellations. Coherent optical communication systems result in a four-dimensional (4D) signal space, which naturally leads to 4D-CM transceivers. A practically attractive design paradigm is to use a bit-wise decoder, where the detection process is (suboptimally) separated into two steps: soft-decision demapping followed by binary decoding. In this paper, bit-wise decoders are studied from an information-theoretic viewpoint. 4D constellations with up to 4096 constellation points are considered. Metrics to predict the post-FEC bit-error rate (BER) of bit-wise decoders are analyzed. The mutual information is shown to fail at predicting the post-FEC BER of bit-wise decoders and the so-called generalized mutual information is shown to be a much more robust metric. It is also shown that constellations that transmit and receive information in each polarization and quadrature independently (e.g., PM-QPSK, PM-16QAM, and PM-64QA...
Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.
1994-01-01
A double acting bit holder that permits bits held in it to be resharpened during cutting action to increase energy efficiency by reducing the amount of small chips produced. The holder consists of: a stationary base portion capable of being fixed to a cutter head of an excavation machine and having an integral extension therefrom with a bore hole therethrough to accommodate a pin shaft; a movable portion coextensive with the base having a pin shaft integrally extending therefrom that is insertable in the bore hole of the base member to permit the movable portion to rotate about the axis of the pin shaft; a recess in the movable portion of the holder to accommodate a shank of a bit; and a biased spring disposed in adjoining openings in the base and movable portions of the holder to permit the movable portion to pivot around the pin shaft during cutting action of a bit fixed in a turret to allow front, mid and back positions of the bit during cutting to lessen creation of small chip amounts and resharpen the bit during excavation use.
Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function
Chen, Xiaogang; Gu, Jian; Yang, Hongkui
2007-01-01
The word error rate (WER) of soft-decision-decoded binary block codes rarely has a closed form. Bounding techniques are widely used to evaluate the performance of the maximum-likelihood decoding algorithm, but the existing bounds are not tight enough, especially at low signal-to-noise ratios, and become looser when a suboptimum decoding algorithm is used. This paper proposes a new concept, the square radius probability density function (SR-PDF) of the decision region, to evaluate the WER. Based on the SR-PDF, the WER of binary block codes can be calculated precisely for ML and suboptimum decoders. Furthermore, for a long binary block code, the SR-PDF can be approximated by a Gamma distribution with only two parameters that can be measured easily. Using this property, two closed-form approximate expressions are proposed that agree closely with simulated WER results for the codes of interest.
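The claim that the SR-PDF of a long code is well approximated by a Gamma distribution "with only two parameters that can be measured easily" suggests a simple method-of-moments fit, sketched here (generic statistics, not the paper's estimator):

```python
import random

def gamma_moment_fit(samples):
    """Method-of-moments fit of a Gamma(shape k, scale theta) distribution:
    k = mean^2 / var, theta = var / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean * mean / var, var / mean
```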
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2010-10-01
In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.
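For reference, the closed-form BPSK-over-AWGN ingredient of such BER analyses can be checked against a direct Monte-Carlo simulation (a generic single-link sketch, not the relaying analysis itself):

```python
import math, random

def bpsk_ber_theory(ebno_linear):
    """Closed-form AWGN BER for BPSK: Q(sqrt(2 Eb/N0)) = erfc(sqrt(Eb/N0)) / 2."""
    return 0.5 * math.erfc(math.sqrt(ebno_linear))

def bpsk_ber_sim(ebno_linear, n_bits=200000, seed=3):
    """Monte-Carlo BER: transmit +/-1 (Eb = 1), add Gaussian noise, hard-decide."""
    random.seed(seed)
    sigma = math.sqrt(1.0 / (2.0 * ebno_linear))
    errors = 0
    for _ in range(n_bits):
        bit = random.randrange(2)
        r = (1.0 if bit else -1.0) + random.gauss(0.0, sigma)
        errors += (r > 0) != bool(bit)
    return errors / n_bits
```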
Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying
Fareed, Muhammad Mehboob
2014-06-01
In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order of up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.
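The quoted diversity-order expression is easy to evaluate programmatically; a small helper (names are illustrative) makes its structure explicit:

```python
def diversity_order(l_sd, l_sr, l_rd):
    """Diversity order per the abstract's expression:
    (L_SkD + 1) + sum over relays m of min(L_SkRm + 1, L_RmD + 1),
    given the channel impulse-response lengths of each link."""
    assert len(l_sr) == len(l_rd)  # one source->relay / relay->dest pair per relay
    return (l_sd + 1) + sum(min(a + 1, b + 1) for a, b in zip(l_sr, l_rd))
```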
Tanaka, Ken'ichiro; Murashige, Sunao
2012-01-01
We present the convergence rates and the explicit error bounds of Hill's method, which is a numerical method for computing the spectra of ordinary differential operators with periodic coefficients. This method approximates the operator by a finite dimensional matrix. On the assumption that the operator is selfadjoint, it is shown that, under some conditions, we can obtain the convergence rates of eigenvalues with respect to the dimension and the explicit error bounds. Numerical examples demonstrate that we can verify these conditions using Gershgorin's theorem for some real problems.
Institute of Scientific and Technical Information of China (English)
LU; Zudi
2001-01-01
［1］Engle, R. F., Granger, C. W. J., Rice, J. et al., Semiparametric estimates of the relation between weather and electricity sales, Journal of the American Statistical Association, 1986, 81: 310.［2］Heckman, N. E., Spline smoothing in partly linear models, Journal of the Royal Statistical Society, Ser. B, 1986, 48: 244.［3］Rice, J., Convergence rates for partially splined models, Statistics & Probability Letters, 1986, 4: 203.［4］Chen, H., Convergence rates for parametric components in a partly linear model, Annals of Statistics, 1988, 16: 136.［5］Robinson, P. M., Root-n-consistent semiparametric regression, Econometrica, 1988, 56: 931.［6］Speckman, P., Kernel smoothing in partial linear models, Journal of the Royal Statistical Society, Ser. B, 1988, 50: 413.［7］Cuzick, J., Semiparametric additive regression, Journal of the Royal Statistical Society, Ser. B, 1992, 54: 831.［8］Cuzick, J., Efficient estimates in semiparametric additive regression models with unknown error distribution, Annals of Statistics, 1992, 20: 1129.［9］Chen, H., Shiau, J. H., A two-stage spline smoothing method for partially linear models, Journal of Statistical Planning & Inference, 1991, 27: 187.［10］Chen, H., Shiau, J. H., Data-driven efficient estimators for a partially linear model, Annals of Statistics, 1994, 22: 211.［11］Schick, A., Root-n consistent estimation in partly linear regression models, Statistics & Probability Letters, 1996, 28: 353.［12］Hamilton, S. A., Truong, Y. K., Local linear estimation in partly linear model, Journal of Multivariate Analysis, 1997, 60: 1.［13］Mills, T. C., The Econometric Modeling of Financial Time Series, Cambridge: Cambridge University Press, 1993, 137.［14］Engle, R. F., Autoregressive conditional heteroscedasticity with estimates of United Kingdom inflation, Econometrica, 1982, 50: 987.［15］Bera, A. K., Higgins, M. L., A survey of ARCH models: properties of estimation and testing, Journal of Economic
Directory of Open Access Journals (Sweden)
Fatemeh Vizeshfar
2015-06-01
Full Text Available Medication errors have serious consequences for patients, their families, and caregivers. Reduction of these faults by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication error in pediatric and medical wards. This cross-sectional analytic study was done on 101 registered nurses who had the duty of drug administration in medical pediatric and adult wards. Data were collected by a questionnaire including demographic information, self-reported faults, etiology of medication error, and researcher observations. The results showed that nurses' faults were 51.6% in pediatric wards and 47.4% in adult wards. The most common faults in adult wards were later or sooner drug administration (48.6%); administration of drugs without prescription and administering wrong drugs were the most common medication errors in pediatric wards (each one 49.2%). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain the reason for and type of the drug they were going to administer to patients. An independent t-test showed a significant difference in observed faults in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have shown medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
A Coded Bit-Loading Linear Precoded Discrete Multitone Solution for Power Line Communication
Muhammad, Fahad Syed; Hélard, Jean-François; Crussière, Matthieu
2008-01-01
Linear precoded discrete multitone modulation (LP-DMT) has already been proved advantageous with adaptive resource-allocation algorithms in a power line communication (PLC) context. In this paper, we investigate the bit and energy allocation algorithm of an adaptive LP-DMT system taking into account the channel coding scheme. A coded adaptive LP-DMT system is presented in the PLC context with a loading algorithm which accommodates the channel coding gains in bit and energy calculations. The performance of a concatenated channel coding scheme, consisting of an inner Wei's 4-dimensional 16-state trellis code and an outer Reed-Solomon code, in combination with the proposed algorithm is analyzed. Simulation results are presented for a fixed target bit error rate in a multicarrier scenario under a power spectral density constraint. Using a multipath model of the PLC channel, it is shown that the proposed coded adaptive LP-DMT system performs better than classical coded discrete multitone.
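Bit and energy loading of this general kind is often built on a greedy (Hughes-Hartogs-style) allocation; the sketch below shows only the uncoded core idea and omits the paper's channel-coding gains:

```python
def greedy_bit_loading(gains, target_bits, max_bits=10):
    """Greedy loading: repeatedly add one bit to the subcarrier with the
    smallest incremental energy. With QAM-style cost (2^b - 1)/g per b bits
    on a subcarrier of gain g, the increment for the (b+1)-th bit is 2^b / g.
    This is a generic sketch, not the paper's coded LP-DMT algorithm."""
    bits = [0] * len(gains)
    energy = 0.0
    for _ in range(target_bits):
        costs = [(2 ** bits[n]) / gains[n] if bits[n] < max_bits else float("inf")
                 for n in range(len(gains))]
        n = min(range(len(gains)), key=lambda i: costs[i])
        energy += costs[n]
        bits[n] += 1
    return bits, energy
```

Stronger subcarriers naturally end up carrying more bits, which is the loading behaviour adaptive DMT systems rely on.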
Bias and spread in extreme value theory measurements of probability of error
Smith, J. G.
1972-01-01
Extreme value theory is examined to explain the cause of the bias and spread in performance of communications systems characterized by low bit rates and high data reliability requirements, for cases in which underlying noise is Gaussian or perturbed Gaussian. Experimental verification is presented and procedures that minimize these effects are suggested. Even under these conditions, however, extreme value theory test results are not particularly more significant than bit error rate tests.
Directory of Open Access Journals (Sweden)
VINOTH BABU K.
2016-04-01
Full Text Available Multiple input multiple output (MIMO) and orthogonal frequency division multiplexing (OFDM) are key techniques for future wireless communication systems. Previous research in these areas has mainly concentrated on spectral-efficiency improvement, and very limited work has been done on energy-efficient transmission. In addition to spectral efficiency, energy efficiency has become an important research topic because of the slow progress of battery technology. Since most user equipments (UEs) rely on batteries, the energy required to transmit the target bits should be minimized to avoid quick battery drain. The frequency-selective fading nature of the wireless channel reduces the spectral and energy efficiency of OFDM-based systems. Dynamic bit loading (DBL) is one suitable solution to improve the spectral and energy efficiency of an OFDM system in a frequency-selective fading environment. The simple dynamic bit loading (SDBL) algorithm is identified to offer better energy efficiency with less system complexity; it is well suited for fixed-data-rate voice/video applications. When the number of target bits is much larger than the number of available subcarriers, the conventional single input single output (SISO) SDBL scheme suffers a high bit error rate (BER) and needs large transmit energy. To improve bit error performance, we combine space-frequency block codes (SFBC) with SDBL, where the adaptations are done in both the frequency and spatial domains. To further improve the quality of service (QoS), an optimal transmit antenna selection (OTAS) scheme is also combined with the SFBC-SDBL scheme. The simulation results prove that the proposed schemes offer better QoS than the conventional SISO-SDBL scheme.
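The SFBC ingredient can be illustrated with the classic Alamouti code mapped across two antennas and two adjacent subcarriers; this is a generic sketch, not the paper's full SFBC-SDBL scheme:

```python
def sfbc_encode(s1, s2):
    """Alamouti-style SFBC: per subcarrier, the tuple is (antenna 1, antenna 2).
    Subcarrier 1 sends (s1, s2); subcarrier 2 sends (-s2*, s1*)."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def sfbc_decode(r1, r2, h1, h2):
    """Linear combining with channel gains h1, h2 (assumed flat over the
    subcarrier pair); noiseless output is exactly the transmitted symbols."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1 = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2 = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1, s2
```

The combiner collects the full (|h1|^2 + |h2|^2) branch gain per symbol, which is the transmit-diversity benefit SFBC adds on top of bit loading.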
Directory of Open Access Journals (Sweden)
Casey P Durand
Full Text Available INTRODUCTION: Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. METHODS: A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. RESULTS: In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. CONCLUSIONS: Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
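The simulation design can be sketched for the dichotomous-by-dichotomous case with a known error variance (a simplification of the paper's full Monte Carlo setup; the cell effect sizes and sample sizes below are illustrative):

```python
import random

def interaction_rejection_rate(b3, n_per_cell=25, z_crit=1.96,
                               n_sims=2000, seed=11):
    """Monte Carlo rejection rate for a 2x2 interaction contrast:
    D = (m11 - m10) - (m01 - m00), z = D / sqrt(4 sigma^2 / n), sigma = 1
    known. With b3 = 0 this estimates the Type 1 error rate; with b3 > 0
    it estimates power."""
    random.seed(seed)
    rejected = 0
    for _ in range(n_sims):
        means = {}
        for x1 in (0, 1):
            for x2 in (0, 1):
                cell = [0.5 * x1 + 0.5 * x2 + b3 * x1 * x2 + random.gauss(0, 1)
                        for _ in range(n_per_cell)]
                means[(x1, x2)] = sum(cell) / n_per_cell
        d = (means[(1, 1)] - means[(1, 0)]) - (means[(0, 1)] - means[(0, 0)])
        z = d / (4.0 / n_per_cell) ** 0.5
        rejected += abs(z) > z_crit
    return rejected / n_sims
```

Raising the critical value's alpha simply widens the rejection region for both null and non-null cases, which is why it inflates false positives whenever power is already near its floor or ceiling.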
Superdense Coding Interleaved with Forward Error Correction
Directory of Open Access Journals (Sweden)
Sadlier Ronald J.
2016-01-01
Full Text Available Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated by interleaving the FEC codewords prior to transmission. We conclude that classical FEC with interleaving is a useful method to improve performance in near-term demonstrations of superdense coding.
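The effect of interleaving on burst errors can be demonstrated with a block interleaver and a rate-1/3 repetition code standing in for the FEC (the study's actual codes are not specified here; this is a minimal illustration):

```python
def interleave(bits, rows, cols):
    """Block interleaver: write row-wise, read column-wise."""
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    return interleave(bits, cols, rows)  # the inverse is the transpose

def repeat3_decode(stream):
    """Majority vote over a rate-1/3 repetition code."""
    return [int(sum(stream[i:i + 3]) >= 2) for i in range(0, len(stream), 3)]

# Three message bits, each repeated 3x, hit by a 3-bit burst at the start.
tx = [b for bit in (1, 0, 1) for b in (bit,) * 3]
burst = lambda s: [b ^ 1 if i < 3 else b for i, b in enumerate(s)]
plain = repeat3_decode(burst(tx))                      # burst wipes codeword 0
ilv = deinterleave(burst(interleave(tx, 3, 3)), 3, 3)  # burst spread out
```

Without interleaving the burst destroys all three copies of one bit; with interleaving each codeword absorbs only one flipped bit, which majority voting corrects.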
Directory of Open Access Journals (Sweden)
P. N. V. M SASTRY
2015-01-01
Full Text Available The aim is the HDL design and implementation of an exabit-rate multichannel 64:1 LVDS data serializer and deserializer ASIC array card for ultra-high-speed wireless communication products such as network-on-chip routers, data-bus communication interfaces, cloud computing networks, and zettabit Ethernet at zettabit data-transfer speeds. The serializer array converts a 64-bit parallel data array into serial form on the transmitter side; transmission is done through a high-speed wireless serial communication link; and the same serial data are converted back into a parallel data array on the receiver side by the deserializer array ASIC without noise. Jitter tolerance, eye diagrams, and bit error rate are measured through an analyzer. This LVDS data SER/DES is mainly used in high-speed bus communication protocol transceivers and interface FPGA add-on cards. The design is implemented in Verilog HDL/VHDL, with programming and debugging done on the latest FPGA boards.
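Functionally, a 64:1 serializer/deserializer pair is a parallel-to-serial bit mapping and its inverse; a behavioral sketch in Python (the real design is HDL, so this only models the data path):

```python
def serialize64(word):
    """64:1 serializer: emit the bits of a 64-bit parallel word MSB-first."""
    assert 0 <= word < 1 << 64
    return [(word >> (63 - i)) & 1 for i in range(64)]

def deserialize64(bits):
    """Rebuild the 64-bit parallel word from the serial bit stream."""
    word = 0
    for b in bits:
        word = (word << 1) | b
    return word
```

A lossless round trip (deserialize(serialize(w)) == w) is exactly the property a SER/DES link must preserve; real hardware adds clock recovery and line coding on top.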
Numerical optimization of writer geometries for bit patterned magnetic recording
Kovacs, A; Bance, S; Fischbacher, J; Gusenbauer, M; Reichel, F; Exl, L; Schrefl, T; Schabes, M E
2015-01-01
A fully-automated pole-tip shape optimization tool, involving write head geometry construction, meshing, micromagnetic simulation and evaluation, is presented. Optimizations have been performed for three different writing schemes (centered, staggered and shingled) for an underlying bit patterned media with an areal density of 2.12 Tdots/in$^2$ . Optimizations were performed for a single-phase media with 10 nm thickness and a mag spacing of 8 nm. From the computed write field and its gradient and the minimum energy barrier during writing for islands on the adjacent track, the overall write error rate is computed. The overall write errors are 0.7, 0.08, and 2.8 x 10$^{-5}$ for centered writing, staggered writing, and shingled writing.
Bit Loading Algorithms for Cooperative OFDM Systems
Directory of Open Access Journals (Sweden)
Gui Bo
2008-01-01
Full Text Available We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.
Tamaki, Hirofumi; Satoh, Hiroki; Hori, Satoko; Sawada, Yasufumi
2012-01-01
Confusion of drug names is one of the most common causes of drug-related medical errors. A similarity measure of drug names, "vwhtfrag", was developed to discriminate whether drug name pairs are likely to cause confusion errors, and to provide information that would be helpful to avoid errors. The aim of the present study was to evaluate and improve vwhtfrag. Firstly, we evaluated the correlation of vwhtfrag with subjective similarity or error rate of drug name pairs in psychological experiments. Vwhtfrag showed a higher correlation to subjective similarity (college students: r=0.84) or error rate than did other conventional similarity measures (htco, cos1, edit). Moreover, name pairs that showed coincidences of the initial character strings had a higher subjective similarity than those which had coincidences of the end character strings and had the same vwhtfrag. Therefore, we developed a new similarity measure (vwhtfrag+), in which coincidence of initial character strings in name pairs is weighted by 1.53 times over coincidence of end character strings. Vwhtfrag+ showed a higher correlation to subjective similarity than did unmodified vwhtfrag. Further studies appear warranted to examine in detail whether vwhtfrag+ has superior ability to discriminate drug name pairs likely to cause confusion errors.
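The idea behind vwhtfrag+ (weighting initial-character matches 1.53 times as much as end-character matches) can be illustrated with a toy similarity measure; this is not the published formula, only a sketch of the weighting:

```python
def prefix_weighted_similarity(a, b, head_weight=1.53):
    """Illustrative name similarity: matching characters at the start of the
    pair count head_weight times as much as matching characters at the end.
    Not the actual vwhtfrag+ definition."""
    prefix = next((i for i, (x, y) in enumerate(zip(a, b)) if x != y),
                  min(len(a), len(b)))
    suffix = next((i for i, (x, y) in enumerate(zip(reversed(a), reversed(b)))
                   if x != y), min(len(a), len(b)))
    suffix = min(suffix, min(len(a), len(b)) - prefix)  # avoid double-counting
    norm = head_weight * max(len(a), len(b))
    return (head_weight * prefix + suffix) / norm
```

With this weighting, a pair sharing a 4-character prefix scores higher than a pair of the same lengths sharing a 4-character suffix, matching the psychological finding the abstract reports.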
Tanaka, Ken'ichiro
2012-01-01
We present the convergence rates and the explicit error bounds of Hill's method, which is a numerical method for computing the spectra of ordinary differential operators with periodic coefficients. This method approximates the operator by a finite dimensional matrix. On the assumption that the operator is selfadjoint, it is shown that, under some conditions, we can obtain the convergence rates of eigenvalues with respect to the dimension and the explicit error bounds. Numerical examples demonstrate that we can verify these conditions using Gershgorin's theorem for some real problems. Main theorems are proved using the Dunford integrals which project an eigenvector to the corresponding eigenspace.
Step angles to reduce the north-finding error caused by rate random walk with fiber optic gyroscope.
Wang, Qin; Xie, Jun; Yang, Chuanchuan; He, Changhong; Wang, Xinyue; Wang, Ziyu
2015-10-20
We study the relationship between step angles and the accuracy of north finding with fiber optic gyroscopes. A north-finding method with optimized step angles is proposed to reduce the errors caused by rate random walk (RRW). Based on this method, the errors caused by both angle random walk and RRW are reduced by increasing the number of positions. When the number of positions is even, we propose a north-finding method with symmetric step angles that can reduce the error caused by RRW and is not affected by the azimuth angle. Experimental results show that, compared with the traditional north-finding method, the proposed methods with optimized step angles and symmetric step angles reduce the north-finding errors by 67.5% and 62.5%, respectively. The method with symmetric step angles is not affected by the azimuth angle and offers consistently high accuracy for any azimuth angle.
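In a simplified noise-free model, multiposition north finding reduces to fitting y_i = K cos(A + theta_i) at evenly spaced step angles and reading off the azimuth A (a toy sketch; the paper's RRW error model is not included):

```python
import math

def estimate_azimuth(readings, step_angles):
    """Least-squares azimuth from gyro readings y_i = K*cos(A + theta_i) at
    evenly spaced step angles. Expanding, y = K*cosA*cos(theta) -
    K*sinA*sin(theta), so projecting onto cos and sin gives c = K*cosA and
    s = -K*sinA, and A = atan2(-s, c). K (Earth-rate projection) cancels."""
    n = len(readings)
    c = 2.0 / n * sum(y * math.cos(t) for y, t in zip(readings, step_angles))
    s = 2.0 / n * sum(y * math.sin(t) for y, t in zip(readings, step_angles))
    return math.atan2(-s, c)
```

The even spacing matters: it makes the cos/sin projections orthogonal, which is also what makes symmetric step-angle schemes insensitive to the azimuth itself.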
Improved low bit-rate image coding scheme based on human visual system
Institute of Scientific and Technical Information of China (English)
王力; 王向阳
2011-01-01
An improved low bit-rate SPIHT algorithm is proposed to improve coding speed and visual quality. The lifting wavelet transform is first applied to the original image; then, exploiting the masking characteristics of the Human Visual System (HVS), the wavelet coefficients corresponding to image information in different regions are given different visual weights according to their perceptual importance, so that the visually most important coefficients are transmitted first. Finally, based on coefficient significance, a new partitioning structure is used for the low-frequency coefficients and SPIHT coding completes the compression. Experimental results show that the modified image compression scheme provides higher perceptual quality and higher PSNR than SPIHT, especially at low bit rates.
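The building blocks (wavelet decomposition plus perceptual weighting of coefficients before coding) can be sketched with a one-level Haar transform; the HVS weight below is an arbitrary illustrative value, not one from the paper:

```python
def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform (even-length 1-D)."""
    r = 2 ** -0.5
    approx = [(x[i] + x[i + 1]) * r for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) * r for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    r = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * r, (a - d) * r]
    return out

def hvs_weighted_quantize(detail, weight, step):
    """Scale detail coefficients by a perceptual weight before uniform
    quantization, so visually important bands keep more precision."""
    return [round(d * weight / step) for d in detail]
```

Without quantization the transform is perfectly invertible; the perceptual weights only change how coarsely each band is quantized, which is how HVS-weighted coders steer bits toward visually important coefficients.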
Temporal Error Concealment Technique for MPEG-4 Video Streams
Institute of Scientific and Technical Information of China (English)
DING Xuewen; YANG Zhaoxuan; GUO Yingchun
2006-01-01
Concerning the inter4v mode employed widely in MPEG-4 video, a new temporal error concealment scheme for MPEG-4 video sequences is proposed, which can selectively interpolate one or four motion vectors (MVs) for the missing macroblock (MB) according to the estimated MB coding mode. Performance of the proposed scheme is compared with existing schemes on multiple test sequences at different bit error rates. Experimental results show that the proposed algorithm can mask the impairments caused by transmission errors more efficiently than the zero-MV and average-MV methods, at the cost of more computation time. It has an acceptable image quality close to that obtained by the selective motion vector matching (SMVM) algorithm, while taking less than half the operation cycles. The proposed concealment scheme is suitable for low-complexity real-time video implementations.
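The two baseline concealment methods the scheme is compared against (zero MV and average MV) amount to the following (a minimal sketch; the proposed mode-dependent interpolation is more elaborate):

```python
def conceal_mv(neighbor_mvs, mode="average"):
    """Estimate the lost macroblock's motion vector from its neighbours:
    'zero' returns (0, 0); 'average' is the component-wise mean of the
    neighbouring MVs. A sketch of the two baseline methods only."""
    if mode == "zero" or not neighbor_mvs:
        return (0, 0)
    n = len(neighbor_mvs)
    return (sum(mv[0] for mv in neighbor_mvs) / n,
            sum(mv[1] for mv in neighbor_mvs) / n)
```

The concealed MV is then used to copy the motion-compensated block from the previous frame in place of the missing macroblock.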
Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J
2014-01-01
Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.
Institute of Scientific and Technical Information of China (English)
Yang Yukun; Han Tao
1995-01-01
The geologic conditions of Shengli Oilfield (SLOF) are complicated and the range of rock drillability is wide. For more than 20 years, Shengli Drilling Technology Research Institute, in view of the formation conditions of SLOF, has made great efforts and obtained many achievements in design, manufacturing technology and field service. Up to now, the institute has developed several dozen kinds of diamond bits applicable for drilling and coring in formations from extremely soft to hard.
Serialized quantum error correction protocol for high-bandwidth quantum repeaters
Glaudell, A. N.; Waks, E.; Taylor, J. M.
2016-09-01
Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have losses low enough to be correctable using a quantum error correcting code (QECC). Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. Here we show that long-range entangled bit generation with rates approaching 10^8 entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various QECCs. In particular, at a total distance of 500 km with fiber loss rates of 0.3 dB km^-1, logical gate failure probabilities of 10^-5, photon creation and measurement error rates of 10^-5, and a gate speed of 80 ps, we find the maximum single repeater chain entangled bit rates of 51 Hz at a 20 m node spacing and 190 000 Hz at a 43 m node spacing for the {[[3,1,2
Kim, Jihye
2010-01-01
In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present, and a high rate of such errors can weaken the validity of the assessment.…
Fountain, Emily D.; Pauli, Jonathan N.; Reid, Brendan N.; Palsboll, Per J.; Peery, M. Zachariah
2016-01-01
Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.
de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente
2016-07-08
Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and [Formula: see text]), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of studies it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we put special attention to situations when only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
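The resampling estimators compared above can be illustrated with a minimal sketch. The example below estimates the misclassification error rate of a toy nearest-centroid classifier by leave-one-out cross-validation; the data, the classifier, and the class separation are invented for illustration and do not reproduce the authors' non-linear mixed-effects setting:

```python
import numpy as np

def loo_error_rate(X, y):
    """Leave-one-out estimate of the misclassification error rate
    for a toy nearest-centroid classifier (illustrative only)."""
    n = len(y)
    errors = 0
    for i in range(n):
        mask = np.arange(n) != i          # hold out subject i
        mu0 = X[mask & (y == 0)].mean(axis=0)
        mu1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - mu1) < np.linalg.norm(X[i] - mu0))
        errors += (pred != y[i])
    return errors / n

rng = np.random.default_rng(0)
# two well-separated synthetic classes -> small estimated error rate
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(loo_error_rate(X, y))
```

With small samples, the variability of such estimates grows quickly, which is the regime the paper examines.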
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Demonstration of a Bit-Flip Correction for Enhanced Sensitivity Measurements
Cohen, L; Istrati, D; Retzker, A; Eisenberg, H S
2016-01-01
The sensitivity of classical and quantum sensing is impaired in a noisy environment. Thus, one of the main challenges facing sensing protocols is to reduce the noise while preserving the signal. State of the art quantum sensing protocols that rely on dynamical decoupling achieve this goal under the restriction of long noise correlation times. We implement a proof of principle experiment of a protocol to recover sensitivity by using an error correction for photonic systems that does not have this restriction. The protocol uses a protected entangled qubit to correct a bit-flip error. Our results show a recovery of about 87% of the sensitivity, independent of the noise rate.
Soury, Hamza
2014-06-01
This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise, using a minimum distance detector. A generic closed-form expression of the conditional and average probability of error is obtained and simplified in terms of the Fox H function. Further simplifications to well-known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with numerical examples obtained by computer-based simulations. © 2014 IEEE.
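Closed-form error-rate expressions like the one above are typically validated against Monte Carlo simulation. The sketch below estimates the symbol error rate of an M-PSK constellation with a minimum-distance detector under additive Laplacian noise; modelling the noise as independent Laplacian on each quadrature, and omitting the fading, are assumptions of this example rather than the paper's exact channel:

```python
import numpy as np

def mpsk_ser(M, snr_db, n=100_000, rng=None):
    """Monte Carlo symbol error rate of M-PSK with a minimum-distance
    detector under per-quadrature Laplacian noise (illustrative model)."""
    rng = rng or np.random.default_rng(1)
    const = np.exp(2j * np.pi * np.arange(M) / M)   # unit-energy M-PSK points
    tx = rng.integers(0, M, n)
    sigma = np.sqrt(10 ** (-snr_db / 10) / 2)       # per-dimension noise std
    b = sigma / np.sqrt(2)                          # Laplace scale: var = 2 b^2
    noise = rng.laplace(0, b, n) + 1j * rng.laplace(0, b, n)
    rx = const[tx] + noise
    # minimum-distance detection against the full constellation
    detected = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)
    return np.mean(detected != tx)

for snr in (5, 10, 15):
    print(snr, mpsk_ser(4, snr))   # SER falls as SNR grows
```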
Energy Technology Data Exchange (ETDEWEB)
Wojtas, H
2004-07-01
The main source of errors in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of corrosion rate may lead to serious errors when test conditions change. When high corrosion activity of the rebar and/or local corrosion occurs, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.
Groen, Yvonne; Mulder, Lambertus J. M.; Wijers, Albertus A.; Minderaa, Ruud B.; Althaus, Monika
2009-01-01
Attention Deficit Hyperactivity Disorder (ADHD) is a developmental disorder that has previously been related to a decreased sensitivity to errors and feedback. Supplementary to the traditional performance measures, this study uses autonomic measures to study this decreased sensitivity in ADHD and th
Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models
DEFF Research Database (Denmark)
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl
focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...
Sharp threshold detection based on sup-norm error rates in high-dimensional models
DEFF Research Database (Denmark)
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl
2017-01-01
almost exclusively on ℓ1 and ℓ2 estimation errors. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent variable...
Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)
2001-01-01
A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.
DEFF Research Database (Denmark)
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
2017-01-01
Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients....... Reviews have suggested that up to 50% of the adverse events in the medication process may be preventable. Thus the medication process is an important means to improve safety. Purpose The objective of this study was to evaluate the effectiveness of two automated medication systems in reducing...... the medication administration error rate in comparison with current practice. Material and methods This was a controlled before and after study with follow-up after 7 and 14 months. The study was conducted in two acute medical hospital wards. Two automated medication systems were tested: (1) automated dispensing...
Institute of Scientific and Technical Information of China (English)
李松斌; 黄永峰; 卢记仓
2013-01-01
QIM (Quantization Index Modulation) steganography embeds secret information during vector quantization and can hide information in low bit-rate speech codecs with high imperceptibility. This paper aims to detect this type of steganography. Starting from speech generation and compression coding theory, it first analyzes the feature degradation that QIM steganography may introduce into the compressed audio stream, and finds that the embedding disturbs the phoneme sequence, inevitably changing the imbalance and correlation characteristics of the phoneme distribution. Based on this observation, the phoneme distribution characteristics are adopted as the key to detection. To obtain quantitative features, a Phoneme Vector Space Model and a Phoneme State Transition Model are designed to capture the imbalance and correlation characteristics, respectively. Combining these quantitative features with an SVM (Support Vector Machine) classifier, a high-performance detector for QIM steganography in low bit-rate speech codecs is built. Experiments on the two typical low bit-rate speech coding standards G.729 and G.723.1 show that the proposed method performs far better than existing detection methods and achieves fast, accurate detection of QIM steganography.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Error rates, PCR recombination, and sampling depth in HIV-1 whole genome deep sequencing.
Zanini, Fabio; Brodin, Johanna; Albert, Jan; Neher, Richard A
2016-12-27
Deep sequencing is a powerful and cost-effective tool to characterize the genetic diversity and evolution of virus populations. While modern sequencing instruments readily cover viral genomes many thousand fold and very rare variants can in principle be detected, sequencing errors, amplification biases, and other artifacts can limit sensitivity and complicate data interpretation. For this reason, the number of studies using whole genome deep sequencing to characterize viral quasi-species in clinical samples is still limited. We have previously undertaken a large scale whole genome deep sequencing study of HIV-1 populations. Here we discuss the challenges, error profiles, control experiments, and computational tests we developed to quantify the accuracy of variant frequency estimation.
Error Protection for Scalable Image Over 3G-IP Network
Institute of Scientific and Technical Information of China (English)
WANGGuijin; LINXinggang
2003-01-01
Transmitting digital media, like image and video, over third-generation wireless networks is a challenging task because these networks suffer not only packet loss but also bit errors. To address this problem, this paper proposes a novel error protection scheme for scalable images over 3G-IP networks. Taking into consideration the scalable nature of the image data, error protection is provided by layered product channel codes to mitigate the effects of packet loss and bit errors. Meanwhile, rate-distortion optimization is used to determine the protection levels of both the row and column codes so as to minimize the expected end-to-end distortion. Simulation results show that the proposed approach is efficient over a wide range of bit budgets and under different channel conditions.
Adaptive Error Resilience for Video Streaming
Directory of Open Access Journals (Sweden)
Lakshmi R. Siruvuri
2009-01-01
Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge regarding current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system composed of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
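The feedback-driven adaptation described above can be sketched as a simple rule for choosing the Reed-Solomon parity budget. The block size, margin factor, and clamping bounds below are invented for illustration; only the standard fact that an RS code with p parity symbols corrects up to p//2 symbol errors is relied on:

```python
import numpy as np

def adaptive_parity(loss_feedback, n_block=255, margin=1.5, p_min=2, p_max=64):
    """Choose the number of Reed-Solomon parity symbols from receiver
    feedback (a sketch): an RS(n, n-p) code corrects up to p//2 symbol
    errors, so provision ~margin times the observed error count."""
    expected_errors = loss_feedback * n_block          # errors expected per block
    p = int(np.ceil(2 * margin * expected_errors))     # parity needed to correct them
    return max(p_min, min(p_max, p + p % 2))           # round up to even, clamp

print(adaptive_parity(0.01))   # mild channel -> 8 parity symbols
print(adaptive_parity(0.05))   # harsher channel -> 40 parity symbols
```

A harsher reported channel yields a larger parity budget, at the cost of bandwidth, which is the trade-off the feedback loop manages.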
Robust relativistic bit commitment
Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony
2016-12-01
Relativistic cryptography exploits the fact that no information can travel faster than the speed of light in order to obtain security guarantees that cannot be achieved from the laws of quantum mechanics alone. Recently, Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015), 10.1103/PhysRevLett.115.030502] presented a bit-commitment scheme where each party uses two agents that exchange classical information in a synchronized fashion, and that is both hiding and binding. A caveat is that the commitment time is intrinsically limited by the spatial configuration of the players, and increasing this time requires the agents to exchange messages during the whole duration of the protocol. While such a solution remains computationally attractive, its practicality is severely limited in realistic settings since all communication must remain perfectly synchronized at all times. In this work, we introduce a robust protocol for relativistic bit commitment that tolerates failures of the classical communication network. This is done by adding a third agent to both parties. Our scheme provides a quadratic improvement in terms of expected sustain time compared with the original protocol, while retaining the same level of security.
Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor
2016-01-01
When an Unmanned Aircraft System (UAS) encounters an intruder and is unable to maintain required temporal and spatial separation between the two vehicles, it is referred to as a loss of well-clear. In this state, the UAS must make its best attempt to regain separation while maximizing the minimum separation between itself and the intruder. When encountering a non-cooperative intruder (an aircraft operating under visual flight rules without ADS-B or an active transponder), the UAS must rely on the radar system to provide the intruder's location, velocity, and heading information. As many UAS have limited climb and descent performance, vertical position and/or vertical rate errors make it difficult to determine whether an intruder will pass above or below them. To account for that, there is a proposal by RTCA Special Committee 228 to prohibit guidance systems from providing vertical guidance to regain well-clear to UAS in an encounter with a non-cooperative intruder unless their radar system has vertical position error below 175 feet (95%) and vertical velocity errors below 200 fpm (95%). Two sets of fast-time parametric studies were conducted, each with 54,000 pairwise encounters between a UAS and a non-cooperative intruder, to determine the suitability of offering vertical guidance to regain well-clear to a UAS in the presence of radar sensor noise. The UAS was not allowed to maneuver until it received well-clear recovery guidance. The maximum severity of the loss of well-clear was logged and used as the primary indicator of the separation achieved by the UAS. One set of 54,000 encounters allowed the UAS to maneuver either vertically or horizontally, while the second permitted horizontal maneuvers only. Comparing the two data sets allowed researchers to see the effect of allowing vertical guidance to a UAS for a particular encounter and vertical rate error. Study results show there is a small reduction in the average severity of a loss of well-clear when vertical maneuvers
The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded
DEFF Research Database (Denmark)
Hansen, Merete Kjær; Kulahci, Murat
The Comet assay is a sensitive technique for the detection of DNA strand breaks. The experimental design of in vivo Comet assay studies is often hierarchically structured, which should be reflected in the statistical analysis. However, the hierarchical structure sometimes seems to be disregarded…, and this has a considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51, and for all combinations… the exposition of the statistical methodology and to suitably account for the hierarchical structure of Comet assay data whenever present…
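The inflation described above is easy to reproduce in a toy simulation: cells nested within animals, a true null hypothesis, and a naive cell-level test that treats all cells as independent. All variance components and sample sizes below are assumptions chosen for illustration, not values from the study:

```python
import numpy as np

def sim_type1(n_animals=5, n_cells=50, sd_animal=1.0, sd_cell=1.0,
              n_sim=2000, rng=None):
    """Fraction of null simulations in which a naive cell-level z-test
    rejects at the nominal ~5% level, ignoring that cells are nested
    within animals (illustrative sketch of the hierarchy problem)."""
    rng = rng or np.random.default_rng(2)
    rejections = 0
    for _ in range(n_sim):
        def group():
            a = rng.normal(0, sd_animal, n_animals)   # shared animal effects
            return (a[:, None] + rng.normal(0, sd_cell, (n_animals, n_cells))).ravel()
        x, y = group(), group()
        # standard error computed as if all cells were independent
        se = np.sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)
        z = (x.mean() - y.mean()) / se
        rejections += abs(z) > 1.96
    return rejections / n_sim

print(sim_type1())   # far above the nominal 0.05
```

Because the animal effects are shared by all cells in a group, the naive standard error is far too small, and the type I error rate climbs well above 5%, mirroring the rates of up to 0.51 reported in the study.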
Carrier Synchronization for 3-and 4-bit-per-Symbol Optical Transmission
Ip, Ezra; Kahn, Joseph M.
2005-12-01
We investigate carrier synchronization for coherent detection of optical signals encoding 3 and 4 bits/symbol. We consider the effects of laser phase noise and of additive white Gaussian noise (AWGN), which can arise from local oscillator (LO) shot noise or LO-spontaneous beat noise. We identify 8- and 16-ary quadrature amplitude modulation (QAM) schemes that perform well when the receiver phase-locked loop (PLL) tracks the instantaneous signal phase with moderate phase error. We propose implementations of 8- and 16-QAM transmitters using Mach-Zehnder (MZ) modulators. We outline a numerical method for computing the bit error rate (BER) of 8- and 16-QAM in the presence of AWGN and phase error. It is found that these schemes can tolerate phase-error standard deviations of 2.48° and 1.24°, respectively, for a power penalty of 0.5 dB at a BER of 10^-9. We propose a suitable PLL design and analyze its performance, taking account of laser phase noise, AWGN, and propagation delay within the PLL. Our analysis shows that the phase error depends on the constellation penalty, which is the mean power of constellation symbols times the mean inverse power. We establish a procedure for finding the optimal PLL natural frequency, and determine tolerable laser linewidths and PLL propagation delays. For zero propagation delay, 8- and 16-QAM can tolerate linewidth-to-bit-rate ratios of 1.8 × 10^-5 and 1.4 × 10^-6, respectively, assuming a total penalty of 1.0 dB.
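The penalty mechanism analysed above can be illustrated by Monte Carlo simulation. The sketch below estimates the 16-QAM symbol error rate under AWGN plus a Gaussian residual PLL phase error; the 10° jitter used is deliberately far larger than the tolerances derived in the paper, purely to make the degradation visible:

```python
import numpy as np

def qam16_ser(snr_db, phase_std_deg, n=100_000, rng=None):
    """Monte Carlo symbol error rate of 16-QAM under AWGN plus a
    zero-mean Gaussian residual phase error (illustrative model)."""
    rng = rng or np.random.default_rng(3)
    levels = np.array([-3, -1, 1, 3])
    const = (levels[:, None] + 1j * levels[None, :]).ravel()
    const = const / np.sqrt((np.abs(const) ** 2).mean())   # unit average energy
    tx = rng.integers(0, 16, n)
    sigma = np.sqrt(10 ** (-snr_db / 10) / 2)              # per-dimension noise std
    noise = sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))
    phase = np.deg2rad(rng.normal(0, phase_std_deg, n))    # residual PLL phase error
    rx = const[tx] * np.exp(1j * phase) + noise
    detected = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)
    return np.mean(detected != tx)

print(qam16_ser(20, 0.0))    # AWGN only
print(qam16_ser(20, 10.0))   # large residual phase jitter degrades the SER
```

The outer (high-power) constellation points are rotated furthest by a given phase error, which is the geometric origin of the constellation penalty discussed in the paper.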
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
Evaluation of Bit Preservation Strategies
DEFF Research Database (Denmark)
Zierau, Eld; Kejser, Ulla Bøgvad; Kulovits, Hannes
2010-01-01
This article describes a methodology which supports evaluation of bit preservation strategies for different digital materials. This includes evaluation of alternative bit preservation solutions. The methodology presented uses the preservation planning tool Plato for evaluations, and a BR…… for different digital material with different requirements for bit integrity and confidentiality. This case shows that the methodology, including the tools used, is useful for this purpose….
16-Bit DAC Design, Simulation and Layout
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
The high-speed, high-precision 16-bit DAC will be applied in the DSP (Digital Signal Processing) based control system of the CSR pulsed power supply. In this application the DAC is required to work at a 1 μs data conversion rate, with 16-bit resolution and a 10 V output voltage.
Directory of Open Access Journals (Sweden)
Boulesteix Anne-Laure
2009-12-01
Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
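The selection bias quantified above is easy to demonstrate: on pure-noise data, the minimum error rate over many tried "methods" falls well below the 50% that any single method achieves on average. The random linear rules below are an invented stand-in for the paper's 124 classifier variants:

```python
import numpy as np

def minimal_error_over_classifiers(n=60, p=20, n_classifiers=124, rng=None):
    """Minimal apparent error rate over many classifiers applied to data
    whose labels are random, i.e. carry no signal (illustrative sketch)."""
    rng = rng or np.random.default_rng(4)
    X = rng.normal(size=(n, p))
    y = rng.integers(0, 2, n)                 # permuted/uninformative labels
    errors = []
    for _ in range(n_classifiers):
        w = rng.normal(size=p)                # one "method" tried on the data
        pred = (X @ w > 0).astype(int)
        errors.append(np.mean(pred != y))
    return min(errors)

print(minimal_error_over_classifiers())   # clearly below 0.5
```

Reporting only this minimum, as a trial-and-error strategy implicitly does, badly overstates the achievable accuracy.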
Error Rate Improvement in Underwater MIMO Communications Using Sparse Partial Response Equalization
2006-09-01
R_i(n) = \sum_{k=1}^{n} \lambda^{n-k} v_i(k) v_i^H(k)  (13) and \theta_i(n) = \sum_{k=1}^{n} \lambda^{n-k} v_i(k) x_i^{(s)H}(k)  (14) are the (time-averaged) output correlation matrix and the input-output cross-correlation … error vector [5] and K_i(n) is the RLS gain, defined via \alpha_i(n) = x_i^{(s)}(n) - c_i^H(n-1) v_i(n)  (17) and K_i(n) = P_i(n-1) v_i(n) / (\lambda_i + v_i^H(n) P_i(n-1) v_i(n))  (18). Using equations (13), (14), and the matrix inversion lemma [5], the inverse correlation matrix P_i(n) can be updated as P_i(n) = \lambda_i^{-1} [I - K_i(n) v_i^H(n)] P_i(n-1).
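Equations (13)-(18) are the standard RLS recursion. A compact NumPy sketch is below, with an invented channel-identification example rather than the paper's underwater MIMO setup:

```python
import numpy as np

def rls_equalizer(v, x, lam=0.99, delta=100.0):
    """RLS tap update in the style of Eqs. (13)-(18): v[n] are regressor
    vectors, x[n] the desired symbols; P tracks the inverse correlation
    matrix via the matrix inversion lemma. Returns the final taps c."""
    p = v.shape[1]
    c = np.zeros(p, dtype=complex)
    P = delta * np.eye(p, dtype=complex)            # large initial P (weak prior)
    for n in range(v.shape[0]):
        vn = v[n]
        K = P @ vn / (lam + vn.conj() @ P @ vn)     # RLS gain, cf. Eq. (18)
        alpha = x[n] - c.conj() @ vn                # a priori error, cf. Eq. (17)
        c = c + K * np.conj(alpha)                  # tap update
        P = (P - np.outer(K, vn.conj() @ P)) / lam  # inverse-correlation update
    return c

# identify a known 2-tap channel from noisy observations (toy example)
rng = np.random.default_rng(5)
h = np.array([1.0, 0.5])
s = rng.choice([-1.0, 1.0], 500)
V = np.column_stack([s, np.roll(s, 1)])
d = V @ h + 0.01 * rng.normal(size=500)
print(rls_equalizer(V, d, lam=0.99).real)   # taps converge near [1.0, 0.5]
```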
Directory of Open Access Journals (Sweden)
Bentsen R. G.
2006-12-01
Full Text Available Indirect methods are commonly employed to determine the fundamental flow properties needed to describe flow through porous media. Consequently, if one or more of the postulates underlying the mathematical description of such indirect methods is invalid, significant model error can be introduced into the measured value of the flow property. In particular, this study shows that effective mobility curves that include the effect of viscous coupling between fluid phases differ significantly from those that exclude such coupling. Moreover, it is shown that the conventional effective mobilities that pertain to steady-state, cocurrent flow, steady-state, countercurrent flow and pure countercurrent imbibition differ significantly. Thus, it appears that traditional effective mobilities are not true parameters; rather, they are infinitely nonunique. In addition, it is shown that, while neglect of hydrodynamic forces introduces a small amount of model error into the pressure difference curve for cocurrent flow in unconsolidated porous media, such neglect introduces a large amount of model error into the pressure difference curve for countercurrent flow in such porous media. Moreover, such neglect makes it difficult to explain why the pressure gradients that pertain to steady-state, countercurrent flow are opposite in sign. It is shown also that improper handling of the inlet boundary condition can introduce significant model error into the analysis. This is because, if a short core is used with one of the unsteady-state methods for determining effective mobility, it may take many pore volumes of injection before the inlet saturation rises to its maximal value, which is in contradiction with the usual assumption that the inlet saturation rises immediately to its maximal value. Finally, it is pointed out that, because of differences in flow regime and scale, the effective mobilities measured in the laboratory may not be appropriate for inclusion in the data
The effect of administrative boundaries and geocoding error on cancer rates in California.
Goldberg, Daniel W; Cockburn, Myles G
2012-04-01
Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods.
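The areal-weighting member of that family of interpolation techniques can be sketched with toy numbers; the ZIP codes, overlap fractions, and populations below are invented for illustration and are not the paper's data.

```python
# Toy areal-weighting interpolation: allocate ZIP-level case counts to
# counties in proportion to area overlap (one of the four interpolation
# families compared in the paper; all numbers here are made up).
zip_cases = {"90001": 120, "90002": 80}
# fraction of each ZIP code's area falling inside each county
overlap = {("90001", "A"): 0.7, ("90001", "B"): 0.3, ("90002", "B"): 1.0}

county_cases = {}
for (zip_code, county), frac in overlap.items():
    county_cases[county] = county_cases.get(county, 0) + zip_cases[zip_code] * frac

county_pop = {"A": 50_000, "B": 40_000}
rates = {c: 1e5 * county_cases[c] / county_pop[c] for c in county_cases}
print(county_cases, rates)  # {'A': 84.0, 'B': 116.0} {'A': 168.0, 'B': 290.0}
```

Different choices of the overlap weights (area, population, road density) are what produce the county-to-county rate differences the study reports.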
Giga-bit optical data transmission module for Beam Instrumentation
Roedne, L T; Cenkeramaddi, L R; Jiao, L
Particle accelerators require electronic instrumentation for diagnostics, assessment and monitoring during operation of the transferring and circulating beams. A sensor located near the beam provides an electrical signal related to the observable quantity of interest. The front-end electronics provides analog-to-digital conversion of the quantity being observed, and the generated data are transferred to the external digital back-end for data processing, display to the operators, and logging. This research project investigates the feasibility of radiation-tolerant giga-bit data transmission over optical fibre for beam instrumentation applications, starting from an assessment of state-of-the-art technology, identification of challenges, and a proposal of a system-level solution, which should be validated with a PCB design in an experimental setup. The targets are radiation tolerance of 10 kGy (Si) Total Ionizing Dose (TID) over 10 years of operation and a Bit Error Rate (BER) of 10^-6 or better. The findings and results of th...
Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal
2016-09-30
Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons: dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
Parkash, Vinita; Fadare, Oluwole; Dewar, Rajan; Nakhleh, Raouf; Cooper, Kumarasen
2017-03-01
A repeat survey of the Association of the Directors of Anatomic and Surgical Pathology, done 10 years after the original, was used to assess trends and variability in classifying scenarios as errors, and the preferred post-signout report modification for correcting error, among the membership of the Association of the Directors of Anatomic and Surgical Pathology. The results were analyzed to determine whether interpretive amendment rates might act as surrogate measures of interpretive error in pathology. An analysis of the responses indicated that primary-level misinterpretations (benign to malignant and vice versa) were universally qualified as error; secondary-level misinterpretations or misclassifications were inconsistently labeled error. There was added variability in the preferred post-signout report modification used to correct report alterations. The classification of a scenario as error appeared to correlate with the severity of potential harm of the missed call, the perceived subjectivity of the diagnosis, and the ambiguity of reporting terminology. Substantial differences in policies for error detection and optimal reporting format were documented between departments. In conclusion, the inconsistency in labeling scenarios as error, disagreement about the optimal post-signout report modification for the correction of error, and variability in error-detection policies preclude the use of the misinterpretation amendment rate as a surrogate measure for error in anatomic pathology. There has been little change in the uniformity of definition, attitudes, and perception of interpretive error in anatomic pathology in the last 10 years.
Joint adaptive modulation and diversity combining with feedback error compensation
Choi, Seyeong
2009-11-01
This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.
Energy Technology Data Exchange (ETDEWEB)
Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
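The Hamming branch of the error-correction idea above can be made concrete with a minimal Hamming(7,4) single-error-correcting sketch; this is an illustrative reconstruction, not the paper's actual code or coding parameters.

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword, corrects any single bit flip.
# Codeword layout (1-based positions): [p1, p2, d1, p3, d2, d3, d4].

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 = none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

cw = hamming74_encode([1, 0, 1, 1])
cw[4] ^= 1                            # flip one bit "in the channel"
assert hamming74_decode(cw) == [1, 0, 1, 1]
```

Applying such a code only to the small encrypted portion keeps the added redundancy (3 parity bits per 4 data bits) confined to a fraction of the frame, which is the mechanism's stated goal.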
Institute of Scientific and Technical Information of China (English)
郝万明; 杨守义
2014-01-01
For orthogonal frequency division multiple access (OFDMA)-based cognitive radio systems, the transmission rate of every cognitive user must in practice be an integer, yet previous rate-rounding algorithms considered only a single cognitive user. To address this situation, a new rate-rounding algorithm is proposed in this paper as a modification of the previous algorithm. Each subcarrier rate is adjusted at most once, which ensures fairness between cognitive users during rate rounding, and the total bit rate is also improved. Simulation results show that the proposed algorithm effectively improves the fairness among cognitive users.
Pauli Exchange Errors in Quantum Computation
Ruskai, M B
2000-01-01
We argue that a physically reasonable model of fault-tolerant computation requires the ability to correct a type of two-qubit error which we call Pauli exchange errors as well as one qubit errors. We give an explicit 9-qubit code which can handle both Pauli exchange errors and all one-bit errors.
Vieira, Daniel; Krems, Roman V.
2017-02-01
We present an approach using a combination of coupled channel scattering calculations with a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate constants for non-adiabatic transitions in inelastic atomic collisions to variations of the underlying adiabatic interaction potentials. Using this approach, we improve the previous computations of the rate constants for the fine-structure transitions in collisions of O(^3P_j) with atomic H. We compute the error bars of the rate constants corresponding to 20% variations of the ab initio potentials and show that this method can be used to determine which of the individual adiabatic potentials are more or less important for the outcome of different fine-structure changing collisions.
Experimental unconditionally secure bit commitment
Liu, Yang; Cao, Yuan; Curty, Marcos; Liao, Sheng-Kai; Wang, Jian; Cui, Ke; Li, Yu-Huai; Lin, Ze-Hong; Sun, Qi-Chao; Li, Dong-Dong; Zhang, Hong-Fei; Zhao, Yong; Chen, Teng-Yun; Peng, Cheng-Zhi; Zhang, Qiang; Cabello, Adan; Pan, Jian-Wei
2014-03-01
Quantum physics allows unconditionally secure communication between parties that trust each other. However, when they do not trust each other, as in bit commitment, quantum physics alone is not enough to guarantee security. Only when combined with relativistic causality constraints does unconditionally secure bit commitment become feasible. Here we experimentally implement a quantum bit commitment with relativistic constraints that offers unconditional security. The commitment is made through quantum measurements in two quantum key distribution systems whose results are transmitted via free-space optical communication to two agents separated by more than 20 km. Bits are successfully committed with a cheating probability of less than 5.68×10^-2. This provides an experimental proof of unconditionally secure bit commitment and demonstrates the feasibility of relativistic quantum communication.
Experimental study of rock-breaking with an offset single cone bit
Institute of Scientific and Technical Information of China (English)
Chen Yinghua
2008-01-01
An experimental study of rock-breaking with an offset single cone bit was completed on the bit bench test equipment. Data such as transmission ratio, weight on bit (WOB), rate of penetration (ROP) and torque on bit were acquired in the experiments. Based on analysis of the experimental results, several conclusions were drawn as follows. The transmission ratio of the offset single-cone bit changed slightly with bit rotary speed, weight on bit and offset distance. The rate of penetration of the offset single-cone bit increased with increase of WOB and offset distance. The torque on bit increased with increase of offset distance under the same WOB and bit rotary speed, and decreased with increase of bit rotary speed under the same WOB. The rock-breaking mechanism of the offset single-cone bit was a scraping action. This indicates that the offset single-cone bit is a chipping-type bit.
Analysis of bit-rock interaction during stick-slip vibrations using PDC cutting force model
Energy Technology Data Exchange (ETDEWEB)
Patil, P.A.; Teodoriu, C. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE
2013-08-01
Drillstring vibration is one of the factors limiting drilling performance and also causes premature failure of drillstring components. The polycrystalline diamond compact (PDC) bit enhances overall drilling performance, giving the best rates of penetration at less cost per foot, but PDC bits are more susceptible to the stick-slip phenomenon, which results in large fluctuations of bit rotational speed. Based on a torsional drillstring model developed in Matlab/Simulink for analyzing the parametric influence of drilling parameters and drillstring properties on stick-slip vibrations, the relations between weight on bit, torque on bit, bit speed, rate of penetration and friction coefficient have been analyzed. While drilling with PDC bits, the bit-rock interaction is characterized by cutting forces and frictional forces; the torque on bit and the weight on bit each have a cutting component and a frictional component when resolved in the horizontal and vertical directions. The paper considers a bit undergoing stick-slip vibrations while analyzing the bit-rock interaction of the PDC bit. A Matlab/Simulink bit-rock interaction model has been developed which gives the average cutting torque, T_c, and friction torque, T_f, on the cutters, as well as the corresponding average weight transferred by the cutting face, W_c, and the wear flat face, W_f, of the cutters due to friction.
A NEW LABELING SEARCH ALGORITHM FOR BIT-INTERLEAVED CODED MODULATION WITH ITERATIVE DECODING
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
Bit-Interleaved Coded Modulation with Iterative Decoding (BICM-ID) is a bandwidth-efficient transmission scheme in which the bit error rate is reduced through iterative information exchange between the inner demapper and the outer decoder. The choice of the symbol mapping is the crucial design parameter. This paper indicates that the Harmonic Mean of the Minimum Squared Euclidean (HMMSE) distance is the best criterion for the mapping design. Based on the HMMSE distance criterion, a new search algorithm to find optimized labeling maps for the BICM-ID system is proposed. Numerical results and performance comparisons show that the new labeling search method has low complexity and outperforms labeling schemes based on other design criteria in BICM-ID systems; it is therefore an optimized labeling method.
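The HMMSE criterion can be sketched for QPSK as follows. This is a standard reconstruction of the ideal-feedback form of the criterion (the harmonic mean of the squared Euclidean distances between each symbol and the symbol whose label differs in exactly one bit), not necessarily the paper's exact formula.

```python
# Harmonic mean of squared distances to one-bit-different symbols
# (ideal-feedback HMMSE labeling criterion for BICM-ID; illustrative).

def hmmse(labels, symbols):
    by_label = dict(zip(labels, symbols))
    m = len(labels[0])                  # bits per symbol
    total, count = 0.0, 0
    for lab, sym in by_label.items():
        for i in range(m):
            flipped = lab[:i] + ('1' if lab[i] == '0' else '0') + lab[i + 1:]
            total += 1.0 / abs(sym - by_label[flipped]) ** 2
            count += 1
    return count / total                # harmonic mean

# QPSK constellation with Gray labeling vs. a set-partition-style labeling
syms = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
gray = ['00', '01', '11', '10']
sp = ['00', '11', '01', '10']
print(hmmse(gray, syms), hmmse(sp, syms))   # 4.0 and ~5.33
```

The labeling with the larger HMMSE distance is the better choice under iterative decoding, which is why non-Gray labelings tend to win in BICM-ID searches.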
On the feedback error compensation for adaptive modulation and coding scheme
Choi, Seyeong
2011-11-25
In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
Test results judgment method based on BIT faults
Institute of Scientific and Technical Information of China (English)
Wang Gang; Qiu Jing; Liu Guanjun; Lyu Kehong
2015-01-01
Built-in test (BIT) is responsible for equipment fault detection, so the correctness of test data directly influences diagnosis results. Equipment suffers all kinds of environmental stresses, such as temperature, vibration, and electromagnetic stress. As an embedded testing facility, BIT also suffers from these stresses, and the resulting interferences/faults influence the test process and yield unreliable results. It is therefore necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would improve BIT reliability, but existing anti-jamming research mainly concerns safeguard design and signal processing. This paper focuses on monitoring test results and judging BIT equipment (BITE) failures, and a series of improved approaches is proposed. First, the stress influences on components are illustrated and their effects on diagnosis results are summarized. Second, a composite BIT program with information integration is proposed, and a stress monitoring program is given. Third, based on a detailed analysis of system faults and the forms of BIT results, a test sequence control method is proposed; it assists BITE failure judgment and reduces error probability. Finally, validation cases prove that these approaches enhance credibility.
Analysis of error performance on Turbo coded FDPIM
Institute of Scientific and Technical Information of China (English)
ZHU Yin-bing; WANG Hong-Xing; ZHANG Tie-Ying
2008-01-01
Due to the variable symbol length of digital pulse interval modulation (DPIM), it is difficult to analyze the error performance of Turbo coded DPIM. To solve this problem, a fixed-length digital pulse interval modulation (FDPIM) method is provided. The FDPIM modulation structure is introduced. The packet error rates of uncoded FDPIM are analyzed and compared with those of DPIM. Bit error rates of Turbo coded FDPIM are simulated based on three kinds of analytical models under a weak-turbulence channel. The results show that the packet error rate of uncoded FDPIM is inferior to that of uncoded DPIM. However, FDPIM is easy to implement and, because of its fixed length, easy to combine with Turbo codes for soft decision. Moreover, the introduction of Turbo codes in this modulation can decrease the required average power by about 10 dB, which means that it can improve the error performance of the system effectively.
Reinforcement Learning in BitTorrent Systems
Izhak-Ratzin, Rafit; van der Schaar, Mihaela
2010-01-01
Recent research efforts have shown that the popular BitTorrent protocol does not provide fair resource reciprocation and may allow free-riding. In this paper, we propose a BitTorrent-like protocol that replaces the peer selection mechanisms in the regular BitTorrent protocol with a novel reinforcement learning (RL) based mechanism. Due to the inherent operation of P2P systems, which involves repeated interactions among peers over a long period of time, peers can efficiently identify free-riders as well as desirable collaborators by learning the behavior of their associated peers. This helps peers improve their download rates and discourages free-riding, while improving fairness in the system. We model the peers' interactions in the BitTorrent-like network as a repeated interaction game, where we explicitly consider the strategic behavior of the peers. A peer that applies the RL-based mechanism uses a partial history of the observations of associated peers' statistical reciprocal behaviors to deter...
Bit Preservation: A Solved Problem?
Directory of Open Access Journals (Sweden)
David S. H. Rosenthal
2010-07-01
Full Text Available For years, discussions of digital preservation have routinely featured comments such as “bit preservation is a solved problem; the real issues are ...”. Indeed, current digital storage technologies are not just astoundingly cheap and capacious, they are astonishingly reliable. Unfortunately, these attributes drive a kind of “Parkinson’s Law” of storage, in which demands continually push beyond the capabilities of systems implementable at an affordable price. This paper is in four parts: Claims, reviewing a typical claim of storage system reliability and showing that it provides no useful information for bit preservation purposes; Theory, proposing “bit half-life” as an initial, if inadequate, measure of bit preservation performance, expressing bit preservation requirements in terms of it, and showing that the requirements being placed on bit preservation systems are so onerous that the experiments required to prove that a solution exists are not feasible; Practice, reviewing recent research into how well actual storage systems preserve bits, showing that they fail to meet the requirements by many orders of magnitude; and Policy, suggesting ways of dealing with this unfortunate situation.
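The scale of the "bit half-life" requirement in the Theory part can be illustrated with back-of-envelope arithmetic; the petabyte/century numbers below are illustrative assumptions, not figures taken from the paper.

```python
import math

# If each bit independently survives time t with probability (1/2)**(t / H),
# where H is the bit half-life, then keeping all N bits intact for t years
# with probability p requires 2**(-N * t / H) >= p, i.e. H >= N * t / log2(1/p).
N = 8e15      # bits in one petabyte (assumed example archive)
t = 100       # years (assumed example horizon)
p = 0.5       # acceptable probability the archive survives untouched
H = N * t / math.log2(1 / p)
print(f"required bit half-life: {H:.1e} years")  # 8.0e+17 years
```

Even this modest target is many orders of magnitude beyond any feasible direct measurement, which is the paper's point about the infeasibility of proving a solution exists.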
Carroll, KJ; Mielke, J; Benet, LZ; Jones, B
2016-01-01
We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)‐approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between‐batch bio‐inequivalence. Here, we provide independent confirmation of pharmacokinetic bio‐inequivalence among Advair Diskus 100/50 batches, and quantify residual and between‐batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two‐way crossover design recommendation. When between‐batch pharmacokinetic variability is substantial, the conventional two‐way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two‐way crossover, which ignores between‐batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between‐batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). PMID:27727445
Directory of Open Access Journals (Sweden)
Demirhan Erdal
2015-01-01
Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
Directory of Open Access Journals (Sweden)
Johanna I Westbrook
2012-01-01
Full Text Available BACKGROUND: Considerable investments are being made in commercial electronic prescribing systems (e-prescribing) in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error. METHODS AND RESULTS: We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale; those ≥3 were categorised as serious), by hospital and study period; and rates and categories of postintervention "system-related" errors (where system functionality or design contributed to the error) were calculated. Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (respectively reductions of 66.1% [95% CI 53.9%-78.3%]; 57.5% [33.8%-81.2%]; and 60.5% [48.5%-72.4%]). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23-7.28) to 2.12 (95% CI 1.71-2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30-3.93) to 1.46 (95% CI 1.20-1.73; p<0.0001). This
Error Locked Encoder and Decoder for Nanomemory Application
Directory of Open Access Journals (Sweden)
Y. Sharath
2014-03-01
Full Text Available Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rate in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to design fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^-18 upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^11 bit/cm^2 with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.
Assessment of error rates in acoustic monitoring with the R package monitoR
Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese
2016-01-01
Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g., a signal-to-noise ratio of at least 10 dB): 166 out of 439 total songs for black-throated green warbler and 502 out of 990 total songs for ovenbird. monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cutoffs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and in 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were for song event detection.
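The score-cutoff mechanics can be sketched on a 1-D toy trace; monitoR itself matches spectrogram templates, and the signal, template, and cutoff below are invented for illustration only.

```python
import numpy as np

# Embed two known "songs" into a noisy toy survey, then detect them by
# thresholding a normalized cross-correlation score (the 'score cutoff').
rng = np.random.default_rng(0)
template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # stand-in for a song template
survey = rng.normal(0, 0.2, 200)                 # noisy "survey recording"
true_onsets = [40, 120]
for t in true_onsets:
    survey[t:t + len(template)] += template

def scores(survey, template):
    """Normalized cross-correlation score at every alignment."""
    t = (template - template.mean()) / template.std()
    out = np.empty(len(survey) - len(t) + 1)
    for i in range(len(out)):
        w = survey[i:i + len(t)]
        out[i] = np.dot((w - w.mean()) / (w.std() + 1e-12), t) / len(t)
    return out

cutoff = 0.9   # minimum match needed to count as a detection
detections = np.flatnonzero(scores(survey, template) >= cutoff)
print(detections)
```

Raising the cutoff trades false positives for false negatives, which is exactly the TP/FP/TN/FN accounting the survey comparisons above rest on.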
Simulation of DA DCT Using ECAT for Reducing the Truncation Errors
Directory of Open Access Journals (Sweden)
K. V. S. P. Pravallika
2013-03-01
Full Text Available The discrete cosine transform (DCT) is widely used in image and video compression applications. This paper mainly deals with the implementation of an image compression application based on distributed arithmetic (DA) DCT using an error-compensated adder tree (ECAT), and with simulating it to achieve a low error rate. The DA-based ECAT performs shifting and addition in parallel instead of using multipliers, which reduces complexity. The proposed architecture deals with 9-bit input and 12-bit output and meets the peak signal-to-noise ratio (PSNR) requirements. Advantages of ECAT-based DA-DCT are a low error rate and improved speed in image and video compression applications. The project is implemented in the Verilog HDL language and simulated in ModelSim XE III 6.4b. Synthesis is done using Xilinx ISE 10.1. The results obtained were evaluated with the help of MATLAB.
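The figures of merit used to judge the compressed output follow their standard definitions; here is a minimal sketch with made-up pixel values (not the paper's test images).

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

orig = np.array([[52, 55], [61, 59]], dtype=np.uint8)    # toy "original"
recon = np.array([[52, 54], [60, 59]], dtype=np.uint8)   # toy "reconstruction"
print(mse(orig, recon), psnr(orig, recon))  # 0.5 and ~51.14 dB
```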
Bits and q-bits as versatility measures
Directory of Open Access Journals (Sweden)
José R.C. Piqueira
2004-06-01
Full Text Available Using Shannon information theory is a common strategy to measure any kind of variability in a signal or phenomenon. Some methods were developed to adapt information entropy measures to bird song data, trying to emphasize its versatility aspect. This classical approach, using the concept of the bit, produces interesting results. The original idea developed in this paper is to use quantum information theory and the quantum bit (q-bit) concept in order to provide a more complete vision of the experimental results.
Hash Based Least Significant Bit Technique For Video Steganography
Directory of Open Access Journals (Sweden)
Prof. Dr. P. R. Deshmukh ,
2014-01-01
Full Text Available The hash-based least significant bit technique for video steganography deals with hiding a secret message or information within a video. Steganography is covered writing: it includes processes that conceal information within other data and also conceal the fact that a secret message is being sent. Steganography is the art of secret communication, or the science of invisible communication. In this paper a hash-based least significant bit technique for video steganography is proposed, whose main goal is to embed secret information in a particular video file and then extract it using a stego key or password. Least Significant Bit (LSB) insertion is used, embedding data in the cover video by changing the lowest-order bit; this change is not visible. Data hiding is the process of embedding information in a video without changing its perceptual quality. The proposed method involves two measures, Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE), computed between the original and steganographic video files over all video frames, with distortion measured using PSNR. A hash function is used to select the positions at which the bits of the secret message are inserted into the LSB bits.
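The LSB insertion step can be sketched as follows; this is a deliberately simplified illustration that embeds one message bit per cover byte (the paper's hash-based position selection and video-frame handling are omitted):

```python
def embed_lsb(cover, bits):
    """Embed message bits into the least significant bits of cover bytes."""
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the message bit
    return bytes(out)

def extract_lsb(stego, n):
    """Recover n message bits from the stego bytes."""
    return [b & 1 for b in stego[:n]]
```

Because each cover byte changes by at most 1, the distortion per sample is minimal, which is why LSB embedding is typically imperceptible and scores high PSNR.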
Error performance analysis of coherent DCSK chaotic digital communication systems
Institute of Scientific and Technical Information of China (English)
李玉霞; 吴百海; 王光义; 丘水生
2004-01-01
An approach to calculating approximate bit error rates for coherent DCSK in an AWGN channel is presented. Using the logistic map as the chaos generator and assuming ideal synchronization at the receiver, the BER of coherent DCSK in an AWGN channel is derived. The theoretical noise performance of coherent DCSK is also derived and shown to equal that of coherent FSK. The computed theoretical BERs agree well with simulation results.
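The agreement between analytical and simulated BERs can be checked with a Monte Carlo sketch along these lines; this is a simplified coherent correlation receiver on a chaotic carrier, and the chaotic map, spreading factor, and Eb/N0 normalization are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic_chips(n, x0=0.3):
    """Chaotic chip sequence from the quadratic map x -> 1 - 2x^2 on [-1, 1]."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = 1.0 - 2.0 * x * x
        out[i] = x
    return out

def coherent_chaos_ber(n_bits=2000, m=16, ebn0_db=0.0):
    """Monte Carlo BER of antipodal signaling on a chaotic carrier in AWGN,
    with an ideally synchronized replica of the chips at the receiver."""
    bits = rng.integers(0, 2, n_bits)
    chips = logistic_chips(n_bits * m).reshape(n_bits, m)
    chips /= np.sqrt(np.mean(chips ** 2))              # unit mean chip energy, so Eb ~ m
    tx = (2.0 * bits[:, None] - 1.0) * chips           # antipodal modulation of the chaos carrier
    sigma = np.sqrt(m / (2.0 * 10 ** (ebn0_db / 10)))  # per-chip noise std for the given Eb/N0
    rx = tx + rng.normal(0.0, sigma, tx.shape)
    corr = np.sum(rx * chips, axis=1)                  # correlate against the known chip replica
    return np.mean((corr > 0).astype(int) != bits)
```

At high Eb/N0 the simulated error rate collapses toward zero, while at low Eb/N0 it sits near the coherent antipodal benchmark, slightly degraded by the bit-to-bit energy variation of the chaotic carrier.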
The Error-Pattern-Correcting Turbo Equalizer
Alhussien, Hakim
2010-01-01
The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor than the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates.
Silicon chip based wavelength conversion of ultra-high repetition rate data signals
DEFF Research Database (Denmark)
Hu, Hao; Ji, Hua; Galili, Michael
2011-01-01
We report on all-optical wavelength conversion of 160, 320 and 640 Gbit/s line-rate data signals using four-wave mixing in a 3.6 mm long silicon waveguide. Bit error rate measurements validate the performance within FEC limits.
Eswaran, Krishnan; Ramchandran, Kannan
2008-01-01
A fundamental problem in dynamic frequency reuse is that the cognitive radio is ignorant of the amount of interference it inflicts on the primary license holder. A model for such a situation is proposed and analyzed. The primary sends packets across an erasure channel and employs simple ACK/NAK feedback (ARQs) to retransmit erased packets. Furthermore, its erasure probabilities are influenced by the cognitive radio's activity. While the cognitive radio does not know these interference characteristics, it can eavesdrop on the primary's ARQs. The model leads to strategies in which the cognitive radio adaptively adjusts its input based on the primary's ARQs, thereby guaranteeing that the primary exceeds a target packet rate. A relatively simple strategy whereby the cognitive radio transmits only when the primary's empirical packet rate exceeds a threshold is shown to have interesting universal properties, in the sense that for unknown time-varying interference characteristics, the primary is guaranteed to meet its target packet rate.
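The threshold strategy described in this abstract can be sketched as a toy simulation; the erasure probabilities, window length, and threshold below are invented for illustration and are not parameters from the paper:

```python
import random

random.seed(0)

def cognitive_link(n_slots=10000, target=0.7, window=200,
                   p_erase_quiet=0.1, p_erase_interfered=0.5):
    """Secondary (cognitive) user transmits only when the primary's empirical
    packet success rate over a recent window exceeds the target threshold."""
    acks = []        # 1 = primary packet delivered, 0 = erased (observed via ARQ feedback)
    on_slots = 0
    for _ in range(n_slots):
        recent = acks[-window:]
        transmit = len(acks) >= window and sum(recent) / len(recent) > target
        # Secondary activity raises the primary's erasure probability.
        p_erase = p_erase_interfered if transmit else p_erase_quiet
        acks.append(0 if random.random() < p_erase else 1)
        on_slots += transmit
    return sum(acks) / n_slots, on_slots / n_slots
```

In this toy setting the feedback loop settles near the threshold: the secondary grabs airtime until the primary's windowed success rate dips toward the target, then backs off, so the primary's long-run packet rate hovers at or above the target.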
AN ERROR-RESILIENT H.263+ CODING SCHEME FOR VIDEO TRANSMISSION OVER WIRELESS NETWORKS
Institute of Scientific and Technical Information of China (English)
Li Jian; Bie Hongxia
2006-01-01
Video transmission over wireless networks has received much attention recently because of restricted bandwidth and high bit-error rates. Based on H.263+, an error-resilient scheme that reverses part of the stream sequence of each Group Of Blocks (GOB) is presented to improve video robustness without additional bandwidth burden. Error patterns are employed to simulate Wideband Code Division Multiple Access (WCDMA) channels and evaluate error-resilience performance. Simulation results show that both subjective and objective quality of the reconstructed images improves remarkably: the mean Peak Signal to Noise Ratio (PSNR) increases by 0.5 dB, and the highest increase is 2 dB.
Mukherjee, Pritam; Ulukus, Sennur
2016-01-01
We consider covert communication using a queuing timing channel in the presence of a warden. The covert message is encoded using the inter-arrival times of the packets, and the legitimate receiver and the warden observe the inter-departure times of the packets from their respective queues. The transmitter and the legitimate receiver also share a secret key to facilitate covert communication. We propose achievable schemes that attain a non-zero covert rate for both exponential and general queues...
Borot de Battisti, M; Denis de Senneville, B; Maenhout, M; Hautvast, G; Binnekamp, D; Lagendijk, J J W; van Vulpen, M; Moerland, M A
2016-03-01
The development of magnetic resonance (MR) guided high dose rate (HDR) brachytherapy for prostate cancer has gained increasing interest for delivering a high tumor dose safely in a single fraction. To support needle placement in the limited workspace inside the closed-bore MRI, a single-needle MR-compatible robot is currently under development at the University Medical Center Utrecht (UMCU). This robotic device taps the needle in a divergent way from a single rotation point into the prostate. With this setup, the irradiation dose is delivered by successive insertions of the needle. Although robot-assisted needle placement is expected to be more accurate than manual template-guided insertion, needle positioning errors may occur and are likely to modify the pre-planned dose distribution. In this paper, we propose a dose plan adaptation strategy for HDR prostate brachytherapy with feedback on the needle position: a dose plan is made at the beginning of the interventional procedure and updated after each needle insertion in order to compensate for possible needle positioning errors. The introduced procedure can be used with the single-needle MR-compatible robot developed at the UMCU. The proposed feedback strategy was tested by simulating complete HDR procedures with and without feedback on eight patients with different numbers of needle insertions (varying from 4 to 12). In the cases tested, the number of clinically acceptable plans obtained at the end of the procedure was larger with feedback than without. Furthermore, the computation time of the feedback between each insertion was below 100 s, which makes it eligible for intra-operative use.
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance.
Chuanshi Brand Tri-cone Roller Bit
Institute of Scientific and Technical Information of China (English)
Chen Xilong; Shen Zhenzhong; Yuan Xiaoyi
1997-01-01
Compared with other types of bits, the tri-cone roller bit has the advantages of excellent overall performance, low price, and a wide usage range, and it is free of formation limits. The tri-cone roller bit accounts for 90% of the total bits in use. The Chengdu Mechanical Works, a major manufacturer of petroleum machinery products and one of the four major tri-cone roller bit factories in China, has produced 120 types of bits in seven series and 19 sizes since 1967. The bits manufactured by the factory are not only sold to domestic oilfields but also exported to Japan, Thailand, Indonesia, the Philippines and the Middle East.
A 1.5-bit/stage Pipelined Analog-to-Digital Converter Design with Independence of Capacitor Mismatch
Institute of Scientific and Technical Information of China (English)
LI Dan; RONG Men-tian; MAO Jun-fa
2007-01-01
A new technique named the charge temporary storage technique (CTST) was presented to improve the linearity of a 1.5-bit/stage pipelined analog-to-digital converter (ADC). The residual voltage was obtained from the sampling capacitor, while the other capacitor served only as temporary charge storage. The nonlinearity produced by the mismatch of these capacitors was thus eliminated without adding extra capacitor error-averaging amplifiers. The simulation results confirmed the high linearity and low dissipation of pipelined ADCs implemented with CTST, making it a new method for implementing high-resolution, small-size ADCs.
The Application Wavelet Transform Algorithm in Testing ADC Effective Number of Bits
Directory of Open Access Journals (Sweden)
Emad A. Awada
2013-10-01
Full Text Available In evaluating analog-to-digital converters, many parameters are checked for performance and error rate. One of these parameters is the device's Effective Number of Bits (ENOB). Classical ENOB testing is based on the signal-to-noise ratio (SNR), whose coefficients are derived via the frequency domain (Fourier transform) of the ADC's output signal. Such a technique is extremely sensitive to noise and requires a large number of data samples; that is, the testing process becomes longer and more complex as the device under test increases in resolution. Meanwhile, a new time-frequency domain approach (known as the wavelet transform) is proposed to measure and analyze the ADC's Effective Number of Bits with less complexity and fewer data samples. In this work, the wavelet transform algorithm was used to estimate the worst-case Effective Number of Bits and compare the new testing results with classical testing methods. The wavelet transform has shown improvement in the DSP testing process in terms of time and computational complexity, owing to its special multi-resolution properties.
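The classical figure referred to above reduces, for a full-scale sine-wave test, to the standard relation ENOB = (SINAD − 1.76 dB)/6.02; a minimal sketch:

```python
def enob(sinad_db):
    """Effective number of bits from measured SINAD in dB (ideal-ADC relation)."""
    return (sinad_db - 1.76) / 6.02

def ideal_sinad(n_bits):
    """SINAD in dB of an ideal n-bit ADC driven by a full-scale sine wave."""
    return 6.02 * n_bits + 1.76
```

For example, an ideal 12-bit converter corresponds to a SINAD of about 74 dB, and any measured shortfall from that figure shows up directly as lost effective bits.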
Müller, Amanda
2015-01-01
This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206, 96 and 35 errors per 1000 words, respectively. The following section…
Gray Codes with Equal Bit-Error Probabilities.
1981-08-01
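As background to this entry, a Gray code orders the 2^n binary words so that adjacent words differ in exactly one bit. The standard binary-reflected construction can be sketched as follows (generic background only, not the report's equal-error-probability variant):

```python
def to_gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by prefix-XOR over the shifted word."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

In the binary-reflected code, different bit positions toggle at very different rates as the codewords are traversed, which is what makes codes with equalized per-bit error behavior an interesting design target.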
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-02-10
Population divergence impacts the degree of population stratification in genome-wide association studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations, but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
Zhu, Jin; Wang, Dayan; Xie, Wanqing
2015-02-20
Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems, which will decrease system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, the diversified wavefront deformation can be described by wavelet parameters: coefficient, dilation, and shift factors, where the coefficient factor represents the depth, dilation factor represents the area, and shift factor is for location. Based on this, the relationship between PER and wavelet parameters is analyzed from a theoretical viewpoint. Numerical results illustrate the validity of theoretical analysis: PER increases with the depth and area and decreases if location gets farther from the center of the optical antenna. In addition to describing diversified deformation, the advantage of the wavelet method over Zernike polynomials in computational complexity is shown via numerical example. This work provides a feasible method for the description along with influence analysis of diversified wavefront deformation from a practical viewpoint and will be helpful for designing optical systems.
Institute of Scientific and Technical Information of China (English)
Matjaz Merc; Igor Drstvensek; Matjaz Vogrin; Tomaz Brajlih; Tomaz Friedrich; Gregor Recnik
2014-01-01
Objective: Free-hand pedicle screw placement has a high incidence of pedicle perforation, which can be reduced with fluoroscopy, navigation, or an alternative rapid-prototyping drill guide template. In our study the error rate of multi-level templates for pedicle screw placement in the lumbar and sacral regions was evaluated. Methods: A case series study was performed on 11 patients. Seventy-two screws were implanted using multi-level drill guide templates manufactured with selective laser sintering. According to the optimal screw direction defined preoperatively, an analysis of screw misplacement was performed. Displacement, deviation, and screw length difference were measured. The learning curve was also estimated. Results: Twelve screws (17%) were placed more than 3.125 mm away from their optimal position in the centre of the pedicle. The tips of 16 screws (22%) were misplaced more than 6.25 mm from the predicted optimal position. According to our predefined goal, 19 screws (26%) were implanted inaccurately. In 10 cases the screw length was selected incorrectly: 1 screw (1%) was too long and 9 (13%) were too short. No clinical signs of neurovascular lesion were observed. The learning curve was not statistically significant (P = 0.129). Conclusion: In our study, the procedure of manufacturing and applying multi-level drill guide templates had a 26% rate of screw misplacement. However, that rate does not coincide with the incidence of pedicle perforation or neurovascular injury. These facts, together with a comparison to compatible studies, suggest that multi-level templates are satisfactorily accurate and allow precise screw placement with a clinically irrelevant error factor. Templates could therefore represent a useful tool for routine pedicle screw placement.
New low bit rate speech coding scheme based on compressed sensing
Institute of Scientific and Technical Information of China (English)
叶蕾; 杨震; 孙林慧
2011-01-01
Utilizing the sparsity of the high-frequency wavelet coefficients of speech signals and the theory of compressed sensing, a new low-bit-rate speech coding scheme based on compressed sensing is proposed. The compressed-sensing reconstruction of the high-frequency wavelet coefficients is achieved by l1-norm optimization and by codebook-prediction reconstruction, respectively. The l1 reconstruction works well for large-amplitude samples and suits not only speech but also music signals, an advantage that traditional linear predictive coding cannot match. Codebook-prediction reconstruction estimates the locations of the sparse coefficients well and avoids the basis pursuit or matching pursuit algorithms commonly used in compressed-sensing reconstruction, thereby reducing computation. Combining the two methods exploits the advantages of both and further improves the quality of the reconstructed speech.
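The l1-based recovery route mentioned above can be illustrated with a simple iterative soft-thresholding (ISTA) loop; this is a generic compressed-sensing sketch with a random Gaussian measurement matrix and a synthetic sparse vector, not the authors' speech codec:

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, with L the gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Synthetic example: recover a k-sparse vector from m < n random measurements.
n, m, k = 128, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0, (m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true)
```

The same l1 principle underlies basis pursuit; the abstract's codebook-prediction path is precisely an attempt to sidestep this kind of iterative solver for the coefficient-location estimate.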
Parity Bit Replenishment for JPEG 2000-Based Video Streaming
Directory of Open Access Journals (Sweden)
François-Olivier Devaux
2009-01-01
Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information over conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors rather than to encode deterministic prediction errors. This enables the codec to tolerate some desynchronization between the encoder and the decoder, which is particularly helpful for adapting pre-encoded content on the fly to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance with the JPEG 2000 wavelet representation, particular attention has been devoted to the definition of a practical coding framework able to exploit not only temporal but also spatial correlation among wavelet subband coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that, compared to pure INTRA-based conditional replenishment solutions, the addition of the parity-bit option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.
Classification system adopted for fixed cutter bits
Energy Technology Data Exchange (ETDEWEB)
Winters, W.J.; Doiron, H.H.
1988-01-01
The drilling industry has begun adopting the 1987 International Association of Drilling Contractors' (IADC) method for classifying fixed cutter drill bits. By studying the classification codes on bit records and properly applying the new IADC fixed cutter dull grading system to recently run bits, the end-user should be able to improve the selection and usage of fixed cutter bits. Several users are developing databases for fixed cutter bits in an effort to relate field performance to some of the more prominent bit design characteristics.
Multi-bit quantum random number generation by measuring positions of arrival photons
Yan, Qiurong; Zhao, Baosheng; Liao, Qinghong; Zhou, Nanrun
2014-10-01
We report upon the realization of a novel multi-bit optical quantum random number generator that continuously measures the arrival positions of photons emitted from an LED using an MCP-based WSA photon-counting imaging detector. A spatial encoding method is proposed to extract multiple random bits from the position coordinates of each detected photon. The randomness of the bit sequence relies on the intrinsic randomness of the quantum physical processes of photon emission and subsequent photoelectric conversion. A prototype has been built; its random bit generation rate reaches 8 Mbit/s, with a generation efficiency of 16 bits per detected photon. An FPGA implementation of Huffman coding is proposed to reduce the bias of the raw extracted bits. The random numbers passed all tests for physical random number generators.
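The spatial encoding step can be illustrated simply: on a 256 x 256 resolvable detector grid (an assumed grid size for illustration), each photon's (x, y) arrival coordinates contribute 8 + 8 = 16 raw bits, consistent with the 16 bits per detected photon quoted above:

```python
def position_to_bits(x, y, bits_per_axis=8):
    """Concatenate the binary expansions of the two arrival coordinates
    into one raw random word, most significant bit first."""
    mask = (1 << bits_per_axis) - 1
    word = ((x & mask) << bits_per_axis) | (y & mask)
    return [(word >> i) & 1 for i in reversed(range(2 * bits_per_axis))]
```

Raw bits extracted this way inherit any spatial non-uniformity of the source, which is why a debiasing stage, Huffman coding in the paper's proposal, is applied afterwards.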
Burgess, Ralph; Yang, Ziheng
2008-09-01
Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species.
Flexible Bit Preservation on a National Basis
DEFF Research Database (Denmark)
Jurik, Bolette; Nielsen, Anders Bo; Zierau, Eld
2012-01-01
In this paper we present the results from the Danish National Bit Repository project. The project aim was the establishment of a system that can offer flexible and sustainable bit preservation solutions to Danish cultural heritage institutions. Here the bit preservation solutions must include support of bit safety as well as other requirements such as confidentiality and availability. The Danish National Bit Repository is motivated by the need to investigate and handle bit preservation for digital cultural heritage. Digital preservation relies on the integrity of the bits which digital material consists of, and it is with this focus that the project was initiated. This paper summarizes the requirements for a general system to offer bit preservation to cultural heritage institutions, and on this basis describes the resulting flexible system which can support such requirements.
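Bit safety of the kind discussed here is typically verified by comparing fixity checksums across independent replicas; a minimal sketch (the three-replica layout is hypothetical, SHA-256 comes from the standard library):

```python
import hashlib

def fixity_report(replicas):
    """Compare SHA-256 digests of replica byte streams and flag disagreements
    against the majority digest."""
    digests = {name: hashlib.sha256(data).hexdigest() for name, data in replicas.items()}
    values = list(digests.values())
    reference = max(set(values), key=values.count)  # majority-vote reference digest
    return {name: d == reference for name, d in digests.items()}
```

A repository would run such checks periodically and repair any replica whose digest disagrees with the majority, which is one concrete way "bit safety" becomes an operational requirement.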
Errors of measurement by laser goniometer
Agapov, Mikhail Y.; Bournashev, Milhail N.
2000-11-01
The report is dedicated to research on systematic errors of angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and compensating for them algorithmically. The OE was an absolute photoelectric angle encoder with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by cross-calibration at mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor, followed by Fourier analysis of the observed data. The dynamic errors of angle measurement were investigated using the dependence on angular rotation rate of the measured angle between a reference direction, assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP), and the direction defined by means of the OE. The results obtained allow algorithmic compensation of the systematic error and, in total, a considerable reduction of the overall measurement error.
24-Hour Relativistic Bit Commitment
Verbanis, Ephanielle; Martin, Anthony; Houlmann, Raphaël; Boso, Gianluca; Bussières, Félix; Zbinden, Hugo
2016-09-01
Bit commitment is a fundamental cryptographic primitive in which a party wishes to commit a secret bit to another party. Perfect security between mistrustful parties is unfortunately impossible to achieve through the asynchronous exchange of classical and quantum messages. Perfect security can nonetheless be achieved if each party splits into two agents exchanging classical information at times and locations satisfying strict relativistic constraints. A relativistic multiround protocol to achieve this was previously proposed and used to implement a 2-millisecond commitment time. Much longer durations were initially thought to be insecure, but recent theoretical progress showed that this is not so. In this Letter, we report on the implementation of a 24-hour bit commitment solely based on timed high-speed optical communication and fast data processing, with all agents located within the city of Geneva. This duration is more than 6 orders of magnitude longer than before, and we argue that it could be extended to one year and allow much more flexibility on the locations of the agents. Our implementation offers a practical and viable solution for use in applications such as digital signatures, secure voting and honesty-preserving auctions.
A brief review on quantum bit commitment
Almeida, Álvaro J.; Loura, Ricardo; Paunković, Nikola; Silva, Nuno A.; Muga, Nelson J.; Mateus, Paulo; André, Paulo S.; Pinto, Armando N.
2014-08-01
In classical cryptography, the bit commitment scheme is one of the most important primitives. We review the state of the art of bit commitment protocols, emphasizing its main achievements and applications. Next, we present a practical quantum bit commitment scheme, whose security relies on current technological limitations, such as the lack of long-term stable quantum memories. We demonstrate the feasibility of our practical quantum bit commitment protocol and that it can be securely implemented with nowadays technology.
Entanglement-assisted zero-error codes
Matthews, William; Mancinska, Laura; Leung, Debbie; Ozols, Maris; Roy, Aidan
2011-03-01
Zero-error information theory studies the transmission of data over noisy communication channels with strictly zero error probability. For classical channels and data, much of the theory can be studied in terms of combinatorial graph properties and is a source of hard open problems in that domain. In recent work, we investigated how entanglement between sender and receiver can be used in this task. We found that entanglement-assisted zero-error codes (which are still naturally studied in terms of graphs) sometimes offer an increased bit rate of zero-error communication, even in the large block length limit. The assisted codes that we have constructed are closely related to Kochen-Specker proofs of non-contextuality as studied in the context of foundational physics, and our results on asymptotic rates of assisted zero-error communication yield non-contextuality proofs which are particularly 'strong' in a certain quantitative sense. I will also describe formal connections to the multi-prover games known as pseudo-telepathy games.
Modeling for write synchronization in bit patterned media recording
Lin, Maria Yu; Chan, Kheong Sann; Chua, Melissa; Zhang, Songhua; Kui, Cai; Elidrissi, Moulay Rachid
2012-04-01
Bit patterned media recording (BPMR) is a contender for next-generation technology after conventional granular magnetic recording (CGMR) can no longer sustain the continued areal density growth. BPMR has several technological hurdles that need to be overcome, among them the problem of write synchronization. With CGMR, grains are randomly distributed and occur almost all over the media. In contrast, BPMR has grains patterned into a regular lattice on the media with an approximately 50% duty cycle; hence only about a quarter of the area is filled with magnetic material. During writing, the clock must be synchronized to the islands, or the written-in error rate becomes unacceptably large and the system fails. Maintaining synchronization during writing is a challenge, as the system is not able to read and write simultaneously. Hence reading must occur periodically between writes, frequently enough to re-synchronize the writing clock to the islands. In this work, we study the requirements on the lengths of the synchronization and data sectors in a BPMR system using an advanced BPMR model, taking into consideration different spindle motor speed variations, which are the main cause of mis-synchronization.
Development of a jet-assisted polycrystalline diamond drill bit
Energy Technology Data Exchange (ETDEWEB)
Pixton, D.S.; Hall, D.R.; Summers, D.A.; Gertsch, R.E.
1997-12-31
A preliminary investigation has been conducted to evaluate the technical feasibility and potential economic benefits of a new type of drill bit. This bit transmits both rotary and percussive drilling forces to the rock face, and augments this cutting action with high-pressure mud jets. Both the percussive drilling forces and the mud jets are generated down-hole by a mud-actuated hammer. Initial laboratory studies show that increases in rate of penetration on the order of a factor of two over unaugmented rotary and/or percussive drilling rates are possible with jet assistance.
Perceptual importance analysis for H.264/AVC bit allocation
Institute of Scientific and Technical Information of China (English)
None
2008-01-01
Existing H.264/AVC rate control schemes rarely include perceptual considerations. As a result, the improvements in visual quality are hardly comparable to those in peak signal-to-noise ratio (PSNR). In this paper, we propose a perceptual importance analysis scheme to accurately capture the spatial and temporal perceptual characteristics of video content. We then perform bit allocation at the macroblock (MB) level by adopting a perceptual mode decision scheme, which adaptively updates the Lagrangian multiplier for mode decision according to the perceptual importance of each MB. Simulation results show that the proposed scheme can efficiently reduce bit rates without visual quality degradation.
Dispersion Tolerance of 40 Gbaud Multilevel Modulation Formats with up to 3 bits per Symbol
DEFF Research Database (Denmark)
Jensen, Jesper Bevensee; Tokle, Torger; Geng, Yan
2006-01-01
We present numerical and experimental investigations of dispersion tolerance for multilevel phase- and amplitude modulation with up to 3 bits per symbol at a symbol rate of 40 Gbaud.
Energy Technology Data Exchange (ETDEWEB)
Kuehnemann, D.
1990-10-01
The safety of full-face cutting machines has to be assured also at greater depths. In addition, one wants to obtain information about the rock conditions as early as possible in the case of occurring faults. For this reason, new systems were to be developed for the monitoring of full-face cutting machines. Numerous measuring and monitoring systems were used and examined within the framework of the project work. It soon turned out that new systems had to be developed and tested for precise monitoring. Flexible microprocessor systems, sensors and actuators were developed and tested for each specific case of application. After several years of development work a new system emerged, which is continuous and transparent from the drill bit area to the surface. Excavations in the drill bit area are recognized immediately behind the drift face. Position and quantity of the excavations are transmitted to and processed by a central microprocessor via underground ultrasonic sensors. With the aid of specific technologies and newly developed building materials, these excavations are consolidated and/or secured at the drill bit in front of the dust shield. The cutting rolls are monitored above ground, and the position of the full-face cutting machine is determined above ground. Automatic control via a laser sensor system is also carried out from above ground. In addition to control and monitoring, all other relevant parameters are transmitted from the underground machine to the surface, where they are displayed and recorded. From the technical data available in the computer, the data required for optimal heading are determined and transmitted to the control elements of the microprocessors. (orig./HS)
Design and realization of a high-speed 12-bit pipelined analog-digital converter IP block
Toprak, Zeynep
2001-01-01
This thesis presents the design, verification, system integration and the physical realization of a monolithic high-speed analog-digital converter (ADC) with 12-bit accuracy. The architecture of the ADC has been realized as a pipelined structure consisting of four pipeline stages, each of which is capable of processing the incoming analog signal with 4-bit accuracy. A bit-overlapping technique has been employed for digital error correction between the pipeline stages so that the influence of ...
Directory of Open Access Journals (Sweden)
Sharmila Vaz
The Social Skills Rating System (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports, not just student reports), as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
Performance Analysis of MC-CDMA in the Presence of Carriers Phase Errors
Institute of Scientific and Technical Information of China (English)
None
2002-01-01
This paper presents the effect of carrier phase errors on MC-CDMA performance in downlink mobile communications. The Signal-to-Noise Ratio (SNR) and Bit-Error-Rate (BER) are analyzed taking into account the effect of carrier phase errors. It is shown that the MC-CDMA system is very sensitive to a carrier frequency offset: the system performance rapidly degrades and strongly depends on the number of carriers. For a maximal load, the degradation caused by carrier phase jitter is independent of the number of carriers.
Diagnosis of weaknesses in modern error correction codes: a physics approach.
Stepanov, M G; Chernyak, V; Chertkov, M; Vasic, B
2005-11-25
One of the main obstacles to the wider use of modern error-correction codes is that, due to the complex behavior of their decoding algorithms, no systematic method is known that would allow characterization of the bit-error rate (BER). This is especially true in the weak-noise regime, where many systems operate and where coding performance is difficult to estimate because of the vanishingly small number of errors. We show how the instanton method of physics allows one to solve the problem of BER analysis in the weak-noise range by recasting it as a computationally tractable minimization problem.
A Holistic Approach to Bit Preservation
DEFF Research Database (Denmark)
Zierau, Eld Maj-Britt Olmütz
2011-01-01
for confidentiality, availability, costs, additional to the requirements of ensuring bit safety. A few examples are: • The way that digital material is represented in files and structures has an influence on whether it is possible to interpret and use the bits at a later stage. Consequently, the way bits represent… • There will be requirements for the availability of the bit preserved digital material in order to meet requirements on use of the digital material, e.g. libraries often need to give fast access to preserved digital material to the public, i.e. the availability of the bit preserved material must support the use…
An 18-bit high performance audio σ-Δ D/A converter
Hao, Zhang; Xiaowei, Huang; Yan, Han; Cheung, Ray C.; Xiaoxia, Han; Hao, Wang; Guo, Liang
2010-07-01
A multi-bit quantized high performance sigma-delta (σ-Δ) audio DAC is presented. Compared to its single-bit counterpart, the multi-bit quantization offers many advantages, such as simpler σ-Δ modulator circuit, lower clock frequency and smaller spurious tones. With the data weighted average (DWA) mismatch shaping algorithm, element mismatch errors induced by multi-bit quantization can be pushed out of the signal band, hence the noise floor inside the signal band is greatly lowered. To cope with the crosstalk between digital and analog circuits, every analog component is surrounded by a guard ring, which is an innovative attempt. The 18-bit DAC with the above techniques, which is implemented in a 0.18 μm mixed-signal CMOS process, occupies a core area of 1.86 mm2. The measured dynamic range (DR) and peak SNDR are 96 dB and 88 dB, respectively.
An 18-bit high performance audio σ-Δ D/A converter
Energy Technology Data Exchange (ETDEWEB)
Zhang Hao; Han Yan; Han Xiaoxia; Wang Hao; Liang Guo [Institute of Microelectronics and Photoelectronics, Zhejiang University, Hangzhou 310027 (China); Huang Xiaowei [CISD, Institute of Microelectronic CAD, Hangzhou 310018 (China); Cheung, Ray C., E-mail: huangxw@hdu.edu.c [Department of Electronic Engineering, City University of Hong Kong (Hong Kong)
2010-07-15
A multi-bit quantized high performance sigma-delta (σ-Δ) audio DAC is presented. Compared to its single-bit counterpart, the multi-bit quantization offers many advantages, such as simpler σ-Δ modulator circuit, lower clock frequency and smaller spurious tones. With the data weighted average (DWA) mismatch shaping algorithm, element mismatch errors induced by multi-bit quantization can be pushed out of the signal band, hence the noise floor inside the signal band is greatly lowered. To cope with the crosstalk between digital and analog circuits, every analog component is surrounded by a guard ring, which is an innovative attempt. The 18-bit DAC with the above techniques, which is implemented in a 0.18 μm mixed-signal CMOS process, occupies a core area of 1.86 mm². The measured dynamic range (DR) and peak SNDR are 96 dB and 88 dB, respectively.
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes (assumed to be causal in nature) forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a 3rd error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
Directory of Open Access Journals (Sweden)
P.Rajeepriyanka
2014-08-01
A UART (Universal Asynchronous Receiver and Transmitter) is a device allowing the reception and transmission of information in a serial and asynchronous way. This project focuses on the implementation of a UART with a status register using multi-bit flip-flops, and compares it with a UART with a status register using single-bit flip-flops. During the reception of data, the status register indicates parity error, framing error, overrun error and break error; the multi-bit flip-flop is used in this status register. In modern very-large-scale integrated circuits, power and area reduction have become vital design goals for sophisticated design applications. So in this project the power consumed and area occupied by the multi-bit flip-flop and single-bit flip-flop implementations are compared. The underlying idea behind the multi-bit flip-flop method is to reduce the total inverter count by sharing the inverters in the flip-flops. Eliminating redundant inverters when merging single-bit flip-flops into multi-bit flip-flops reduces wire length, and this results in reduced power consumption and area.
PRESAGE: Protecting Structured Address Generation against Soft Errors
Energy Technology Data Exchange (ETDEWEB)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
2016-12-28
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
Bit threads and holographic entanglement
Freedman, Michael
2016-01-01
The Ryu-Takayanagi (RT) formula relates the entanglement entropy of a region in a holographic theory to the area of a corresponding bulk minimal surface. Using the max flow-min cut principle, a theorem from network theory, we rewrite the RT formula in a way that does not make reference to the minimal surface. Instead, we invoke the notion of a "flow", defined as a divergenceless norm-bounded vector field, or equivalently a set of Planck-thickness "bit threads". The entanglement entropy of a boundary region is given by the maximum flux out of it of any flow, or equivalently the maximum number of bit threads that can emanate from it. The threads thus represent entanglement between points on the boundary, and naturally implement the holographic principle. As we explain, this new picture clarifies several conceptual puzzles surrounding the RT formula. We give flow-based proofs of strong subadditivity and related properties; unlike the ones based on minimal surfaces, these proofs correspond in a transparent manner...
Stability of single skyrmionic bits
Vedmedenko, Olena; Hagemeister, Julian; Romming, Niklas; von Bergmann, Kirsten; Wiesendanger, Roland
The switching between topologically distinct skyrmionic and ferromagnetic states has been proposed as a bit operation for information storage. While long lifetimes of the bits are required for data storage devices, the lifetimes of skyrmions have not been addressed so far. Here we show by means of atomistic Monte Carlo simulations that the field-dependent mean lifetimes of the skyrmionic and ferromagnetic states have a high asymmetry with respect to the critical magnetic field, at which these lifetimes are identical. According to our calculations, the main reason for the enhanced stability of skyrmions is a different field dependence of skyrmionic and ferromagnetic activation energies and a lower attempt frequency of skyrmions rather than the height of energy barriers. We use this knowledge to propose a procedure for the determination of effective material parameters and the quantification of the Monte Carlo timescale from the comparison of theoretical and experimental data. Financial support from the DFG in the framework of the SFB668 is acknowledged.
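The lifetime asymmetry argued for above can be illustrated with a rough Arrhenius-law sketch in Python. All barriers, attempt frequencies and field dependences below are invented illustrative values, not the paper's material parameters or its Monte Carlo procedure.

```python
import math

def arrhenius_lifetime(barrier, attempt_freq, kT):
    """Mean lifetime from the Arrhenius law, tau = f0**-1 * exp(dE / kT).
    At equal barriers, a state with a lower attempt frequency (as found
    for skyrmions above) lives longer; different field dependences of the
    two barriers then make the lifetimes asymmetric around the crossing
    field where they coincide."""
    return math.exp(barrier / kT) / attempt_freq

def crossing_field(e_sk, e_fm, f_sk, f_fm, kT, fields):
    """Return the candidate field at which the skyrmion and ferromagnet
    lifetimes are closest, assuming linear field dependence of the two
    activation energies: dE_sk(B) = e_sk[0] - e_sk[1]*B and
    dE_fm(B) = e_fm[0] + e_fm[1]*B (an illustrative assumption)."""
    def log_ratio(B):
        t_sk = arrhenius_lifetime(e_sk[0] - e_sk[1] * B, f_sk, kT)
        t_fm = arrhenius_lifetime(e_fm[0] + e_fm[1] * B, f_fm, kT)
        return abs(math.log(t_sk / t_fm))
    return min(fields, key=log_ratio)
```

With a skyrmion attempt frequency three orders of magnitude below the ferromagnetic one, the skyrmion state outlives the ferromagnetic state by the same factor at equal barrier height, matching the abstract's point that attempt frequency, not barrier height alone, drives the enhanced stability.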
Bit Threads and Holographic Entanglement
Freedman, Michael; Headrick, Matthew
2016-11-01
The Ryu-Takayanagi (RT) formula relates the entanglement entropy of a region in a holographic theory to the area of a corresponding bulk minimal surface. Using the max flow-min cut principle, a theorem from network theory, we rewrite the RT formula in a way that does not make reference to the minimal surface. Instead, we invoke the notion of a "flow", defined as a divergenceless norm-bounded vector field, or equivalently a set of Planck-thickness "bit threads". The entanglement entropy of a boundary region is given by the maximum flux out of it of any flow, or equivalently the maximum number of bit threads that can emanate from it. The threads thus represent entanglement between points on the boundary, and naturally implement the holographic principle. As we explain, this new picture clarifies several conceptual puzzles surrounding the RT formula. We give flow-based proofs of strong subadditivity and related properties; unlike the ones based on minimal surfaces, these proofs correspond in a transparent manner to the properties' information-theoretic meanings. We also briefly discuss certain technical advantages that the flows offer over minimal surfaces. In a mathematical appendix, we review the max flow-min cut theorem on networks and on Riemannian manifolds, and prove in the network case that the set of max flows varies Lipschitz continuously in the network parameters.
A Novel Digital Background Calibration Technique for 16 bit SHA-less Multibit Pipelined ADC
Directory of Open Access Journals (Sweden)
Swina Narula
2016-01-01
In this paper, a 16-bit, 125 MS/s multibit pipelined ADC with digital background calibration is presented. In order to achieve low power, a SHA-less front end is used with multibit stages. The first and second stages are 3.5-bit, the third to seventh stages are 2.5-bit, and the last stage is a 3-bit flash ADC. After bit alignment and truncation of the total 19 bits, 16 bits are used as the final digital output. To precisely remove the linear gain error of the residue amplifier and the capacitor mismatch error, a digital background calibration technique is used, which is a combination of signal-dependent dithering (SDD) and a butterfly shuffler. To improve the settling time of the residue amplifier, a special voltage-separation circuit is used. With the proposed digital background calibration technique, the spurious-free dynamic range (SFDR) has been improved to 97.74 dB @ 30 MHz and 88.9 dB @ 150 MHz, and the signal-to-noise and distortion ratio (SNDR) has been improved to 79.77 dB @ 30 MHz and 73.5 dB @ 150 MHz. The pipelined ADC has been implemented in a 0.18 μm CMOS process with a 1.8 V supply. Total power consumption of the proposed ADC is 300 mW.
Influence of Implementation on the Properties of Pseudorandom Number Generators with a Carry Bit
Vattulainen, I; Saarinen, J J; Ala-Nissilä, T
1993-01-01
We present results of extensive statistical and bit-level tests on three implementations of a pseudorandom number generator algorithm using the lagged Fibonacci method with an occasional addition of an extra bit. The first implementation is the RCARRY generator of James, which uses subtraction. The second is a modified version of it, in which a suggested error in the original implementation has been corrected. The third is our modification of RCARRY such that it utilizes addition of the carry bit. Our results show that there are no significant differences between the performance of these three generators.
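For reference, the subtract-with-carry recurrence behind RCARRY can be sketched in a few lines of Python. The lags 24 and 10 and base 2^24 are the standard RCARRY parameters; this is an illustrative reimplementation, not the Fortran code the paper tests.

```python
class RCarry:
    """Sketch of a subtract-with-carry ("RCARRY"-style) generator:
    x_n = (x_{n-10} - x_{n-24} - c) mod 2**24, with c the borrow bit."""
    R, S, BASE = 24, 10, 1 << 24

    def __init__(self, seed_words):
        assert len(seed_words) == self.R
        assert all(0 <= w < self.BASE for w in seed_words)
        self.state = list(seed_words)  # circular buffer of the last R words
        self.carry = 0
        self.i = 0                     # slot currently holding x_{n-24}

    def next_word(self):
        j = (self.i + self.R - self.S) % self.R    # slot holding x_{n-10}
        delta = self.state[j] - self.state[self.i] - self.carry
        if delta < 0:                              # borrow occurred
            delta += self.BASE
            self.carry = 1
        else:
            self.carry = 0
        self.state[self.i] = delta                 # overwrite oldest word
        self.i = (self.i + 1) % self.R
        return delta                               # uniform word in [0, 2**24)
```

The paper's third variant would add the carry instead of subtracting it; that change is a one-line edit to `delta`.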
Efficient Face Recognition in Video by Bit Planes Slicing
Directory of Open Access Journals (Sweden)
Srinivasa R. Inbathini
2012-01-01
Problem statement: Video-based face recognition must be able to overcome imaging interference such as pose and illumination changes. Approach: A model is designed for face recognition based on video sequences as well as test images. In the training stage, a single frontal image is taken as input to the recognition system. A new virtual image is generated using bit-plane feature fusion to effectively reduce the sensitivity to illumination variance. A self-PCA is performed to get each set of eigenfaces and the projected images. In the recognition stage, an automatic face detection scheme is first applied to the video sequences. Frames are extracted from the video and a virtual frame is created. Each bit plane of the test face is extracted and the feature-fusion face is constructed, followed by projection and reconstruction using each set of the corresponding eigenfaces. Results: The algorithm is compared with the conventional PCA algorithm. The minimum reconstruction error is calculated; if the error is less than a threshold value, the face is recognized from the database. Conclusion: Bit-plane slicing is applied to video-based face recognition. Experimental results show that it is far superior to the conventional method under various pose and illumination conditions.
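The bit-plane decomposition underlying such schemes can be sketched in plain Python. Keeping only the four high-order planes is an illustrative assumption here; the paper fuses planes into its "virtual image" in its own way.

```python
def bit_planes(image, bits=8):
    """Slice a grayscale image (list of rows of 0-255 ints) into `bits`
    binary planes; plane k holds bit k of every pixel."""
    return [[[(p >> k) & 1 for p in row] for row in image]
            for k in range(bits)]

def fuse_high_planes(image, keep=(7, 6, 5, 4)):
    """Rebuild a 'virtual' image from high-order planes only. Low-order
    planes carry most of the illumination noise, so dropping them reduces
    sensitivity to lighting changes (the motivation described above)."""
    return [[sum(((p >> k) & 1) << k for k in keep) for p in row]
            for row in image]
```

For example, a pixel of 129 (binary 10000001) keeps only its bit-7 contribution, becoming 128 in the fused image.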
14-bit pipeline-SAR ADC for image sensor readout circuits
Wang, Gengyun; Peng, Can; Liu, Tianzhao; Ma, Cheng; Ding, Ning; Chang, Yuchun
2015-03-01
A two-stage 14-bit pipeline-SAR analog-to-digital converter for image sensor readout circuits, consisting of a 5.5-bit zero-crossing MDAC and a 9-bit asynchronous SAR ADC, built in a 0.18 μm CMOS process, is described, with low power dissipation as well as small chip area. In this design, we employ comparators instead of a high-gain, high-bandwidth amplifier, so the converter consumes as little as 20 mW of power while achieving a sampling rate of 40 MS/s and 14-bit resolution.
EXACT ERROR PROBABILITY OF ORTHOGONAL SPACE-TIME BLOCK CODES OVER FLAT FADING CHANNELS
Institute of Scientific and Technical Information of China (English)
Xu Feng; Yue Dianwu
2007-01-01
Space-time block coding is a modulation scheme recently proposed for transmit antenna diversity to combat the effects of wireless fading channels. Using the equivalent Single-Input Single-Output (SISO) model, this paper presents closed-form expressions for the exact Symbol Error Rate (SER) and Bit Error Rate (BER) of Orthogonal Space-Time Block Codes (OSTBCs) with M-ary Phase-Shift Keying (MPSK) and M-ary Quadrature Amplitude Modulation (MQAM) over flat uncorrelated Nakagami-m and Ricean fading channels.
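As a concrete special case of such closed forms, the average BER of coherent BPSK on a flat Rayleigh channel has the well-known expression Pb = (1/2)(1 - sqrt(g/(1+g))), which a short Monte Carlo run reproduces. This is a textbook single-antenna case, not the OSTBC expressions derived in the paper.

```python
import math
import random

def bpsk_rayleigh_ber(snr_db):
    """Closed-form average BER of coherent BPSK over flat Rayleigh fading:
    Pb = (1/2) * (1 - sqrt(g / (1 + g))), g = average SNR per bit."""
    g = 10 ** (snr_db / 10)
    return 0.5 * (1 - math.sqrt(g / (1 + g)))

def simulate_ber(snr_db, nbits=200_000, seed=1):
    """Monte Carlo check: Rayleigh amplitude |h| (power ~ Exp(1)), AWGN,
    coherent detection of a transmitted +1 by the sign of the output."""
    rng = random.Random(seed)
    g = 10 ** (snr_db / 10)
    errors = 0
    for _ in range(nbits):
        h = math.sqrt(rng.expovariate(1.0))        # fading amplitude
        n = rng.gauss(0.0, math.sqrt(1 / (2 * g)))  # noise, Eb = 1, N0 = 1/g
        errors += (h + n) < 0                       # error if sign flips
    return errors / nbits
```

At 10 dB average SNR the closed form gives about 2.3e-2, and the simulation lands within Monte Carlo noise of that value.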
DEFF Research Database (Denmark)
Sabra, Jakob Borrits; Andersen, Hans Jørgen
The digital spheres of Information and Communication Technologies (ICT) and Social Network Services (SNS) are influencing 21st-century death. Today the dying and the bereaved attend mourning and remembrance both online and offline. Combined, the cemeteries, web memorials and social network sites… designs'. Urns, coffins, graves, cemeteries, memorials, monuments, websites, applications and software services, whether cut in stone or made of bits, are all influenced by discourses of publics, economics, power, technology and culture. Designers, programmers, stakeholders and potential end-users often… do not recognize the need or potential of working with or using specific 'death-services/products', since they find little or no comfort in contemplating, working or playing around with the concept of death and its life-changing consequences. Especially not while being alive and well…
Bit and Power Loading Approach for Broadband Multi-Antenna OFDM System
DEFF Research Database (Denmark)
Rahman, Muhammad Imadur; Das, Suvra S.; Wang, Yuanye
2007-01-01
In this work, we have studied bit and power allocation strategies for multi-antenna assisted Orthogonal Frequency Division Multiplexing (OFDM) systems and investigated the impact of different rates of bit and power allocations on various multi-antenna diversity schemes. It is observed that, if we… allocations across OFDM sub-channels are required together for efficient exploitation of the wireless channel.
Energy Technology Data Exchange (ETDEWEB)
Wu, Kesheng
2007-08-02
An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. The most commonly used indexes are variants of B-trees, such as the B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with well-known compression methods such as LZ77 and Byte-aligned Bitmap Code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as the B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
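The core of the WAH encoding described above can be sketched as a simplified single-pass encoder. This is for illustration only; FastBit's actual implementation handles counters, decoding and bitwise operations on the compressed form far more carefully.

```python
def wah_compress(bits):
    """Sketch of Word-Aligned Hybrid (WAH) encoding into 32-bit words.
    The bitmap is cut into 31-bit groups. Runs of identical uniform
    groups (all 0s or all 1s) become one "fill" word: MSB set, next bit
    is the fill value, low 30 bits hold the run length in groups. Mixed
    groups are stored verbatim as "literal" words (MSB clear)."""
    W = 31
    groups = [bits[i:i + W] for i in range(0, len(bits), W)]
    if groups and len(groups[-1]) < W:            # zero-pad the tail group
        groups[-1] = groups[-1] + [0] * (W - len(groups[-1]))
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):             # uniform group: emit a fill
            run = 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            words.append((1 << 31) | (g[0] << 30) | run)
            i += run
        else:                                     # mixed group: emit a literal
            value = 0
            for b in g:
                value = (value << 1) | b
            words.append(value)
            i += 1
    return words
```

A bitmap of 62 zeros followed by one mixed group compresses to just two words: one fill word covering two all-zero groups and one literal word, which is where the space savings on sparse bitmaps come from.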
Bit-Based Joint Source-Channel Decoding of Huffman Encoded Markov Multiple Sources
Directory of Open Access Journals (Sweden)
Weiwei Xiang
2010-04-01
Multimedia transmission over time-varying channels such as wireless channels has recently motivated research on joint source-channel techniques. In this paper, we present a method for joint source-channel soft-decision decoding of Huffman-encoded multiple sources. By exploiting the a priori bit probabilities in multiple sources, the decoding performance is greatly improved. Compared with the single-source decoding scheme addressed by Marion Jeanne, the proposed technique is more practical in wideband wireless communications. Simulation results show that our new method obtains substantial improvements with a minor increase in complexity. For two sources, the gain in SNR is around 1.5 dB using convolutional codes when the symbol-error rate (SER) reaches 10^-2, and around 2 dB using Turbo codes.
[Survey in hospitals. Nursing errors, error culture and error management].
Habermann, Monika; Cramer, Henning
2010-09-01
Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.
Soft Error Vulnerability of Iterative Linear Algebra Methods
Energy Technology Data Exchange (ETDEWEB)
Bronevetsky, G; de Supinski, B
2007-12-15
Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
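The flavor of such experiments can be reproduced at toy scale: inject a single IEEE-754 bit flip into a Jacobi iteration and observe that an early low-order flip is absorbed by convergence while a late high-order flip silently corrupts the result. This is a minimal sketch of the idea, not the study's fault-injection framework or its solvers.

```python
import struct

def flip_bit(x, bit):
    """Flip one bit of a float64's IEEE-754 representation (soft-error model)."""
    (u,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", u ^ (1 << bit)))
    return y

def jacobi(A, b, iters=100, flip_at=None):
    """Plain Jacobi iteration on a small diagonally dominant system.
    flip_at = (iteration, component, bit) optionally injects one bit flip
    into the iterate just before that iteration's update."""
    n = len(b)
    x = [0.0] * n
    for k in range(iters):
        if flip_at and k == flip_at[0]:
            x[flip_at[1]] = flip_bit(x[flip_at[1]], flip_at[2])
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

A flip of a low mantissa bit early on leaves the converged answer untouched (a benign fault), while flipping the top exponent bit on the final iteration leaves an astronomically wrong component, the kind of outcome that a simple residual check at the end would flag but intermediate convergence monitoring might miss.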
Bustamante, Dulce M; Lord, Cynthia C
2010-06-01
Infection rate is an estimate of the prevalence of arbovirus infection in a mosquito population. It is assumed that when infection rate increases, the risk of arbovirus transmission to humans and animals also increases. We examined some of the factors that can invalidate this assumption. First, we used a model to illustrate how the proportion of mosquitoes capable of virus transmission, or infectious, is not a constant fraction of the number of infected mosquitoes. Thus, infection rate is not always a straightforward indicator of risk. Second, we used a model that simulated the process of mosquito sampling, pooling, and virus testing and found that mosquito infection rates commonly underestimate the prevalence of arbovirus infection in a mosquito population. Infection rate should always be used in conjunction with other surveillance indicators (mosquito population size, age structure, weather) and historical baseline data when assessing the risk of arbovirus transmission.
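The pooling bias described above can be reproduced with a small calculation (a generic sketch; the pool size, prevalence, and perfect-test assumption are illustrative, not from the paper): the naive per-mosquito "minimum infection rate" underestimates true prevalence, while the pooled maximum-likelihood estimate recovers it when tests are perfect.

```python
def expected_positive_pool_fraction(p, m):
    """Probability that a pool of m mosquitoes tests positive at prevalence p."""
    return 1.0 - (1.0 - p) ** m

def minimum_infection_rate(pos_fraction, m):
    """MIR: positive pools divided by total mosquitoes tested."""
    return pos_fraction / m

def mle_infection_rate(pos_fraction, m):
    """Pooled MLE: solves 1 - (1 - p)^m = pos_fraction for p."""
    return 1.0 - (1.0 - pos_fraction) ** (1.0 / m)

p_true, m = 0.05, 50                                 # 5% prevalence, pools of 50
frac = expected_positive_pool_fraction(p_true, m)    # ~0.92 of pools positive
mir = minimum_infection_rate(frac, m)                # underestimates p_true
mle = mle_infection_rate(frac, m)                    # recovers p_true exactly
```

With real (finite, noisy) data the MLE is itself biased, which is one reason the abstract cautions against reading infection rate as a direct risk indicator.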
Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments
Soury, Hamza
2013-07-01
This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.
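As context for the generalized results, the Gaussian-noise, non-fading special case has simple closed forms. The sketch below uses the standard textbook AWGN expressions (not the letter's Fox H-function results) for exact M-PAM and square M-QAM symbol error rates.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_mpam(M, es_n0):
    """Exact SER of M-PAM in AWGN: 2(M-1)/M * Q(sqrt(6*Es/N0/(M^2-1)))."""
    return 2.0 * (M - 1) / M * Q(math.sqrt(6.0 * es_n0 / (M * M - 1)))

def ser_mqam(M, es_n0):
    """Exact SER of square M-QAM in AWGN, built from two orthogonal PAMs."""
    rootM = int(math.isqrt(M))
    p = 2.0 * (1 - 1 / rootM) * Q(math.sqrt(3.0 * es_n0 / (M - 1)))
    return 1.0 - (1.0 - p) ** 2
```

For M = 2 the PAM formula reduces to the BPSK result Q(sqrt(2*Es/N0)), and 4-QAM reduces to the QPSK expression 2Q - Q^2, which are convenient sanity checks.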
Purpose-built PDC bit successfully drills 7-in liner equipment and formation: An integrated solution
Energy Technology Data Exchange (ETDEWEB)
Puennel, J.G.A.; Huppertz, A.; Huizing, J. [and others]
1996-12-31
Historically, drilling out the 7-in. liner equipment has been a time-consuming operation with a limited success ratio. The success of the operation is highly dependent on the type of drill bit employed. Tungsten carbide mills and mill-tooth rock bits required 7.5 and 11.5 hours, respectively, to drill the pack-off bushings, landing collar, shoe track and shoe. Rates of penetration dropped dramatically when drilling the float equipment. While conventional PDC bits have drilled the liner equipment successfully (averaging 9.7 hours), severe bit damage invariably prevented them from continuing to drill the formation at cost-effective penetration rates. This paper describes the integrated development and application of an IADC M433 Class PDC bit, which was designed specifically to drill out the 7-in. liner equipment and continue drilling the formation at satisfactory penetration rates. The development was the result of a joint investigation in which the operator and bit/liner manufacturers shared their expertise in solving a drilling problem. The heavy-set bit was developed following drill-off tests conducted to investigate the drillability of the 7-in. liner equipment. Key features of the new bit and its application onshore the Netherlands are presented and analyzed.
Energy Technology Data Exchange (ETDEWEB)
Croft, Stephen [Oak Ridge National Laboratory (ORNL), One Bethel Valley Road, Oak Ridge, TN (United States); Burr, Tom [International Atomic Energy Agency (IAEA), Vienna (Austria); Favalli, Andrea [Los Alamos National Laboratory (LANL), MS E540, Los Alamos, NM 87545 (United States); Nicholson, Andrew [Oak Ridge National Laboratory (ORNL), One Bethel Valley Road, Oak Ridge, TN (United States)
2016-03-01
The declared linear density of {sup 238}U and {sup 235}U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of {sup 235}U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
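The data transformation the authors mention can be illustrated on a toy calibration (all numbers hypothetical): a Padé-type response R = a·d/(1 + b·d) becomes linear in d after the transformation d/R = 1/a + (b/a)·d, so ordinary least squares recovers the parameters exactly from noise-free data. The paper's point is precisely that this algebraic equivalence degrades once the predictor (the measured coincidence rate) carries significant error.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for ys ~ c0 + c1*xs."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c0 = (sy - c1 * sx) / n
    return c0, c1

# Hypothetical Pade calibration: rate R = a*d / (1 + b*d), d = linear density.
a_true, b_true = 120.0, 0.015
densities = [50.0, 100.0, 150.0, 200.0, 250.0]
rates = [a_true * d / (1.0 + b_true * d) for d in densities]

# Transform d/R = 1/a + (b/a)*d and fit linearly (exact for noise-free data).
c0, c1 = fit_linear(densities, [d / r for d, r in zip(densities, rates)])
a_hat, b_hat = 1.0 / c0, c1 / c0
```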
A family of compatible single- and multimicroprocessor systems with 8-bit and 16-bit Microprocessors
Energy Technology Data Exchange (ETDEWEB)
Brzezinski, J.; Cellary, W.; Kreglewski, J.
1984-10-01
In this paper, a multimicroprocessor system for 8-bit and 16-bit microprocessors is presented. The main design assumptions of the presented system are discussed. Different single- and multimicroprocessor structures with 8-bit microprocessors are outlined. A detailed description of two single-board microcomputers is given, along with system aspects of the different solutions. Finally, an intelligent floppy disk controller is described.
A Novel Error Correcting System Based on Product Codes for Future Magnetic Recording Channels
Van, Vo Tam
2012-01-01
We propose a novel construction of product codes for high-density magnetic recording based on binary low-density parity-check (LDPC) codes and the binary image of Reed-Solomon (RS) codes. Moreover, two novel algorithms are proposed to decode the codes in the presence of both AWGN errors and scattered hard errors (SHEs). Simulation results show that at a bit error rate (BER) of approximately 10^-8, our method improves the error performance by approximately 1.9 dB compared with a hard-decision decoder of RS codes of the same length and code rate. For the mixed error channel including random noise and SHEs, the signal-to-noise ratio (SNR) is set at 5 dB and 150 to 400 SHEs are randomly generated. The bit error performance of the proposed product code shows a significant improvement over that of equivalent random LDPC codes or a serial concatenation of LDPC and RS codes.
Dick, Josef
2010-01-01
We study numerical approximations of integrals $\\int_{[0,1]^s} f(\\bsx) \\,\\mathrm{d} \\bsx$ by averaging the function at some sampling points. Monte Carlo (MC) sampling yields a convergence of the root mean square error (RMSE) of order $N^{-1/2}$ (where $N$ is the number of samples). Quasi-Monte Carlo (QMC) sampling on the other hand achieves a convergence of order $N^{-1+\\varepsilon}$, for any $\\varepsilon >0$. Randomized QMC (RQMC), a combination of MC and QMC, achieves an RMSE of order $N^{-3/2+\\varepsilon}$. A combination of RQMC with local antithetic sampling achieves a convergence of the RMSE of order $N^{-3/2-1/s+\\varepsilon}$ (where $s \\ge 1$ is the dimension). QMC, RQMC and RQMC with local antithetic sampling require that the integrand has some smoothness (for instance, bounded variation). Stronger smoothness assumptions on the integrand do not improve the convergence of the above algorithms further. This paper introduces a new RQMC algorithm, for which we prove that it achieves a convergence of the RMS...
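The rate separation described above can be observed numerically in one dimension. The sketch below is an illustrative stand-in (not the paper's algorithm): it compares plain MC against jittered stratified sampling, a simple randomized variance-reduction scheme whose RMSE for smooth integrands improves from order N^(-1/2) to N^(-3/2).

```python
import random

EXACT = 1.0 / 3.0   # integral of x^2 over [0, 1]

def mc_rmse(f, n, reps, seed=1):
    """RMSE of plain Monte Carlo over `reps` independent runs."""
    rng = random.Random(seed)
    errs = []
    for _ in range(reps):
        est = sum(f(rng.random()) for _ in range(n)) / n
        errs.append((est - EXACT) ** 2)
    return (sum(errs) / reps) ** 0.5

def stratified_rmse(f, n, reps, seed=1):
    """RMSE of jittered stratified sampling: one random point per stratum."""
    rng = random.Random(seed)
    errs = []
    for _ in range(reps):
        est = sum(f((i + rng.random()) / n) for i in range(n)) / n
        errs.append((est - EXACT) ** 2)
    return (sum(errs) / reps) ** 0.5

f = lambda x: x * x
plain = mc_rmse(f, 64, 200)          # O(N^-1/2) behaviour
strat = stratified_rmse(f, 64, 200)  # O(N^-3/2) for smooth integrands
```

At N = 64 the stratified estimator is already more than an order of magnitude more accurate, mirroring the MC-versus-RQMC gap the abstract describes.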
Energy Technology Data Exchange (ETDEWEB)
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14
Studies of inclusive direct-{gamma} production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between {radical}s = 50 and 500 GeV. Also, rates were computed for direct-{gamma} + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
Borot, Maxence; Denis de Senneville, B; Maenhout, M; Hautvast, G; Binnekamp, D; Lagendijk, J J W; van Vulpen, M; Moerland, M A
2016-01-01
The development of magnetic resonance (MR) guided high dose rate (HDR) brachytherapy for prostate cancer has gained increasing interest for delivering a high tumor dose safely in a single fraction. To support needle placement in the limited workspace inside the closed-bore MRI, a single-needle MR-co
PERFORMANCE OF THE ZERO FORCING PRECODING MIMO BROADCAST SYSTEMS WITH CHANNEL ESTIMATION ERRORS
Institute of Scientific and Technical Information of China (English)
Wang Jing; Liu Zhanli; Wang Yan; You Xiaohu
2007-01-01
In this paper, the effect of channel estimation errors on Zero Forcing (ZF) precoding Multiple Input Multiple Output Broadcast (MIMO BC) systems is studied. Based on two kinds of Gaussian estimation error models, the performance analysis is conducted under different power allocation strategies. Analysis and simulation show that if the covariance of the channel estimation errors is independent of the received Signal to Noise Ratio (SNR), imperfect channel knowledge severely degrades the sum capacity and the Bit Error Rate (BER) performance. However, with orthogonal training and Minimum Mean Square Error (MMSE) channel estimation, the sum capacity and BER performance are consistent with those under perfect Channel State Information (CSI), with only a limited performance degradation.
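The mechanism behind the BER degradation can be sketched in a toy 2x2 system (all channel and error values hypothetical): ZF precoding inverts the *estimated* channel, so any estimation error leaves residual inter-user interference after the true channel is applied.

```python
def inv2(H):
    """Inverse of a 2x2 complex matrix via the adjugate formula."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1.0 + 0.5j, 0.3 - 0.2j], [0.2 + 0.1j, 0.9 - 0.4j]]   # true channel
E = [[0.05j, -0.03], [0.02, 0.04j]]                         # estimation error
H_est = [[H[i][j] + E[i][j] for j in range(2)] for i in range(2)]

perfect = matmul2(H, inv2(H))         # identity: no inter-user interference
mismatched = matmul2(H, inv2(H_est))  # off-diagonal leakage -> BER floor

leak = abs(mismatched[0][1]) + abs(mismatched[1][0])
```

With perfect CSI the effective channel is diagonal; with the mismatched inverse, the off-diagonal leakage acts as uncancelled interference, which is the source of the error floor when the error covariance does not shrink with SNR.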
Directory of Open Access Journals (Sweden)
Mohammad Ali Nematollahi
2017-01-01
There are various techniques for speech watermarking based on modifying the linear prediction coefficients (LPCs); however, the estimated and modified LPCs vary from each other even without attacks. Because line spectral frequency (LSF) has less sensitivity to watermarking than LPC, watermark bits are embedded into the maximum number of LSFs by applying the least-significant-bit replacement (LSBR) method. To reduce the differences between estimated and modified LPCs, a checking loop is added to minimize the watermark extraction error. Experimental results show that the proposed semifragile speech watermarking method can provide high imperceptibility and that any manipulation of the watermarked signal destroys the watermark bits, since manipulation changes them to a random stream of bits.
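The LSB replacement step itself is generic. A minimal sketch on integer-quantized values (illustrative data, not the paper's LSF pipeline) shows embedding and extraction:

```python
def embed_lsb(samples, bits):
    """Replace the least significant bit of each sample with a watermark bit."""
    out = list(samples)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(samples, n):
    """Read back the first n least significant bits."""
    return [samples[i] & 1 for i in range(n)]

cover = [103, 58, 240, 17, 96, 201, 44, 129]   # e.g. quantized coefficients
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, mark)
```

Each sample changes by at most 1, which is why LSBR is imperceptible but also fragile: any requantization scrambles the recovered bits, the property the paper exploits for semifragile authentication.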
Estimating Hardness from the USDC Tool-Bit Temperature Rise
Bar-Cohen, Yoseph; Sherrit, Stewart
2008-01-01
A method of real-time quantification of the hardness of a rock or similar material involves measurement of the temperature, as a function of time, of the tool bit of an ultrasonic/sonic driller/corer (USDC) that is being used to drill into the material. The method is based on the idea that, other things being about equal, the rate of rise of temperature and the maximum temperature reached during drilling increase with the hardness of the drilled material. In this method, the temperature is measured by means of a thermocouple embedded in the USDC tool bit near the drilling tip. The hardness of the drilled material can then be determined through correlation of the temperature-rise-versus-time data with time-dependent temperature rises determined in finite-element simulations of, and/or experiments on, drilling at various known rates of advance or known power levels through materials of known hardness. The figure presents an example of empirical temperature-versus-time data for a particular 3.6-mm USDC bit, driven at an average power somewhat below 40 W, drilling through materials of various hardness levels. The temperature readings from within a USDC tool bit can also be used for purposes other than estimating the hardness of the drilled material. For example, they can be especially useful as feedback to control the driving power to prevent thermal damage to the drilled material, the drill bit, or both. In the case of drilling through ice, the temperature readings could be used as a guide to maintaining sufficient drive power to prevent jamming of the drill by preventing refreezing of melted ice in contact with the drill.
Directory of Open Access Journals (Sweden)
Juan Mario Torres Nova
2010-05-01
Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes which are frequently used in radio communication systems; however, there is interdependence in the use of their benefits (spectral efficiency, low bit error rate, low inter-symbol interference, etc.). Optimising one parameter creates problems for another; for example, the GMSK scheme succeeds in reducing bandwidth by introducing a Gaussian filter into an MSK (minimum shift keying) modulator, in exchange for increased inter-symbol interference in the system. The DBPSK scheme leads to lower error probability while occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver's bit delay when recovering a signal.
Lee, Jaeyoon; Yoon, Dongweon; Park, Sang Kyu
Recently, we provided closed-form expressions involving two-dimensional (2-D) joint Gaussian Q-function for the symbol error rate (SER) and bit error rate (BER) of an arbitrary 2-D signal with I/Q unbalances over an additive white Gaussian noise (AWGN) channel [1]. In this letter, we extend the expressions to Nakagami-m fading channels. Using Craig representation of the 2-D joint Gaussian Q-function, we derive an exact and general expression for the error probabilities of arbitrary 2-D signaling with I/Q phase and amplitude unbalances over Nakagami-m fading channels.
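The Craig representation used in the letter expresses the Gaussian Q-function as a finite-interval integral, which is what makes averaging over fading tractable. A numerical sketch of the one-dimensional case (the letter works with the two-dimensional joint version) checks it against the erfc form:

```python
import math

def q_craig(x, n=2000):
    """Q(x) via Craig's representation, (1/pi) * int_0^{pi/2} exp(-x^2/(2 sin^2 t)) dt,
    evaluated with the composite midpoint rule (valid for x >= 0)."""
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-x * x / (2.0 * math.sin(t) ** 2))
    return total * h / math.pi

def q_erfc(x):
    """Reference value: Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

The finite limits are the advantage: replacing x^2 by an SNR-dependent argument and integrating the fading pdf inside the same bounded integral is how closed forms over Nakagami-m channels are typically organized.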
PERBANDINGAN APLIKASI MENGGUNAKAN METODE CAMELLIA 128 BIT KEY DAN 256 BIT KEY
Directory of Open Access Journals (Sweden)
Lanny Sutanto
2014-01-01
The rapid development of the Internet has made it easy to exchange data, which leads to a high risk of data piracy. One way to secure data is to use the Camellia cipher, which is known for fast encryption and decryption and supports three key sizes: 128 bits, 192 bits, and 256 bits. This application was created using the C++ programming language with a Visual Studio 2010 GUI. This research compares the smallest and largest key sizes on files with the extensions .txt, .doc, .docx, .jpg, .mp4, .mkv and .flv. The application was made to compare the time and level of security when using 128-bit and 256-bit keys. The security comparison is done by comparing the avalanche-effect values obtained with the 128-bit and 256-bit keys.
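The avalanche effect can be measured by flipping each input bit in turn and counting how many output bits change. The sketch below uses SHA-256 as a stand-in primitive so it runs with the standard library alone; with the third-party `cryptography` package, the `cipher` callable could instead be a keyed Camellia encryptor (an assumption for illustration, not the paper's C++ implementation):

```python
import hashlib

def avalanche_fraction(cipher, block):
    """Average fraction of output bits flipped per single-bit input flip."""
    base = cipher(block)
    nbits_out = len(base) * 8
    total = 0.0
    for byte in range(len(block)):
        for bit in range(8):
            flipped = bytearray(block)
            flipped[byte] ^= 1 << bit
            diff = cipher(bytes(flipped))
            hamming = sum(bin(a ^ b).count("1") for a, b in zip(base, diff))
            total += hamming / nbits_out
    return total / (len(block) * 8)

# Stand-in primitive (SHA-256) so the sketch is self-contained and keyless.
cipher = lambda b: hashlib.sha256(b).digest()
frac = avalanche_fraction(cipher, b"0123456789abcdef")   # 16-byte input block
```

A strong primitive should score near 0.5, i.e. each input-bit flip changes about half the output bits; comparing this score across key sizes is the kind of measurement the paper reports.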
LENUS (Irish Health Repository)
Chadwick, Liam
2012-03-12
Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care. A number of deficiencies have been identified in the method. A new method called Systems and Error Analysis Bundle for Health Care (SEABH) was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.
Analysis of Errors of Deep Space X-Band Range-Rate Measurement
Institute of Scientific and Technical Information of China (English)
樊敏; 王宏; 李海涛; 赵华
2013-01-01
X-band is the primary frequency band used by deep space TT&C (Tracking, Telemetry and Command) systems, and X-band range-rate measurement is more accurate than that of S-band, as validated in the X-band deep space TT&C system experiments of the Chang'E-2 spacecraft; the range-rate measurement precision is about 1 mm/s. For X-band range rate, the theoretical error caused by the approximate Doppler-effect calculation formula is analyzed. This error can reach 1 cm/s during the translunar and lunar-orbiting phases. Furthermore, the measurement residual error is analyzed against the precise post-fit orbit determination for the X-band deep space TT&C system experiment of the Chang'E-2 spacecraft. The results show that the range-rate residuals computed with the approximate formula increase by 1 mm/s compared with those from the exact formula, which is comparable to the X-band measurement precision itself. Therefore, the approximate Doppler calculation is no longer applicable at X-band, and the exact formula should be used in future lunar and deep space exploration projects.
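The size of the error from the first-order Doppler formula can be checked directly (a sketch with illustrative X-band numbers, not the mission's processing chain): the approximation v ≈ c·Δf/f differs from the exact relativistic one-way formula by about v²/(2c), which reaches the centimeter-per-second level at translunar range rates.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def received_freq(ft, v):
    """Exact one-way relativistic Doppler; v > 0 means receding."""
    beta = v / C
    return ft * math.sqrt((1.0 - beta) / (1.0 + beta))

def approx_range_rate(ft, fr):
    """First-order approximation v ~ c * (ft - fr) / ft."""
    return C * (ft - fr) / ft

ft = 8.4e9            # illustrative X-band downlink frequency, Hz
v = 2400.0            # illustrative translunar range rate, m/s
fr = received_freq(ft, v)
v_hat = approx_range_rate(ft, fr)
bias = v - v_hat      # ~ v^2 / (2c): about 1 cm/s at 2.4 km/s
```

At 2.4 km/s the bias is roughly 9.6 mm/s, an order of magnitude above the 1 mm/s measurement precision quoted in the abstract, which is why the exact formula is required.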
Steganography forensics method for detecting least significant bit replacement attack
Wang, Xiaofeng; Wei, Chengcheng; Han, Xiao
2015-01-01
We present an image forensics method to detect least significant bit replacement steganography attack. The proposed method provides fine-grained forensics features by using the hierarchical structure that combines pixels correlation and bit-planes correlation. This is achieved via bit-plane decomposition and difference matrices between the least significant bit-plane and each one of the others. Generated forensics features provide the susceptibility (changeability) that will be drastically altered when the cover image is embedded with data to form a stego image. We developed a statistical model based on the forensics features and used least square support vector machine as a classifier to distinguish stego images from cover images. Experimental results show that the proposed method provides the following advantages. (1) The detection rate is noticeably higher than that of some existing methods. (2) It has the expected stability. (3) It is robust for content-preserving manipulations, such as JPEG compression, adding noise, filtering, etc. (4) The proposed method provides satisfactory generalization capability.
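The hierarchical features rest on bit-plane decomposition and difference matrices between the least significant bit-plane and each of the others. A minimal sketch on a 3x3 toy image (illustrative values, not the paper's full feature model):

```python
def bit_plane(img, k):
    """Extract bit-plane k (0 = LSB) from a grayscale image given as rows."""
    return [[(px >> k) & 1 for px in row] for row in img]

def plane_difference(p, q):
    """Element-wise (XOR) difference matrix between two bit-planes."""
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(p, q)]

img = [[103, 58, 240], [17, 96, 201], [44, 129, 7]]
lsb = bit_plane(img, 0)
diffs = [plane_difference(lsb, bit_plane(img, k)) for k in range(1, 8)]
```

LSB replacement rewrites `lsb` independently of the higher planes, disturbing the statistics of these difference matrices, which is the susceptibility the detector's features are built to capture.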
Energy Technology Data Exchange (ETDEWEB)
Noyes, H.P.
1990-01-29
We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc{sup 2} in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free particle wave functions taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are "born collapsed". A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc{sup 2} our wave functions can be approximated by solutions of the free particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound and resonant state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G{sub {pi}N}{sup 2}){sup 2} = (2m{sub N}/m{sub {pi}}){sup 2} {minus} 1. 21 refs., 1 fig.
Directory of Open Access Journals (Sweden)
Arthur W Pightling
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: (i) depth of sequencing coverage, (ii) choice of reference-guided short-read sequence assembler, (iii) choice of reference genome, and (iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers
Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem
Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad
2013-12-01
In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where the source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are flipped versions of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. The joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, so long as the correlation ρ of the complex fading variation satisfies |ρ| < 1, the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved
A holistic approach to bit preservation
DEFF Research Database (Denmark)
Zierau, Eld
2012-01-01
Purpose: The purpose of this paper is to point out the importance of taking a holistic approach to bit preservation when setting out to find an optimal bit preservation solution for specific digital materials. In the last decade there has been an increasing awareness that bit preservation, which...... preservation strategies as well as pointing to how such strategies can be evaluated. Research limitations/implications The operational results described here are still missing work to be fully operational. However, the holistic approach is in itself an important result. Furthermore, in spite...
Noise, errors and information in quantum amplification
D'Ariano, G M; Maccone, L
1997-01-01
We analyze and compare the characterization of a quantum device in terms of noise, transmitted bit-error rate (BER) and mutual information, showing how the noise description is meaningful only for Gaussian channels. After reviewing the description of a quantum communication channel, we study the insertion of an amplifier. We focus attention on the case of direct detection, where the linear amplifier has a 3-decibel noise figure, which is usually considered an unsurpassable limit, referred to as the standard quantum limit (SQL). Both noise and BER could be reduced using an ideal amplifier, which is feasible in principle. However, a reduction of noise beyond the SQL does not generally correspond to an improvement of the BER or of the mutual information. This is the case of a laser amplifier, where saturation can greatly reduce the noise figure, although there is no corresponding improvement of the BER. Such a mechanism is illustrated on the basis of Monte Carlo simulations.
Soury, Hamza
2017-03-14
This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have non-line of sight (NLOS) interfering link. Consequently, we study the interferer limited problem that appears between NLOS HD users-pair that are scheduled on the same FD channel. The distribution of the interference is first characterized via its distribution function, which is derived in closed form. Then, a comprehensive performance assessment for the proposed pairing scheme is provided by assuming Extended Generalized-
Gardes, B.; Chabaud, P.-Y.; Guterman, P.
2012-09-01
In the CoRoT exoplanet field of view, photometric measurements are obtained by aperture integration using a generic collection of masks. The total flux held within the photometric mask may be split in two parts, the target flux itself and the flux due to the nearest neighbours considered as contaminants. So far ExoDat (http://cesam.oamp.fr/exodat) gives a rough estimate of the contamination rate for all potential exoplanet targets (level-0) based on generic PSF shapes built before CoRoT launch. Here, we present the updated estimate of the contamination rate (level-1) with its associated error. This estimate is done for each target observed by CoRoT in the exoplanet channel using a new catalog of PSF built from the first available flight images and taking into account the line of sight of the satellite (i.e. the satellite orientation).
Conversion of an 8-bit to a 16-bit Soft-core RISC Processor
Directory of Open Access Journals (Sweden)
Ahmad Jamal Salim
2013-03-01
The demand for 8-bit processors remains strong despite manufacturers' efforts to push higher-end microcontroller solutions to the mass market. A low-end processor offers a simple, low-cost and fast solution, especially for I/O application development in embedded systems. However, due to architectural constraints, complex calculations cannot be performed efficiently on an 8-bit processor. This paper presents a method for converting an 8-bit Reduced Instruction Set Computer (RISC) processor to a 16-bit one on a soft-core reconfigurable platform, in order to extend its capability to handle larger data sets and thus enable intensive calculations. While the conversion expands the data bus width to 16 bits, it maintains the simple architecture design of an 8-bit processor. The expansion also provides more room for improving the processor's performance. The modified architecture is successfully simulated in CPUSim together with its new instruction set architecture (ISA). A Xilinx Virtex-6 platform is utilized to execute and verify the architecture. Results show that the modified 16-bit RISC architecture requires only 17% more register slices in the Field Programmable Gate Array (FPGA) implementation, a slight increase compared to the original 8-bit RISC architecture. A test program containing instructions that handle 16-bit data is also simulated and verified. As the 16-bit architecture is described as a soft core, further modifications could be performed to customize the architecture for specific applications.
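The architectural constraint mentioned above is easy to see in the arithmetic: an 8-bit ALU must chain two add-with-carry operations to perform one 16-bit addition (a general illustration of the ADD/ADC pattern, not this paper's specific ISA).

```python
def add8(a, b, carry_in=0):
    """8-bit adder with carry, as an 8-bit ALU would perform it."""
    total = (a & 0xFF) + (b & 0xFF) + carry_in
    return total & 0xFF, total >> 8          # (result, carry_out)

def add16_on_8bit(x, y):
    """16-bit addition as two chained 8-bit operations (ADD low, then ADC high)."""
    lo, carry = add8(x & 0xFF, y & 0xFF)
    hi, _ = add8(x >> 8, y >> 8, carry)
    return (hi << 8) | lo
```

A native 16-bit datapath performs this in one instruction, which is the efficiency gain the conversion targets.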
Secure Classical Bit Commitment using Fixed Capacity Communication Channels
Kent, Adrian
1999-01-01
If mutually mistrustful parties A and B control two or more appropriately located sites, special relativity can be used to guarantee that a pair of messages exchanged by A and B are independent. In earlier work, we used this fact to define a relativistic bit commitment protocol, RBC1, in which security is maintained by exchanging a sequence of messages whose transmission rate increases exponentially in time. We define here a new relativistic protocol, RBC2, which requires only a constant tran...
Directory of Open Access Journals (Sweden)
Yongsoon Lee
2009-01-01
This paper implements a field-programmable gate array (FPGA)-based face detector using a neural network (NN) and bit-width reduced floating-point arithmetic units (FPUs). An analytical error model, using the maximum relative representation error (MRRE) and the average relative representation error (ARRE), is developed to obtain the maximum and average output errors for the bit-width reduced FPUs. After the development of the analytical error model, the bit-width reduced FPUs and an NN are designed using MATLAB and VHDL. Finally, the analytical (MATLAB) results are compared with the experimental (VHDL) results; the two show conformity of shape. We demonstrate that incremental reductions in the number of bits used can produce significant cost reductions, including area, speed, and power.
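The MRRE for an m-bit mantissa can be checked numerically (a generic sketch, not the paper's VHDL error model): rounding the fraction returned by `math.frexp` to m bits bounds the relative representation error by 2^-m.

```python
import math

def quantize_mantissa(x, m):
    """Round x to a floating-point value with an m-bit fractional mantissa."""
    if x == 0.0:
        return 0.0
    frac, exp = math.frexp(x)          # x = frac * 2**exp, 0.5 <= |frac| < 1
    return math.ldexp(round(frac * 2 ** m) / 2 ** m, exp)

def max_relative_error(m, samples):
    """Worst observed relative representation error over the samples."""
    return max(abs(quantize_mantissa(x, m) - x) / abs(x) for x in samples)

# MRRE bound: half an ulp (2**-(m+1)) relative to |frac| >= 0.5 gives 2**-m.
samples = [0.1 + 0.01 * i for i in range(1, 200)]
mrre_bound = 2.0 ** -8
observed = max_relative_error(8, samples)
```

Halving the mantissa width doubles this bound, which is how an analytical model can propagate per-operation representation error to a bound on the NN's output error.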
Binary Error Correcting Network Codes
Wang, Qiwen; Li, Shuo-Yen Robert
2011-01-01
We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.
Medium-rate speech coding simulator for mobile satellite systems
Copperi, Maurizio; Perosino, F.; Rusina, F.; Albertengo, G.; Biglieri, E.
1986-01-01
Channel modeling and error protection schemes for speech coding are described. A residual-excited linear predictive (RELP) coder for bit rates of 4.8, 7.2, and 9.6 kbit/s is outlined. The coder at 9.6 kbit/s incorporates a number of channel error protection techniques, such as bit interleaving, error correction codes, and parameter repetition. Results of formal subjective experiments (DRT and DAM tests) under various channel conditions reveal that the proposed coder outperforms conventional LPC-10 vocoders by two subjective categories, confirming the suitability of the RELP coder at 9.6 kbit/s for good-quality speech transmission in mobile satellite systems.
An improved adaptive bit allocation algorithm for OFDM system%一种改进的OFDM系统自适应比特分配算法
Institute of Scientific and Technical Information of China (English)
魏巍; 安文东
2014-01-01
An adaptive bit allocation algorithm based on the Hughes-Hartogs algorithm is proposed in this paper to remedy a shortcoming of the greedy algorithm, namely its large number of iterations. Under constraints on the bit error rate and the total number of transmitted bits, the improved algorithm first uses the Chow algorithm to allocate a portion of the bits, and then uses the greedy algorithm to allocate the remaining bits to each subcarrier. When minimizing the total power with this algorithm, the number of iterations is significantly smaller than with the greedy algorithm. Computer simulation results show that, for a fixed transmission rate, the number of iterations of the improved algorithm is 7.4%–34% of that of the greedy algorithm, while its performance is very close to that of the greedy algorithm.
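The two-stage allocation described in this abstract (a Chow-style closed-form initialization followed by greedy refinement of the residual bits) can be sketched as follows. The SNR-gap value `gamma`, the incremental-power formula, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import math

def improved_bit_loading(gains, total_bits, gamma=9.8):
    """Two-stage bit loading (illustrative): a Chow-style closed-form
    initial allocation from the gap-approximation formula, then
    Hughes-Hartogs-style greedy allocation of the residual bits.
    `gains` are per-subcarrier SNRs; `gamma` is the SNR gap."""
    n = len(gains)
    # Stage 1: closed-form initial allocation, b_i = floor(log2(1 + SNR_i / gamma))
    bits = [int(math.log2(1 + g / gamma)) for g in gains]
    # If stage 1 overshoots the target, remove bits where removal saves the most power
    while sum(bits) > total_bits:
        k = max(range(n),
                key=lambda i: gamma * 2 ** (bits[i] - 1) / gains[i] if bits[i] else -1.0)
        bits[k] -= 1
    # Stage 2: greedy; each residual bit goes to the subcarrier whose
    # incremental power cost, dP = gamma * 2**b_i / gain_i, is smallest
    for _ in range(total_bits - sum(bits)):
        k = min(range(n), key=lambda i: gamma * 2 ** bits[i] / gains[i])
        bits[k] += 1
    return bits
```

Because stage 1 places most of the bits in closed form, the greedy loop runs only for the residual bits, which is the source of the iteration savings the abstract reports.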
A single-ended 10-bit 200 kS/s 607 μW SAR ADC with an auto-zeroing offset cancellation technique
Weiru, Gu; Yimin, Wu; Fan, Ye; Junyan, Ren
2015-10-01
This paper presents a single-ended 8-channel 10-bit 200 kS/s 607 μW synchronous successive approximation register (SAR) analog-to-digital converter (ADC) in HLMC 55 nm low leakage (LL) CMOS technology with a 3.3 V/1.2 V supply voltage. In conventional binary-encoded SAR ADCs the total capacitance grows exponentially with resolution. In this paper a CR hybrid DAC is adopted to reduce both capacitance and core area: the capacitor array resolves 4 bits and the other 6 bits are resolved by the resistor array. The 10-bit data is acquired by thermometer encoding to reduce the probability of the DNL errors typically present in binary-weighted architectures. An auto-zeroing offset cancellation technique reduces the offset to 0.286 mV. The prototype 10-bit SAR ADC was fabricated in HLMC 55 nm CMOS technology with a core area of 167 × 87 μm². It achieves a sampling rate of 200 kS/s and a low power dissipation of 607 μW, operating at a 3.3 V analog supply voltage and a 1.2 V digital supply voltage. At an input frequency of 10 kHz the signal-to-noise-and-distortion ratio (SNDR) is 60.1 dB and the spurious-free dynamic range (SFDR) is 68.1 dB. The measured DNL is +0.37/-0.06 LSB and the INL is +0.58/-0.22 LSB. Project supported by the National Science and Technology Support Program of China (No. 2012BAI13B07) and the National Science and Technology Major Project of China (No. 2012ZX03001020-003).
The Braid-Based Bit Commitment Protocol
Institute of Scientific and Technical Information of China (English)
WANG Li-cheng; CAO Zhen-fu; CAO Feng; QIAN Hai-feng
2006-01-01
With recent advances in quantum computation, new threats have closed in upon classical public-key cryptosystems. In order to build more secure bit commitment schemes, this paper gives a survey of the emerging braid-based cryptography and then puts forward the first braid-based bit commitment protocol. The security proof shows that the proposed protocol is computationally binding and information-theoretically hiding. Furthermore, the proposed protocol is also invulnerable to currently known quantum attacks.
Neural network implementation using bit streams.
Patel, Nitish D; Nguang, Sing Kiong; Coghill, George G
2007-09-01
A new method for the parallel hardware implementation of artificial neural networks (ANNs) using digital techniques is presented. Signals are represented using uniformly weighted single-bit streams. Techniques for generating bit streams from analog or multibit inputs are also presented. This single-bit representation offers significant advantages over multibit representations since it mitigates the fan-in and fan-out issues typical of distributed systems. To process these bit streams using ANN concepts, functional elements which perform summing, scaling, and squashing have been implemented. These elements are modular and have been designed such that they can be easily interconnected. Two new architectures which act as monotonically increasing differentiable nonlinear squashing functions are also presented. Using these functional elements, a multilayer perceptron (MLP) can be easily constructed. Two examples successfully demonstrate the use of bit streams in the implementation of ANNs. Since every functional element is individually instantiated, the implementation is genuinely parallel. The results clearly show that this bit-stream technique is viable for the hardware implementation of a variety of distributed systems, and of ANNs in particular.
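As a minimal illustration of the single-bit-stream idea (stochastic computing, where a value in [0, 1] is encoded as the ones-density of a bit stream), multiplication reduces to a bitwise AND of two independent streams. The stream length and unipolar encoding below are assumptions for the sketch; the paper's own summing, scaling, and squashing elements are more elaborate.

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as an n-bit stream with ones-density p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_value(stream):
    """Decode: the represented value is the fraction of ones."""
    return sum(stream) / len(stream)

def stream_multiply(a, b):
    """Bitwise AND of two independent unipolar streams multiplies their values."""
    return [x & y for x, y in zip(a, b)]

rng = random.Random(0)
n = 100_000
a = to_stream(0.6, n, rng)
b = to_stream(0.5, n, rng)
est = stream_value(stream_multiply(a, b))  # close to 0.6 * 0.5 = 0.30
```

The accuracy improves with stream length (the estimate's standard deviation shrinks as 1/sqrt(n)), which is the usual precision/latency trade-off of bit-stream hardware.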
Bits of Internet traffic control
Vojnovic, Milan; Le Boudec, Jean Yves
2005-01-01
In this work, we consider four problems in the context of Internet traffic control. The first problem is to understand when and why a sender that implements an equation-based rate control would be TCP-friendly, or not—a sender is said to be TCP-friendly if, under the same operating conditions, its long-term average send rate does not exceed that of a TCP sender. It is an established axiom that some senders in the Internet would need to be TCP-friendly. An equation-based rate control sender pl...
Bits of Internet Traffic Control
Vojnovic, Milan
2003-01-01
In this work, we consider four problems in the context of Internet traffic control. The first problem is to understand when and why a sender that implements an equation-based rate control would be TCP-friendly, or not—a sender is said to be TCP-friendly if, under the same operating conditions, its long-term average send rate does not exceed that of a TCP sender. It is an established axiom that some senders in the Internet would need to be TCP-friendly. An equation-based rate control sender pl...
Error-resilient DNA computation
Energy Technology Data Exchange (ETDEWEB)
Karp, R.M.; Kenyon, C.; Waarts, O. [Univ. of California, Berkeley, CA (United States)
1996-12-31
The DNA model of computation, with test tubes of DNA molecules encoding bit sequences, is based on three primitives: Extract-A-Bit, which splits a test tube into two test tubes according to the value of a particular bit x; Merge-Two-Tubes; and Detect-Emptiness. With perfect operations, the satisfiability of any boolean formula can be tested in linear time. However, in reality the Extract operation is faulty; it misclassifies a certain proportion of the strands. We consider the following problem: given an algorithm based on perfect Extract, Merge and Detect operations, convert it to one that works correctly with high probability when the Extract operation is faulty. The fundamental problem in such a conversion is to construct a sequence of faulty Extracts and perfect Merges that simulates a highly reliable Extract operation. We first determine (up to a small constant factor) the minimum number of faulty Extract operations inherently required to simulate a highly reliable Extract operation. We then derive a general method for converting any algorithm based on error-free operations into an error-resilient one, and give optimal error-resilient algorithms for realizing simple n-variable boolean functions such as Conjunction, Disjunction and Parity.
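The central idea, building a reliable Extract from faulty ones using perfect Merges, can be illustrated with a toy simulation: re-extracting the "ones" tube several times and merging whatever falls out back into the zeros tube lets misclassified 0-strands escape, so the contamination of the ones tube decays roughly as eps^(rounds+1). This is only a schematic of one direction of the purification, not the paper's optimal construction.

```python
import random

def faulty_extract(tube, bit_index, eps, rng):
    """Faulty Extract-A-Bit: split `tube` by bit `bit_index`, but each
    strand lands in the wrong tube with probability eps."""
    zeros, ones = [], []
    for strand in tube:
        observed = strand[bit_index] ^ (rng.random() < eps)
        (ones if observed else zeros).append(strand)
    return zeros, ones

def boosted_extract(tube, bit_index, eps, rounds, rng):
    """Purify the 'ones' tube: re-extract it `rounds` times, merging the
    strands that fall out back into the zeros tube (Merge is perfect).
    A 0-strand survives in the ones tube only by being misclassified
    every time, so its count decays like eps**(rounds + 1)."""
    zeros, ones = faulty_extract(tube, bit_index, eps, rng)
    for _ in range(rounds):
        fell_out, ones = faulty_extract(ones, bit_index, eps, rng)
        zeros += fell_out  # perfect Merge
    return zeros, ones
```

Note the cost of this naive scheme: each round also leaks a fraction eps of the correct 1-strands into the zeros tube, which is why the paper's construction (and its lower bound on the number of faulty Extracts) is more careful.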
Directory of Open Access Journals (Sweden)
Ana Carolina Souza-Oliveira
Full Text Available Abstract Ventilator-associated pneumonia is the most prevalent nosocomial infection in intensive care units and is associated with high mortality rates (14–70%). Aim This study evaluated factors influencing mortality of patients with ventilator-associated pneumonia (VAP), including bacterial resistance, prescription errors, and de-escalation of antibiotic therapy. Methods This retrospective study included 120 cases of ventilator-associated pneumonia admitted to the adult intensive care unit of the Federal University of Uberlândia. The chi-square test was used to compare qualitative variables. Student's t-test was used for quantitative variables and multiple logistic regression analysis to identify independent predictors of mortality. Findings De-escalation of antibiotic therapy and resistant bacteria did not influence mortality. Mortality was 4 times and 3 times higher, respectively, in patients who received an inappropriate antibiotic loading dose and in patients whose antibiotic dose was not adjusted for renal function. Multiple logistic regression analysis revealed that incorrect adjustment for renal function was the only independent factor associated with increased mortality. Conclusion Prescription errors influenced mortality of patients with ventilator-associated pneumonia, underscoring the challenge of proper ventilator-associated pneumonia treatment, which requires continuous reevaluation to ensure that clinical response to therapy meets expectations.
Asymptotic Properties of One-Bit Distributed Detection with Ordered Transmissions
Braca, Paolo; Matta, Vincenzo
2011-01-01
Consider a sensor network made of remote nodes connected to a common fusion center. In a recent work Blum and Sadler [1] propose the idea of ordered transmissions (sensors with more informative samples deliver their messages first) and prove that optimal detection performance can be achieved using only a subset of the total messages. Taking this approach to one extreme, we show that just a single delivery allows making the detection errors as small as desired, for a sufficiently large network size: a one-bit detection scheme can be asymptotically consistent. The transmission ordering is based on the modulus of some local statistic (MO system). We derive analytical results proving the asymptotic consistency and, for the particular case that the local statistic is the log-likelihood (\ell-MO system), we also obtain a bound on the error convergence rate. All the theorems are proved under the general setup of a random number of sensors. Computer experiments corroborate the analysis and address typical examples of...
Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan
2015-01-01
Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high data rate wireless communication over frequency-selective fading channels. An MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM): the OFDM part reduces multipath fading and inter-symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems and overcoming the effects of MAI. In this paper a low-complexity iterative soft sensitive-bits algorithm (SBA) aided logarithmic maximum a posteriori (Log-MAP) turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low-complexity decoding, by mitigating the detrimental effects of MAI.
Semi-Blind Error Resilient SLM for PAPR Reduction in OFDM Using Spread Spectrum Codes.
Directory of Open Access Journals (Sweden)
Amr M Elhelw
Full Text Available High peak-to-average power ratio (PAPR) is one of the major problems of OFDM systems. Selected mapping (SLM) is a promising choice that can elegantly tackle this problem. Nevertheless, a side information (SI) index is required to be transmitted, which reduces the overall throughput. This paper proposes a semi-blind error-resilient SLM system that utilizes spread spectrum codes for embedding the SI index in the transmitted symbols. The codes are embedded in an innovative manner which does not increase the average energy per symbol. The use of such codes allows the correction of probable errors in the SI index detection. A new receiver, which does not require perfect channel state information (CSI) for the detection of the SI index and has relatively low computational complexity, is proposed. Simulation results show that the proposed system performs well both in terms of SI index detection error and bit error rate.
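The transmitter side of classic SLM, generating several randomly phase-rotated candidates of the same symbol vector and keeping the lowest-PAPR one, whose index is the side information, can be sketched as follows. A naive O(N²) inverse DFT and a QPSK phase alphabet are assumed for brevity; the paper's actual contribution, embedding that index with spread-spectrum codes, is not reproduced here.

```python
import cmath
import math
import random

def idft(X):
    """Naive inverse DFT (O(N^2)), sufficient for a demonstration."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def papr(x):
    """Peak-to-average power ratio of a time-domain block."""
    powers = [abs(v) ** 2 for v in x]
    return max(powers) / (sum(powers) / len(powers))

def slm(symbols, num_candidates, rng):
    """Keep the phase-rotated candidate with the lowest PAPR; the chosen
    index is the side information the receiver must recover."""
    best_papr, best_index = None, None
    for idx in range(num_candidates):
        phases = [rng.choice([1, -1, 1j, -1j]) for _ in symbols]
        candidate = idft([s * p for s, p in zip(symbols, phases)])
        p = papr(candidate)
        if best_papr is None or p < best_papr:
            best_papr, best_index = p, idx
    return best_papr, best_index
```

With all-equal symbols (the worst case, where the IDFT is an impulse and PAPR equals the block length), even a handful of random candidates brings the PAPR down substantially.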
Influence of pseudorandom bit format on the direct modulation performance of semiconductor lasers
Indian Academy of Sciences (India)
Moustafa Ahmed; Safwat W Z Mahmoud; Alaa A Mohmoud
2012-12-01
This paper investigates the direct gigabit modulation characteristics of semiconductor lasers using the return-to-zero (RZ) and non-return-to-zero (NRZ) formats. The modulation characteristics include the frequency chirp, eye diagram, and turn-on jitter (TOJ). The differences in the relative contributions of the intrinsic noise of the laser and the pseudorandom bit-pattern effect to the modulation characteristics are presented. We introduce an approximate estimate of the transient properties that control the digital modulation performance, namely, the modulation bit rate and the minimum (setting) bit rate required to yield a modulated laser signal free from the bit-pattern effect. The results showed that the frequency chirp increases with the increase of the modulation current under both RZ and NRZ formats, and decreases remarkably with the increase of the bias current. The chirp is higher under the RZ modulation format than under the NRZ format. When the modulation bit rate is higher than the setting bit rate of the relaxation oscillation, the laser exhibits enhanced TOJ and the eye diagram is partially closed. TOJ decreases with the increase of the bias and/or modulation current for both formats of modulation.
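The two bit formats compared in this abstract differ only in pulse shape: NRZ holds the signal level for the whole bit slot, while RZ drives the '1' level for only part of the slot and returns to zero, which widens the signal spectrum and is consistent with the higher chirp the abstract reports for RZ. A minimal discrete-time illustration (sample counts and the half-slot duty cycle are arbitrary choices):

```python
def nrz(bits, samples_per_bit):
    """Non-return-to-zero: hold each bit's level for the full slot."""
    return [b for b in bits for _ in range(samples_per_bit)]

def rz(bits, samples_per_bit):
    """Return-to-zero: a '1' occupies only the first half of its slot."""
    half = samples_per_bit // 2
    return [b if i < half else 0 for b in bits for i in range(samples_per_bit)]
```

Feeding such waveforms into a laser rate-equation model (not shown) is the usual way to reproduce the chirp and eye-diagram comparisons the paper makes.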
Improved Design of Unequal Error Protection LDPC Codes
Directory of Open Access Journals (Sweden)
Sandberg Sara
2010-01-01
Full Text Available We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.
Vrieze, Scott I; Grove, William M
2008-06-01
The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. Validity (area under receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected. The present analysis yielded an asymptotically unbiased estimator AUC = .58. The present article also focused on the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor, when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for AUC values higher than have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis. (User report 2004-02.). Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone. The authors discuss the legal implications of their findings for procedural and substantive due process in
Institute of Scientific and Technical Information of China (English)
景鑫; 庄奕琪; 汤华莲; 戴力
2013-01-01
A 12-bit 40 MS/s calibration-free pipelined analog-to-digital converter (ADC) for baseband signal processing in TD-LTE (time-division long-term evolution) systems is described. An improved gate-bootstrapped switch is designed as the input sampling switch in the front-end sample-and-hold (S/H) circuit, effectively reducing the nonlinear distortion and improving the switch linearity. The ADC adopts a full 2.5-bit/stage architecture, with the pipelined stages scaled in current and area to meet the area and power requirements. The chip was fabricated and verified in a 130 nm CMOS (complementary metal-oxide-semiconductor) process with a 1.2 V supply voltage. Measurements of the full ADC show maximum INL (integral nonlinearity) and DNL (differential nonlinearity) errors of 1.48 LSB (least significant bit) and 0.48 LSB, respectively. Dynamic tests show that, at a 40 MS/s sampling rate with a -1 dBFS (dB full scale), 4.3 MHz sinusoidal input, the ADC achieves a signal-to-noise-and-distortion ratio (SNDR) of 63.55 dB and a spurious-free dynamic range (SFDR) of 76.37 dB. The entire ADC consumes 48 mW when operating at the full 40 MS/s sampling rate, and the chip area (including pads) is 3.1 mm × 1.4 mm.
Low complexity bit loading algorithm for OFDM system
Institute of Scientific and Technical Information of China (English)
Yang Yu; Sha Xuejun; Zhang Zhonghua
2006-01-01
A new approach to bit loading for orthogonal frequency division multiplexing (OFDM) systems is proposed. The bit-loading algorithm assigns bits to different subchannels in order to minimize the transmit energy. In the algorithm, most bits are first allocated to each subchannel according to the channel condition, the Shannon formula, and the QoS requirements of the user; the residual bits are then allocated to the subchannels one bit at a time. In this way the algorithm is efficient while remaining computationally simple. This is the first algorithm to perform the initial allocation at a scale following the Shannon formula, and it has O(4N) complexity.
Institute of Scientific and Technical Information of China (English)
孙宇; 李纯莲; 钟经华
2016-01-01
Braille error tolerance includes two aspects: the scheme error tolerance rate, corresponding to the Braille scheme, and the spelling error tolerance rate, corresponding to readers. In order to reasonably evaluate the spelling efficiency of the Chinese Braille scheme and further improve it, this paper presents the concept of the scheme error tolerance rate and analyzes it statistically. The results show that the error tolerance rate is objectively necessary and controllable, indicating that a Braille scheme with a greater error tolerance rate is easier to use and popularize. Finally, an optimization function for the scheme error tolerance rate is given, which is helpful for improving the current Braille scheme. The paper also discusses the influence of readers' psychological factors on Braille error tolerance when reading, and reveals the relations of mutual influence, mutual promotion, and mutual compensation between the scheme error tolerance rate of a Braille scheme and the spelling error tolerance rate of Braille readers.
Distortion-rate models for entropy-coded lattice vector quantization.
Raffy, P; Antonini, M; Barlaud, M
2000-01-01
The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high rate assumptions. Simulation results prove the precision of our models.
An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code
Directory of Open Access Journals (Sweden)
Hendy Briantoro
2016-04-01
Full Text Available This paper presents error minimization in an OFDM system. Conventional systems usually use channel coding such as a BCH code or a convolutional code, but the performance of these codes is unsatisfactory in an OFDM implementation. The bit error rate of the OFDM system without channel coding is 5.77%, and a convolutional code with code rate 1/2 reduces the error rate only to 3.85%. We therefore propose an OFDM system with a modified convolutional code. In this implementation, we used software-defined radio (SDR), namely a Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the modified convolutional code is able to recover all received characters, reducing the bit error rate to 0%. The performance gain of the modified convolutional code is about 1 dB at a BER of 10^-4 over the BCH code and the convolutional code, so the modified convolutional code outperforms both. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP
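The paper's modified convolutional code is not specified in this abstract, but the kind of BER improvement it quantifies can be reproduced in miniature with the simplest possible channel code, a rate-1/3 repetition code with majority decoding over a binary symmetric channel. All parameters below (flip probability, block length) are illustrative assumptions, not values from the paper.

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def encode_rep3(bits):
    """Rate-1/3 repetition code: send every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_rep3(coded):
    """Majority decoding over each group of three received bits."""
    return [1 if coded[i] + coded[i + 1] + coded[i + 2] >= 2 else 0
            for i in range(0, len(coded), 3)]

rng = random.Random(0)
data = [rng.randint(0, 1) for _ in range(20000)]
p = 0.05
ber_uncoded = sum(a != b for a, b in zip(data, bsc(data, p, rng))) / len(data)
ber_coded = sum(a != b for a, b in
                zip(data, decode_rep3(bsc(encode_rep3(data), p, rng)))) / len(data)
# ber_coded is roughly 3*p**2, well below the uncoded rate of roughly p
```

A decoded bit is wrong only when at least two of its three copies flip, so the coded BER drops from about p to about 3p²(1-p) + p³, the same qualitative effect the abstract measures for its convolutional codes.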
Das, Bikramaditya; 10.5121/jgraphhoc.2010.2104
2010-01-01
For high data rate ultra wideband communication systems, a performance comparison of Rake, MMSE and Rake-MMSE receivers is attempted in this paper. Further, a detailed study on Rake-MMSE time domain equalizers is carried out taking into account all the important parameters such as the effect of the number of Rake fingers and equalizer taps on the error rate performance. This receiver combats inter-symbol interference by taking advantage of both the Rake and equalizer structures. The bit error rate performances are investigated using MATLAB simulation on IEEE 802.15.3a defined UWB channel models. Simulation results show that the bit error rate probability of the Rake-MMSE receiver is much better than that of the Rake receiver and the MMSE equalizer. Study of non-line-of-sight indoor channel models illustrates that the bit error rate performance of Rake-MMSE (both LE and DFE) improves for the CM3 model with smaller spread compared to the CM4 channel model. It is indicated that for an MMSE equalizer operating at low to medium SNR values, the number o...
A General Rate K/N Convolutional Decoder Based on Neural Networks with Stopping Criterion
Directory of Open Access Journals (Sweden)
Johnny W. H. Kao
2009-01-01
Full Text Available A novel algorithm for decoding a general rate K/N convolutional code based on a recurrent neural network (RNN) is described and analysed. The algorithm is introduced by outlining the mathematical models of the encoder and decoder. A number of strategies for optimising the iterative decoding process are proposed, and a simulator was designed in order to compare the Bit Error Rate (BER) performance of the RNN decoder with that of a conventional decoder based on the Viterbi Algorithm (VA). The simulation results show that this novel algorithm can achieve the same bit error rate with a lower decoding complexity. Most importantly, this algorithm allows parallel signal processing, which increases the decoding speed and accommodates higher data rate transmission. These characteristics are inherited from the neural network structure of the decoder and the iterative nature of the algorithm, and they allow it to outperform the conventional VA.
An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution
Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray
2013-09-01
In order to improve data hiding in multimedia data formats such as image and audio, and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are both used. All the displayed results prove to be time-efficient and effective. The algorithm is also tested for various numbers of bits; for those values, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover-image and that the steganography process does not reveal the presence of any hidden message, thus satisfying the criterion of imperceptibility.
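A minimal version of the LSB image-embedding half of such a scheme, together with the MSE/PSNR quality metrics the paper plots, might look like the sketch below. Grayscale pixels are modeled as a flat list of 8-bit integers; the paper's secret-key noise and its DCT/DWT audio path are omitted.

```python
import math

def embed_lsb(pixels, message_bits):
    """Replace the least significant bit of the first len(message_bits)
    pixels with the message bits."""
    assert len(message_bits) <= len(pixels)
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bits):
    """Read the message back from the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

def mse(a, b):
    """Mean squared error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * math.log10(peak * peak / m)
```

Since each embedded bit changes a pixel value by at most 1, the MSE stays tiny and the PSNR stays high, which is the quantitative form of the imperceptibility claim.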
Robust Face Recognition using Voting by Bit-plane Images based on Sparse Representation
Directory of Open Access Journals (Sweden)
Dongmei Wei
2015-08-01
Full Text Available Plurality voting is widely employed as a combination strategy in pattern recognition. Sparse representation based classification, a recently proposed technology, codes the query image as a sparse linear combination of the entire set of training images and classifies the query sample class by class by exploiting the class representation error. In this paper, an improved face recognition approach using sparse representation and plurality voting based on binary bit-plane images is proposed. After equalization, gray images are decomposed into eight bit-plane images, and sparse representation based classification is applied separately to the five bit-plane images that carry the most discriminative information. Finally, the identity of the query image is determined by voting over the five identities thus obtained. Experimental results show that the proposed approach is preferable both in recognition accuracy and in recognition speed.
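The preprocessing and combination steps described in this abstract reduce to bit-plane decomposition and plurality voting; both are sketched below. The sparse-representation classifier itself is omitted, and the helper names are illustrative (the paper votes over the five most significant planes).

```python
def bit_planes(pixels):
    """Decompose 8-bit gray pixels into eight binary bit-plane images;
    planes[0] is the LSB plane and planes[7] the MSB plane."""
    return [[(p >> k) & 1 for p in pixels] for k in range(8)]

def plurality_vote(labels):
    """Return the identity predicted by the most per-plane classifiers."""
    return max(set(labels), key=labels.count)
```

Higher planes carry most of the image structure while lower planes are close to noise, which is why only the most significant planes participate in the vote.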
Encoding M classical bits in the arrival time of dense-coded photons
Hegazy, Salem F; Obayya, Salah S A
2016-01-01
We present a scheme to encode M extra classical bits onto a dense-coded pair of photons. By tuning the delay of an entangled pair of photons to one of 2^M time-bins and then applying one of the quantum dense coding protocols, a receiver equipped with a synchronized reference clock is able to decode M bits (via classical time-bin encoding) + 2 bits (via quantum dense coding). This protocol, though simple, does not dispense with several special features of the programmable delay apparatus used to maintain the coherence of the two-photon state. While this type of time-domain encoding may be thought to be ideally of boundless photonic capacity (by increasing the number of available time-bins), errors due to environmental noise and imperfect devices and channels grow with the number of time-bins.
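The classical bit budget of the scheme, M bits selected by the time-bin index plus 2 bits selected by the Bell state, can be book-kept as below. This is pure classical bookkeeping of the M + 2 bits and assumes nothing about the photonic implementation; the function names and bit conventions are illustrative.

```python
def encode(time_bits, dense_bits):
    """Map M time-bin bits to one of 2**M time-bins and the 2
    dense-coding bits to one of the 4 Bell states (labelled 0..3)."""
    assert len(dense_bits) == 2
    time_bin = int(''.join(map(str, time_bits)), 2)
    bell_state = 2 * dense_bits[0] + dense_bits[1]
    return time_bin, bell_state

def decode(time_bin, bell_state, m):
    """Recover the M + 2 classical bits from the measured labels."""
    time_bits = [(time_bin >> k) & 1 for k in reversed(range(m))]
    return time_bits + [bell_state >> 1, bell_state & 1]
```

Doubling the number of available time-bins adds exactly one classical bit per photon pair, which makes concrete both the capacity claim and the abstract's caveat that errors grow with the number of time-bins.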
Field application of a fully rotating point-the-bit rotary steerable system (SPE 67716)
Energy Technology Data Exchange (ETDEWEB)
Schaaf, S.; Pafitis, D. [Schlumberger Canada Ltd., Calgary, AB (Canada)
2001-07-01
Petroleum offshore operations can encounter downhole sliding problems related to maintaining drilling bit orientation, low effective ROP, poor hole cleaning and the inability of the bit to slide. Other problems include differential sticking, buckling, lock-up, high tortuosity, and formation-sensitive build-up rates. Several illustrations were presented indicating ways to correct problems with the bit system through better steerability and orientation of the motors. Specifications and field test examples were presented. One of the test examples involved a land operation in California and another involved a vertical well in a strongly dipping formation. The success of the field tests proved the steering concept of an internally offset drive-shaft. The point-the-bit rotary steerable system contains no stationary components in contact with the formation. In addition, the tool can be drilled at a tangent to the formation. The tests demonstrated the reliability of the system design. tabs., figs.
RELAY ASSISTED TRANSMISSSION WITH BIT-INTERLEAVED CODED MODULATION
Institute of Scientific and Technical Information of China (English)
Meng Qingmin; You Xiaohu; John Boyer
2006-01-01
We investigate an adaptive cooperative protocol in a Two-Hop-Relay (THR) wireless system that combines the following: (1) adaptive relaying based on repetition coding; (2) single or two transmit antennas and one receive antenna configurations for all nodes, each using a high order constellation; (3) Bit-Interleaved Coded Modulation (BICM). We focus on simple decoded relaying (i.e., no error correction at the relay node) and simple signal quality thresholds for relaying. The impact of these two simple thresholds on the system performance is then studied. Our results suggest that, compared with the traditional direct transmission scheme, the proposed scheme can increase average throughput in the high spectral efficiency region with low implementation cost at the relay.
1/N perturbations in superstring bit models
Thorn, Charles B.
2016-03-01
We develop the 1/N expansion for stable string bit models, focusing on a model with bit creation operators carrying only transverse spinor indices a = 1, …, s. At leading order (N = ∞), this model produces a (discretized) light cone string with a "transverse space" of s Grassmann worldsheet fields. Higher orders in the 1/N expansion are shown to be determined by the overlap of a single large closed chain (discretized string) with two smaller closed chains. In the models studied here, the overlap is not accompanied with operator insertions at the break/join point. Then, the requirement that the discretized overlap has a smooth continuum limit leads to the critical Grassmann "dimension" of s = 24. This "protostring," a Grassmann analog of the bosonic string, is unusual, because it has no large transverse dimensions. It is a string moving in one space dimension, and there are neither tachyons nor massless particles. The protostring, derived from our pure spinor string bit model, has 24 Grassmann dimensions, 16 of which could be bosonized to form 8 compactified bosonic dimensions, leaving 8 Grassmann dimensions—the worldsheet content of the superstring. If the transverse space of the protostring could be "decompactified," string bit models might provide an appealing and solid foundation for superstring theory.
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported; "sampling error" therefore omits the contributions of the random and systematic errors introduced by the satellite remote sensing system, which are common to both estimates. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates caused by changes in rain statistics due (1) to evolution of the official algorithms used to process the data, and (2) to differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
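The sampling-error estimation the abstract describes can be illustrated with a small Monte Carlo sketch (an illustration of the idea only, not the authors' method; the function name and the synthetic rain series are invented for the example): repeatedly subsample a "continuously observed" rain series with a limited number of visits and measure the spread of the resulting monthly means.

```python
import random
import statistics

def sampling_error(series, visits, trials=2000, seed=0):
    """Std. dev. of the error in a monthly mean estimated from a limited
    number of satellite visits, relative to the continuously observed mean."""
    rng = random.Random(seed)
    true_mean = statistics.fmean(series)
    errors = []
    for _ in range(trials):
        visited = rng.sample(series, visits)   # the satellite's few looks
        errors.append(statistics.fmean(visited) - true_mean)
    return statistics.pstdev(errors)

# Synthetic hourly rain-rate series for one month: mostly dry, some storms.
rain = [0.0] * 600 + [10.0] * 120
```

As expected for random sampling, the error shrinks roughly as one over the square root of the number of visits.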
Directory of Open Access Journals (Sweden)
Bikramaditya Das
2010-03-01
Full Text Available For high data rate ultra wideband communication systems, a performance comparison of Rake, MMSE and Rake-MMSE receivers is attempted in this paper. Further, a detailed study on Rake-MMSE time domain equalizers is carried out taking into account all the important parameters such as the effect of the number of Rake fingers and equalizer taps on the error rate performance. This receiver combats inter-symbol interference by taking advantage of both the Rake and equalizer structure. The bit error rate performances are investigated using MATLAB simulation on IEEE 802.15.3a defined UWB channel models. Simulation results show that the bit error rate probability of the Rake-MMSE receiver is much better than that of the Rake receiver and the MMSE equalizer. Study on non-line-of-sight indoor channel models illustrates that the bit error rate performance of Rake-MMSE (both LE and DFE) improves for the CM3 model with smaller spread compared to the CM4 channel model. It is indicated that for a MMSE equalizer operating at low to medium SNR values, the number of Rake fingers is the dominant factor to improve system performance, while at high SNR values the number of equalizer taps plays a more significant role in reducing the error rate.
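The kind of BER-versus-SNR Monte Carlo the abstract runs in MATLAB can be sketched for the baseline uncoded BPSK/AWGN case only (no Rake fingers or equalizer taps; function names are ours):

```python
import math
import random

def ber_bpsk_awgn(ebno_db, nbits=200_000, seed=1):
    """Monte Carlo BER of uncoded BPSK over AWGN at a given Eb/N0 (dB)."""
    rng = random.Random(seed)
    ebno = 10 ** (ebno_db / 10)
    sigma = math.sqrt(1 / (2 * ebno))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(nbits):
        tx = rng.choice((-1.0, 1.0))
        rx = tx + rng.gauss(0.0, sigma)
        errors += (rx >= 0) != (tx > 0)
    return errors / nbits

# Closed form to check against: Pb = Q(sqrt(2 Eb/N0)) = erfc(sqrt(Eb/N0)) / 2
def ber_theory(ebno_db):
    return 0.5 * math.erfc(math.sqrt(10 ** (ebno_db / 10)))
```

Multipath fading, Rake combining, and MMSE equalization would replace the single `tx + noise` line with a channel convolution and a receiver filter; the error-counting loop stays the same.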
Bit-Interleaved Coded Multiple Beamforming with Constellation Precoding
Park, Hong Ju
2009-01-01
In this paper, we present the diversity order analysis of bit-interleaved coded multiple beamforming (BICMB) combined with the constellation precoding scheme. Multiple beamforming is realized by singular value decomposition of the channel matrix which is assumed to be perfectly known to the transmitter as well as the receiver. Previously, BICMB is known to have a diversity order bound related with the product of the code rate and the number of parallel subchannels, losing the full diversity order in some cases. In this paper, we show that BICMB combined with the constellation precoder and maximum likelihood detection achieves the full diversity order. We also provide simulation results that match the analysis.
Fast converging minimum probability of error neural network receivers for DS-CDMA communications.
Matyjas, John D; Psaromiligkos, Ioannis N; Batalama, Stella N; Medley, Michael J
2004-03-01
We consider a multilayer perceptron neural network (NN) receiver architecture for the recovery of the information bits of a direct-sequence code-division-multiple-access (DS-CDMA) user. We develop a fast converging adaptive training algorithm that minimizes the bit-error rate (BER) at the output of the receiver. The adaptive algorithm has three key features: i) it incorporates the BER, i.e., the ultimate performance evaluation measure, directly into the learning process, ii) it utilizes constraints that are derived from the properties of the optimum single-user decision boundary for additive white Gaussian noise (AWGN) multiple-access channels, and iii) it embeds importance sampling (IS) principles directly into the receiver optimization process. Simulation studies illustrate the BER performance of the proposed scheme.
Variable bit rate video traffic modeling by multiplicative multifractal model
Institute of Scientific and Technical Information of China (English)
Huang Xiaodong; Zhou Yuanhua; Zhang Rongfu
2006-01-01
Multiplicative multifractal processes can model video traffic well. The multiplier distributions in the multiplicative multifractal model for video traffic are investigated, and it is found that the Gaussian distribution is not suitable for describing the multipliers on small time scales. A new statistical distribution, the symmetric Pareto distribution, is introduced and applied instead of the Gaussian for the multipliers on those scales. Based on that, the algorithm is updated so that the symmetric Pareto and Gaussian distributions are used to model video traffic on different time scales. The simulation results demonstrate that the updated algorithm models video traffic more accurately.
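A minimal sketch of the binary multiplicative cascade underlying such models, with uniform multipliers as a stand-in (the paper's point is precisely that a symmetric Pareto law fits the small-scale multipliers better):

```python
import random

def multiplicative_cascade(levels, total=1.0, a=0.7, seed=0):
    """Split a traffic mass recursively: at each level every interval's mass
    is divided into fractions w and 1 - w, with w drawn at random."""
    rng = random.Random(seed)
    masses = [total]
    for _ in range(levels):
        nxt = []
        for m in masses:
            w = rng.uniform(1 - a, a)   # multiplier, symmetric about 1/2
            nxt.extend((m * w, m * (1 - w)))
        masses = nxt
    return masses   # 2**levels bin loads; total mass is conserved
```

Swapping the `rng.uniform` draw for a heavier-tailed multiplier law is what changes the burstiness of the generated trace.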
Biometric Quantization through Detection Rate Optimized Bit Allocation
Chen, C.; Veldhuis, R.N.J.; Kevenaar, T.A.M.; Akkermans, A.H.M.
2009-01-01
Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has been focusing on the design of optimal quantization and coding for
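The truncated abstract stops at the quantization-design step. As a rough sketch of the basic building block, here is equal-probability quantization of one real-valued feature into bits; the detection-rate-optimized bit allocation of the paper is more involved, and the function below is our own illustration:

```python
def quantize_feature(background, value, bits):
    """Map a real-valued feature to a `bits`-bit string using
    equal-probability intervals estimated from background samples."""
    n = 2 ** bits
    ordered = sorted(background)
    # interval boundaries at the empirical quantiles
    bounds = [ordered[int(len(ordered) * k / n)] for k in range(1, n)]
    index = sum(value > b for b in bounds)
    return format(index, f"0{bits}b")
```

A template protection system would concatenate such strings over many features, allocating more bits to the more reliable ones.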
Fast optical signal processing in high bit rate OTDM systems
DEFF Research Database (Denmark)
Poulsen, Henrik Nørskov; Jepsen, Kim Stokholm; Clausen, Anders;
1998-01-01
As all-optical signal processing is maturing, optical time division multiplexing (OTDM) has also gained interest for simple networking in high capacity backbone networks. As an example of a network scenario we show an OTDM bus interconnecting another OTDM bus, a single high capacity user...
High bit rate BPSK signals in shallow water environments
Robert, M.K.; Walree, P.A. van
2003-01-01
Lately, acoustic data transfer has become an important topic in underwater environments. Several acoustic communication signals e.g. spread spectrum or frequency shift keying signals have been extensively developed. However, in challenging environments, it is still difficult to obtain robust acousti
Fixed-Length Error Resilient Code and Its Application in Video Coding
Institute of Scientific and Technical Information of China (English)
FANChen; YANGMing; CUIHuijuan; TANGKun
2003-01-01
Since popular entropy coding techniques such as Variable-length code (VLC) tend to cause severe error propagation in noisy environments, an error resilient entropy coding technique named Fixed-length error resilient code (FLERC) is proposed to mitigate the problem. It is found that even for a non-stationary source, the probability of error propagation can be minimized by introducing intervals into the codeword space of the fixed-length codes. FLERC is particularly suitable for the entropy coding of video signals in error-prone environments, where a little distortion is tolerable but severe error propagation would lead to fatal consequences. An iterative construction algorithm for FLERC is presented in this paper. In addition, FLERC is adopted instead of VLC as the entropy coder of the DCT coefficients in the H.263++ Data partitioning slice (DPS) mode, and tested on noisy channels. The simulation results show that this scheme outperforms H.263++ combined with FEC when the channel noise is severe, since the error propagation is effectively suppressed by FLERC. Moreover, it is observed that the reconstructed video quality degrades gracefully as the bit error rate increases.
Single-event upset (SEU) in a DRAM with on-chip error correction
Zoutendyk, J. A.; Schwartz, H. R.; Watson, R. K.; Hasnain, Z.; Nevile, L. R.
1987-01-01
Results are given of SEU measurements on 256K dynamic RAMs with on-chip error correction. They are claimed to be the first ever reported. A (12/8) Hamming error-correcting code was incorporated in the layout. Physical separation of the bits in each code word was used to guard against multiple bits being disrupted in any given word. Significant reduction in observed errors is reported.
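A (12,8) Hamming code like the one reported can be sketched as follows. This is the textbook construction with parity bits at positions 1, 2, 4 and 8 (1-indexed); the chip's actual bit layout is not specified in the abstract:

```python
def hamming12_encode(byte):
    """Encode 8 data bits into a 12-bit single-error-correcting Hamming
    codeword; parity bits sit at positions 1, 2, 4, 8 (1-indexed)."""
    data_pos = (3, 5, 6, 7, 9, 10, 11, 12)
    code = [0] * 13                 # index 0 unused
    for i, p in enumerate(data_pos):
        code[p] = (byte >> i) & 1
    for p in (1, 2, 4, 8):
        # parity over every position whose index has bit p set
        for i in range(1, 13):
            if i != p and (i & p):
                code[p] ^= code[i]
    return code[1:]

def hamming12_decode(codeword):
    """Correct any single bit flip and return the data byte."""
    code = [0] + list(codeword)
    syndrome = 0
    for i in range(1, 13):
        if code[i]:
            syndrome ^= i
    if syndrome:                    # non-zero syndrome names the flipped bit
        code[syndrome] ^= 1
    byte = 0
    for i, p in enumerate((3, 5, 6, 7, 9, 10, 11, 12)):
        byte |= code[p] << i
    return byte
```

Any single upset in the 12 stored bits is corrected, which is why physically separating the bits of each code word (so one particle strike hits at most one bit per word) makes the scheme effective.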
High Data Rate Quantum Cryptography
Kwiat, Paul; Christensen, Bradley; McCusker, Kevin; Kumor, Daniel; Gauthier, Daniel
2015-05-01
While quantum key distribution (QKD) systems are now commercially available, the data rate is a limiting factor for some desired applications (e.g., secure video transmission). Most QKD systems receive at most a single random bit per detection event, causing the data rate to be limited by the saturation of the single-photon detectors. Recent experiments have begun to explore using larger degrees of freedom, e.g., temporal or spatial qubits, to optimize the data rate. Here, we continue this exploration using entanglement in multiple degrees of freedom. That is, we use simultaneous temporal and polarization entanglement to reach up to 8.3 bits of randomness per coincident detection. Due to current technology, we are unable to fully secure the temporal degree of freedom against all possible future attacks; however, by assuming a technologically-limited eavesdropper, we are able to obtain a 23.4 MB/s secure key rate across an optical table, after error reconciliation and privacy amplification. In this talk, we will describe our high-rate QKD experiment, with a short discussion on our work towards extending this system to ship-to-ship and ship-to-shore communication, aiming to secure the temporal degree of freedom and to implement a 30-km free-space link over a marine environment.
A 16-bit cascaded sigma-delta pipeline A/D converter
Institute of Scientific and Technical Information of China (English)
Li Liang; Li Ruzhang; Yu Zhou; Zhang Jiabin; Zhang Jun'an
2009-01-01
A low-noise cascaded multi-bit sigma-delta pipeline analog-to-digital converter (ADC) with a low oversampling rate is presented. The architecture is composed of a 2nd-order 5-bit sigma-delta modulator and a cascaded 4-stage 12-bit pipelined ADC, and operates at a low 8X oversampling rate. The static and dynamic performance of the whole ADC can be improved by using a dynamic element matching technique. The ADC operates at a 4 MHz clock rate and dissipates 300 mW at a 5 V/3 V analog/digital power supply. It is developed in a 0.35 μm CMOS process and achieves an SNR of 82 dB.
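The sigma-delta principle behind the modulator stage can be illustrated with a first-order, 1-bit sketch, far simpler than the 2nd-order 5-bit design described (names are ours): the quantizer sits in an integrator feedback loop, so the bitstream average tracks the input while the quantization noise is pushed to high frequencies.

```python
def first_order_sd(samples):
    """First-order sigma-delta modulator: a 1-bit quantizer inside an
    integrator feedback loop; the bitstream average tracks the input."""
    integ, q, out = 0.0, 0.0, []
    for x in samples:
        integ += x - q                    # integrate the quantization error
        q = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer
        out.append(q)
    return out
```

Decimating (low-pass filtering) the ±1 bitstream recovers a high-resolution estimate of the input, which is the job of the later stages in a cascaded converter.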
Efficient bit sifting scheme of post-processing in quantum key distribution
Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong
2015-10-01
Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, of which the core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme is approaching the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means the proposed scheme can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement on the net secure key rate. Meanwhile, some recommendations on the application of the proposed scheme to some representative practical QKD systems are also provided.
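The abstract does not spell out the lossless source coding algorithm, but the idea of compressing the sparse detected/undetected indicator sequence can be sketched with Rice coding of the gaps between detections (our illustrative choice; Rice/Golomb codes approach the entropy limit for geometrically distributed gaps):

```python
def rice_encode(gaps, k):
    """Rice code (k >= 1): each gap g -> unary(g >> k) + k low bits of g."""
    out = []
    for g in gaps:
        q, r = g >> k, g & ((1 << k) - 1)
        out.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(out)

def rice_decode(bits, k):
    """Inverse of rice_encode for the same parameter k."""
    gaps, i = [], 0
    while i < len(bits):
        q = 0
        while bits[i] == "1":        # unary quotient
            q += 1
            i += 1
        i += 1                        # skip the terminating 0
        r = int(bits[i:i + k], 2)     # k-bit remainder
        i += k
        gaps.append((q << k) | r)
    return gaps
```

Choosing k near log2 of the mean gap keeps the code length close to the gap entropy, which is the sense in which such schemes cut the classical-channel traffic of sifting.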
Autonomously stabilized entanglement between two superconducting quantum bits.
Shankar, S; Hatridge, M; Leghtas, Z; Sliwa, K M; Narla, A; Vool, U; Girvin, S M; Frunzio, L; Mirrahimi, M; Devoret, M H
2013-12-19
Quantum error correction codes are designed to protect an arbitrary state of a multi-qubit register from decoherence-induced errors, but their implementation is an outstanding challenge in the development of large-scale quantum computers. The first step is to stabilize a non-equilibrium state of a simple quantum system, such as a quantum bit (qubit) or a cavity mode, in the presence of decoherence. This has recently been accomplished using measurement-based feedback schemes. The next step is to prepare and stabilize a state of a composite system. Here we demonstrate the stabilization of an entangled Bell state of a quantum register of two superconducting qubits for an arbitrary time. Our result is achieved using an autonomous feedback scheme that combines continuous drives along with a specifically engineered coupling between the two-qubit register and a dissipative reservoir. Similar autonomous feedback techniques have been used for qubit reset, single-qubit state stabilization, and the creation and stabilization of states of multipartite quantum systems. Unlike conventional, measurement-based schemes, the autonomous approach uses engineered dissipation to counteract decoherence, obviating the need for a complicated external feedback loop to correct errors. Instead, the feedback loop is built into the Hamiltonian such that the steady state of the system in the presence of drives and dissipation is a Bell state, an essential building block for quantum information processing. Such autonomous schemes, which are broadly applicable to a variety of physical systems, as demonstrated by the accompanying paper on trapped ion qubits, will be an essential tool for the implementation of quantum error correction.
Quantum error-correcting codes need not completely reveal the error syndrome
Shor, P W; Shor, Peter W; Smolin, John A
1996-01-01
Quantum error-correcting codes so far proposed have not been able to work in the presence of noise levels which introduce greater than one bit of entropy per qubit sent through the quantum channel. This has been because all such codes either find the complete error syndrome of the noise or trivially map onto such codes. We describe a code which does not find complete information on the noise and can be used for reliable transmission of quantum information through channels which introduce more than one bit of entropy per transmitted bit. In the case of the depolarizing "Werner" channel, our code can be used in a channel of fidelity 0.8096, while the best existing code worked only down to 0.8107.
Forward Error Correcting Codes for 100 Gbit/s Optical Communication Systems
DEFF Research Database (Denmark)
Li, Bomin
This PhD thesis addresses the design and application of forward error correction (FEC) in high speed optical communications at the speed of 100 Gb/s and beyond. With the ever-growing internet traffic, FEC has been considered a strong and cost-effective way to improve the quality of transmission......-complexity low-power-consumption FEC hardware implementation plays an important role in the next generation energy efficient networks. Thirdly, joint research is required for FEC integrated applications as the error distribution in channels relies on many factors such as non-linearity in long distance optical...... fiber links, cross-talks in wavelength division multiplexing (WDM) setups and so on. FEC with a product code structure has been investigated theoretically and experimentally. The iterative decoding method applied to FEC codes in a product code structure can effectively reduce the bit error rate (BER...
On the BER and capacity analysis of MIMO MRC systems with channel estimation error
Yang, Liang
2011-10-01
In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of a multiple-input multiple-output (MIMO) transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) system over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. The numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.
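A Monte Carlo sketch of the BER degradation from channel estimation error in a 1×L receive-MRC link (Rayleigh fading, BPSK): the Gaussian estimation-error model and all names are our assumptions, not the paper's analytical derivation.

```python
import math
import random

def mrc_ber(L, ebno_db, est_err_var, nbits=20_000, seed=2):
    """BER of BPSK with L-branch receive MRC over Rayleigh fading,
    combining with a noisy channel estimate h_hat = h + e."""
    rng = random.Random(seed)
    ebno = 10 ** (ebno_db / 10)
    sigma = math.sqrt(1 / (2 * ebno))      # per-dimension noise std
    e_std = math.sqrt(est_err_var / 2)     # per-dimension estimate error std
    errors = 0
    for _ in range(nbits):
        b = rng.choice((-1.0, 1.0))
        stat = 0.0
        for _ in range(L):
            h = complex(rng.gauss(0, math.sqrt(0.5)),
                        rng.gauss(0, math.sqrt(0.5)))   # unit-power Rayleigh
            n = complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
            e = complex(rng.gauss(0, e_std), rng.gauss(0, e_std))
            r = h * b + n
            stat += ((h + e).conjugate() * r).real      # combine with h_hat
        errors += (stat >= 0) != (b > 0)
    return errors / nbits
```

With `est_err_var = 0` the combiner uses perfect CSI; increasing it shows the BER floor behaviour the paper quantifies analytically.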
Global Networks of Trade and Bits
Riccaboni, Massimo; Schiavo, Stefano
2012-01-01
Considerable efforts have been made in recent years to produce detailed topologies of the Internet. Although Internet topology data have been brought to the attention of a wide and somewhat diverse audience of scholars, so far they have been overlooked by economists. In this paper, we suggest that such data could be effectively treated as a proxy to characterize the size of the "digital economy" at country level and outsourcing: thus, we analyse the topological structure of the network of trade in digital services (trade in bits) and compare it with that of the more traditional flow of manufactured goods across countries. To perform meaningful comparisons across networks with different characteristics, we define a stochastic benchmark for the number of connections among each country-pair, based on hypergeometric distribution. Original data are thus filtered by means of different thresholds, so that we only focus on the strongest links, i.e., statistically significant links. We find that trade in bits displays...
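The hypergeometric benchmark can be sketched as a tail-probability test: given a population of N possible connections, K of which involve one country and n of which are observed for the other, the significance of k observed shared links is the survival probability below (a generic sketch with invented parameter names; the paper's exact null model may differ):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P[X >= k] for X ~ Hypergeometric(N, K, n): n draws without
    replacement from a population of N containing K 'successes'."""
    total = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / total
```

A link is then kept as statistically significant when this tail probability falls below the chosen threshold.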
HIGH-POWER TURBODRILL AND DRILL BIT FOR DRILLING WITH COILED TUBING
Energy Technology Data Exchange (ETDEWEB)
Robert Radtke; David Glowka; Man Mohan Rai; David Conroy; Tim Beaton; Rocky Seale; Joseph Hanna; Smith Neyrfor; Homer Robertson
2008-03-31
Commercial introduction of Microhole Technology to the gas and oil drilling industry requires an effective downhole drive mechanism which operates efficiently at relatively high RPM and low bit weight for delivering efficient power to the special high RPM drill bit for ensuring both high penetration rate and long bit life. This project entails developing and testing a more efficient 2-7/8 in. diameter Turbodrill and a novel 4-1/8 in. diameter drill bit for drilling with coiled tubing. The high-power Turbodrill was developed to deliver efficient power, and the more durable drill bit employed high-temperature cutters that can more effectively drill hard and abrasive rock. This project teams Schlumberger Smith Neyrfor and Smith Bits, and NASA AMES Research Center with Technology International, Inc. (TII), to deliver a downhole, hydraulically-driven power unit, matched with a custom drill bit designed to drill 4-1/8 in. boreholes with a purpose-built coiled tubing rig. The U.S. Department of Energy National Energy Technology Laboratory has funded Technology International, Inc., Houston, Texas, to develop a higher power Turbodrill and drill bit for use in drilling with a coiled tubing unit. This project entails developing and testing an effective downhole drive mechanism and a novel drill bit for drilling 'microholes' with coiled tubing. The new higher power Turbodrill is shorter, delivers power more efficiently, operates at relatively high revolutions per minute, and requires low weight on bit. The more durable thermally stable diamond drill bit employs high-temperature TSP (thermally stable) diamond cutters that can more effectively drill hard and abrasive rock. Expectations are that widespread adoption of microhole technology could spawn a wave of 'infill development' drilling of wells spaced between existing wells, which could tap potentially billions of barrels of bypassed oil at shallow depths in mature producing areas. At the same time, microhole
2-bit Flip Mutation Elementary Fitness Landscapes
Langdon, William
2010-01-01
Genetic Programming parity is not elementary. GP parity cannot be represented as the sum of a small number of elementary landscapes. Statistics of Parity's fitness landscape, including fitness distance correlation, are calculated. Using Walsh analysis, the eigenvalues and eigenvectors of the Laplacian of the two-bit flip fitness landscape are given, and a ruggedness measure for elementary landscapes is proposed. An elementary needle-in-a-haystack (NIH) landscape is g...
Blind One-Bit Compressive Sampling
2013-01-17
Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0 ... We begin by fixing notation and recalling some background from convex analysis: for the d-dimensional Euclidean space Rd, the class of all lower semicontinuous convex ...
Jianghan PDC Bits Open Good Market in Singapore
Institute of Scientific and Technical Information of China (English)
Wang Tongliang
1995-01-01
The PDC bits produced by the PDC Division of Jianghan Drill Bit Plant won a good reputation for their quality and appearance at the '94 South-east Asia Offshore Petroleum Engineering Product Exhibition held at the Singapore International Exhibition Center.
Acquisition and Retaining Granular Samples via a Rotating Coring Bit
Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart
2013-01-01
This device takes advantage of the centrifugal forces that are generated when a coring bit is rotated, and a granular sample is entered into the bit while it is spinning, making it adhere to the internal wall of the bit, where it compacts itself into the wall of the bit. The bit can be specially designed to increase the effectiveness of regolith capturing while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during the rotation of the bit. The bit can be designed with an internal flute that directs the regolith upward inside the bit. The use of both the teeth and flute can be implemented in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth. For that implementation, the bit can be designed such that when turning one way, the teeth guide the regolith outward of the bit and when turning in the opposite direction, the teeth will guide the regolith inward into the bit internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining granular sample, and the acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into the soil that was contained in a bucket. While spinning the bit (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit. To prove this point, the drill was turned horizontally, and the acquired soil was still inside the bit. The basic theory behind the process of retaining unconsolidated mass that can be acquired by the centrifugal forces of the bit is determined by noting that in order to stay inside the interior of the bit, the
1/N Perturbations in Superstring Bit Models
Thorn, Charles B
2015-01-01
We develop the 1/N expansion for stable string bit models, focusing on a model with bit creation operators carrying only transverse spinor indices a=1,...,s. At leading order (1/N=0), this model produces a (discretized) lightcone string with a "transverse space" of s Grassmann worldsheet fields. Higher orders in the 1/N expansion are shown to be determined by the overlap of a single large closed chain (discretized string) with two smaller closed chains. In the models studied here, the overlap is not accompanied with operator insertions at the break/join point. Then the requirement that the discretized overlap have a smooth continuum limit leads to the critical Grassmann "dimension" of s=24. This "protostring", a Grassmann analog of the bosonic string, is unusual, because it has no large transverse dimensions. It is a string moving in one space dimension and there are neither tachyons nor massless particles. The protostring, derived from our pure spinor string bit model, has 24 Grassmann dimensions, 16 of which could be bosonized to form 8 compactified bosonic dimensions, leaving 8 Grassmann dimensions: the worldsheet content of the superstring.
NSC 800, 8-bit CMOS microprocessor
Suszko, S. F.
1984-01-01
The NSC 800 is an 8-bit CMOS microprocessor manufactured by National Semiconductor Corp., Santa Clara, California. The 8-bit microprocessor chip with 40-pad pin-terminals has eight address buffers (A8-A15), eight data address -- I/O buffers (AD(sub 0)-AD(sub 7)), six interrupt controls and sixteen timing controls with a chip clock generator and an 8-bit dynamic RAM refresh circuit. The 22 internal registers have the capability of addressing 64K bytes of memory and 256 I/O devices. The chip is fabricated on N-type (100) silicon using self-aligned polysilicon gates and local oxidation process technology. The chip interconnect consists of four levels: Aluminum, Polysi 2, Polysi 1, and P(+) and N(+) diffusions. The four levels, except for contact interface, are isolated by interlevel oxide. The chip is packaged in a 40-pin dual-in-line (DIP), side brazed, hermetically sealed, ceramic package with a metal lid. The operating voltage for the device is 5 V. It is available in three operating temperature ranges: 0 to +70 C, -40 to +85 C, and -55 to +125 C. Two devices were submitted for product evaluation by F. Stott, MTS, JPL Microprocessor Specialist. The devices were pencil-marked and photographed for identification.
Verilog Implementation of 32-Bit CISC Processor
Directory of Open Access Journals (Sweden)
P.Kanaka Sirisha
2016-04-01
Full Text Available The project deals with the design of a 32-bit CISC processor and the modeling of its components in Verilog. The entire processor uses a 32-bit bus to deal with all the registers and the memories. The processor implements various arithmetic, logical, and data-transfer operations using variable-length instructions, which is the core property of the CISC architecture. It also supports various addressing modes to perform a 32-bit instruction. The processor uses a Harvard architecture (i.e., separate program and data memories) and hence has different buses to negotiate with the program memory and data memory individually; this feature enhances its speed, and accordingly it has two different program counters to point to memory locations in the program memory and data memory. The processor provides instruction queuing, which saves the time needed to fetch an instruction and hence increases the speed of operation, and an interrupt service routine to handle interrupts.
Reversible n-Bit to n-Bit Integer Haar-Like Transforms
Energy Technology Data Exchange (ETDEWEB)
Senecal, J; Duchaineau, M; Joy, K I
2003-11-03
We introduce a wavelet-like transform similar to the Haar transform, but with the properties that it packs the results into the same number of bits as the original data and is reversible. Our method, called TLHaar, uses table lookups to replace the averaging, differencing, and bit shifting performed in a Haar Integer Wavelet Transform (IWT). TLHaar maintains the same coefficient magnitude relationships for the low- and high-pass coefficients as true Haar, but reorders them to fit into the same number of bits as the input signal, thus eliminating the sign bit that is added to the Haar IWT output coefficients. Eliminating the sign bit avoids using extra memory and speeds the transform process. We tested TLHaar on a variety of image types, and compared to the Haar IWT, TLHaar is significantly faster. For image data with lines or hard edges, TLHaar coefficients compress better than those of the Haar IWT. Due to its speed, TLHaar is suitable for streaming hardware implementations with fixed data sizes, such as DVI channels.
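For reference, the reversible integer Haar step that TLHaar replaces with table lookups can be written as a lifting pair; note that the detail coefficient d carries the extra sign bit that TLHaar is designed to eliminate. (A standard construction, not the paper's code.)

```python
def haar_forward(a, b):
    """One reversible integer Haar step: floor-mean s and difference d."""
    d = a - b
    s = b + (d >> 1)        # arithmetic shift: s == floor((a + b) / 2)
    return s, d

def haar_inverse(s, d):
    """Exact inverse of haar_forward."""
    b = s - (d >> 1)
    return b + d, b          # returns (a, b)
```

Because the shift is an exact floor division on integers, the pair round-trips every integer input with no growth in s, while d needs one extra (sign) bit.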
High Reproduction Rate versus Sexual Fidelity
Sousa, A. O.; de Oliveira, S. Moss
2000-01-01
We introduce fidelity into the bit-string Penna model for biological ageing and study the advantage of this fidelity when it produces a higher survival probability of the offspring due to paternal care. We attribute a lower reproduction rate to the faithful males but a higher death probability to the offspring of non-faithful males that abandon the pups to mate other females. The fidelity is considered as a genetic trait which is transmitted to the male offspring (with or without error). We s...
Progressive and Error-Resilient Transmission Strategies for VLC Encoded Signals over Noisy Channels
Directory of Open Access Journals (Sweden)
Guillemot Christine
2006-01-01
Full Text Available This paper addresses the issue of robust and progressive transmission of signals (e.g., images, video) encoded with variable length codes (VLCs) over error-prone channels. This paper first describes bitstream construction methods offering good properties in terms of error resilience and progressivity. In contrast with related algorithms described in the literature, all proposed methods have a linear complexity as the sequence length increases. The applicability of soft-input soft-output (SISO) and turbo decoding principles to resulting bitstream structures is investigated. In addition to error resilience, the amenability of the bitstream construction methods to progressive decoding is considered. The problem of code design for achieving good performance in terms of error resilience and progressive decoding with these transmission strategies is then addressed. The VLC code has to be such that the symbol energy is mainly concentrated on the first bits of the symbol representation (i.e., on the first transitions of the corresponding codetree). Simulation results reveal high performance in terms of symbol error rate (SER) and mean-square reconstruction error (MSE). These error-resilience and progressivity properties are obtained without any penalty in compression efficiency. Codes with such properties are of strong interest for the binarization of M-ary sources in state-of-the-art image and video coding systems making use of, for example, the EBCOT or CABAC algorithms. A prior statistical analysis of the signal allows the construction of the appropriate binarization code.
A novel bit-quad-based Euler number computing algorithm.
Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao
2015-01-01
The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrated that our method significantly outperforms conventional Euler number computing algorithms.
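The classical bit-quad method that such algorithms improve upon counts 2×2 patterns over the zero-padded image (Gray's formula); the paper's optimization of counting only two patterns at about 1.75 pixel reads per quad is not reproduced in this baseline sketch:

```python
def euler_number(img, connectivity=8):
    """Euler number (objects minus holes) of a binary image, list of 0/1
    rows, via Gray's bit-quad counts."""
    h, w = len(img), len(img[0])
    # zero-pad by one pixel so every foreground pixel is covered by 4 quads
    p = [[0] * (w + 2)] + [[0] + list(r) + [0] for r in img] + [[0] * (w + 2)]
    q1 = q3 = qd = 0
    for r in range(h + 1):
        for c in range(w + 1):
            quad = (p[r][c], p[r][c + 1], p[r + 1][c], p[r + 1][c + 1])
            s = sum(quad)
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad in ((1, 0, 0, 1), (0, 1, 1, 0)):
                qd += 1                      # diagonal pair
    if connectivity == 8:
        return (q1 - q3 - 2 * qd) // 4
    return (q1 - q3 + 2 * qd) // 4           # 4-connectivity
```

The diagonal quads are exactly where the 4- and 8-connectivity conventions disagree, which is why they enter the two formulas with opposite signs.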
... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...
System Measures Errors Between Time-Code Signals
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses are resolved to 2 microseconds. The basic principle of computation of timing errors is as follows: the central processing unit in a microcontroller constantly monitors time data received from the time-code generators for changes in 1-second time-code intervals. In response to any such change, the microprocessor buffers the count of a 16-bit internal timer.
Nichols, Ellert R; Shadabi, Elnaz; Craig, Douglas B
2009-06-01
The role of translation error in the catalytic and electrophoretic heterogeneity of individual Escherichia coli beta-galactosidase molecules was investigated using CE-LIF. An E. coli rpsL mutant with a hyperaccurate translation phenotype produced enzyme molecules that exhibited significantly less catalytic heterogeneity but no reduction in electrophoretic heterogeneity. Enzyme expressed with streptomycin-induced translation error had increased thermolability, lower activity, and no significant change in catalytic or electrophoretic heterogeneity. Modeling of the electrophoretic behaviour of beta-galactosidase suggested that variation in the hydrodynamic radius may be the most significant contributor to electrophoretic heterogeneity.