Framed bit error rate testing for 100G Ethernet equipment
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert;
2010-01-01
of performing bit error rate testing at 100Gbps. In particular, we show how Bit Error Rate Testing (BERT) can be performed over an aggregated 100G Attachment Unit Interface (CAUI) by encapsulating the test data in Ethernet frames at line speed. Our results show that framed bit error rate testing can...
Modeling of Bit Error Rate in Cascaded 2R Regenerators
DEFF Research Database (Denmark)
Öhman, Filip; Mørk, Jesper
2006-01-01
This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the...
Approximate Minimum Bit Error Rate Equalization for Fading Channels
Directory of Open Access Journals (Sweden)
Levendovszky Janos
2010-01-01
A novel channel equalizer algorithm is introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The algorithm is based on minimizing the bit error rate (BER) using a fast approximation of its gradient with respect to the equalizer coefficients. This approximation is obtained by estimating the exponential summation in the gradient with only some carefully chosen dominant terms. The paper derives an algorithm to calculate these dominant terms in real time. Summing only these dominant terms provides a highly accurate approximation of the true gradient. Combined with a fast adaptive channel state estimator, the new equalization algorithm yields better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) equalizers. The performance of the new method is tested by simulations performed on standard wireless channels. The performance analysis shows that the new equalizer is capable of efficient channel equalization and of maintaining a relatively low bit error probability over channels corrupted by frequency selectivity. Hence, the new algorithm can contribute to ensuring QoS communication over highly distorted channels.
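The dominant-term idea in this abstract can be illustrated with a toy sketch (all values hypothetical, not from the paper): a sum of Gaussian-tail exponentials, of the kind that appears in BER-gradient expressions, is approximated by its few largest terms.

```python
import math

def ber_gradient_terms(distances, sigma):
    """Each term exp(-d_k^2 / (2 sigma^2)) models one summand of an
    exponential BER-gradient sum; small distances dominate."""
    return [math.exp(-d * d / (2.0 * sigma * sigma)) for d in distances]

def dominant_sum(terms, m):
    """Approximate the full sum by its m largest terms."""
    return sum(sorted(terms, reverse=True)[:m])

# Hypothetical decision distances for an equalizer scenario.
distances = [0.4, 0.5, 0.9, 1.3, 1.8, 2.2, 2.6, 3.0]
sigma = 0.3
terms = ber_gradient_terms(distances, sigma)
full = sum(terms)
approx = dominant_sum(terms, 2)          # keep only the 2 dominant terms
rel_err = abs(full - approx) / full      # small: the tail terms are negligible
```

With these numbers the two dominant terms already capture over 95% of the sum, which is why truncating the summation barely perturbs the gradient direction.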
Improving bit error rate through multipath differential demodulation
Lize, Yannick Keith; Christen, Louis; Nuccio, Scott; Willner, Alan E.; Kashyap, Raman
2007-02-01
Differential phase shift keyed (DPSK) transmission is currently under serious consideration as a deployable data modulation format for high-capacity optical communication systems, due mainly to its 3 dB OSNR advantage over intensity modulation. However, DPSK OSNR requirements are still 3 dB higher than those of its coherent counterpart, PSK. Some strategies have been proposed to reduce this penalty through multichip soft detection, but the improvement is limited to 0.3 dB at a BER of 10^-3. Better performance is expected from other soft-detection schemes using feedback control, but the implementation is not straightforward. We present here an optical multipath error correction technique for differentially encoded modulation formats such as differential phase shift keying (DPSK) and differential polarization shift keying (DPolSK) for fiber-based and free-space communication. This multipath error correction method combines optical and electronic logic gates. The scheme can easily be implemented using commercially available interferometers and high-speed logic gates, and does not require any data overhead; it therefore does not affect the effective bandwidth of the transmitted data. It is not merely compatible but also complementary to error correction codes commonly used in optical transmission systems, such as forward error correction (FEC). The technique consists of separating the demodulation at the receiver into multiple paths. Each path consists of a Mach-Zehnder interferometer with an integer-bit delay, with a different delay used in each path. Some basic logical operations follow, and the three paths are compared using a simple majority vote algorithm. Receiver sensitivity is improved by 0.35 dB in simulations and 1.5 dB experimentally at a BER of 10^-3.
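The multipath majority vote can be sketched with a purely binary toy model (assumed here: equal, independent flip probabilities per path, which abstracts away the optical implementation). A 1-bit-delay interferometer detects x[n] = a[n] XOR a[n-1]; a 2-bit-delay path detects y[n] = a[n] XOR a[n-2] = x[n] XOR x[n+1], giving three independent estimates of each data bit to vote on.

```python
import random

random.seed(7)
N, p = 20000, 0.05                   # data bits per run, per-path flip probability

a = [random.randint(0, 1) for _ in range(N + 2)]   # phase bits
x = [a[n] ^ a[n - 1] for n in range(1, N + 2)]     # data bits (1-bit differences)
y = [a[n] ^ a[n - 2] for n in range(2, N + 2)]     # 2-bit-delay detector output

flip = lambda b: b ^ (random.random() < p)         # independent hard-decision errors
x1 = [flip(b) for b in x]                          # noisy 1-bit-delay decisions
y2 = [flip(b) for b in y]                          # noisy 2-bit-delay decisions

est_errors = 0
for k in range(1, N):
    e1 = x1[k]                   # direct estimate
    e2 = y2[k - 1] ^ x1[k - 1]   # from y[k-1] = x[k-1] XOR x[k]
    e3 = y2[k] ^ x1[k + 1]       # from y[k]   = x[k]   XOR x[k+1]
    vote = 1 if e1 + e2 + e3 >= 2 else 0
    est_errors += vote != x[k]

raw_errors = sum(x1[k] != x[k] for k in range(1, N))
```

In this toy model the voted error count comes out well below the single-path count, mirroring (qualitatively only) the sensitivity gain reported in the abstract.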
Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding
Directory of Open Access Journals (Sweden)
Haider M. AlSabbagh
2012-03-01
The minimum energy (ME) coding scheme combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) with respect to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing an RF link with on-off keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR for a given number of users (receivers).
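The dependence of BER on the number of users can be sketched with the standard Gaussian approximation for asynchronous DS-CDMA with random spreading (a textbook formula, not necessarily the expressions used in this paper; processing gain and SNR values are hypothetical): MAI from the other K-1 users is treated as extra Gaussian noise of relative variance (K-1)/(3N).

```python
import math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ds_cdma_ber(K, N, ebn0_db):
    """Standard Gaussian approximation for asynchronous DS-CDMA:
    BER ~ Q(1 / sqrt((K-1)/(3N) + N0/(2Eb))), processing gain N."""
    ebn0 = 10 ** (ebn0_db / 10.0)
    sinr = 1.0 / ((K - 1) / (3.0 * N) + 1.0 / (2.0 * ebn0))
    return Q(math.sqrt(sinr))

# BER grows with the number of active users at fixed Eb/N0.
bers = [ds_cdma_ber(K, N=31, ebn0_db=10.0) for K in (1, 5, 10, 20)]
```

The single-user value reduces to plain BPSK in AWGN; each added user degrades the effective SINR, which is the trade-off the abstract quantifies.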
Analytical expression for the bit error rate of cascaded all-optical regenerators
DEFF Research Database (Denmark)
Mørk, Jesper; Öhman, Filip; Bischoff, S.
2003-01-01
We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed....
Optimal GSTDN/TDRSS bit error rate evaluation using limited sample sizes
Coffey, R. E.; Lawrence, G. M.; Stuart, J. R.
1982-01-01
Statistical studies of telemetry errors were made on data from the Solar Mesosphere Explorer (SME). Examination of frame sync words, as received at the ground station, indicated a wide spread of Bit Error Rates (BER) among stations. A study of the distribution of errors per station pass, however, showed that there was a tendency for the station software to add an even number of spurious errors to the count. A count of wild points in science data, rejecting drop-outs and other system errors, yielded an average random BER of 3.1 × 10^-6 with 99% confidence limits of 2.6 × 10^-6 and 3.8 × 10^-6. The system errors are typically 5 to 100 times more frequent than the truly random errors.
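Confidence limits on a BER estimated from a limited error count can be sketched with the Wilson score interval (a standard binomial interval, not necessarily the method used in this study; the counts below are illustrative stand-ins chosen to reproduce a point estimate near 3.1 × 10^-6).

```python
import math

def wilson_interval(errors, n_bits, z=2.576):
    """Wilson score interval for a binomial proportion; z = 2.576
    corresponds to roughly 99% confidence."""
    p = errors / n_bits
    denom = 1.0 + z * z / n_bits
    center = (p + z * z / (2.0 * n_bits)) / denom
    half = (z / denom) * math.sqrt(
        p * (1.0 - p) / n_bits + z * z / (4.0 * n_bits * n_bits))
    return center - half, center + half

# Hypothetical: 310 bit errors observed in 10^8 bits -> BER ~ 3.1e-6.
lo, hi = wilson_interval(310, 10**8)
```

With these counts the interval comes out near (2.7, 3.6) × 10^-6, the same order as the 2.6-3.8 × 10^-6 limits quoted in the abstract.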
Novel relations between the ergodic capacity and the average bit error rate
Yilmaz, Ferkan
2011-11-01
Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems, and recent research has revealed the strong impact of designing and implementing wireless technologies based on these two performance indicators. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
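The connection between the two metrics can be made concrete with a purely illustrative computation (standard textbook formulas for BPSK over Rayleigh fading, not the paper's new relations): both the average BER and the ergodic capacity are expectations over the same instantaneous-SNR distribution.

```python
import math

# Illustrative: BPSK over Rayleigh fading, instantaneous SNR exponentially
# distributed with mean gamma_bar. Both metrics integrate over this one pdf.
gamma_bar = 8.0              # average SNR (linear scale), hypothetical
M = 200000                   # midpoint-rule grid points
upper = 60.0 * gamma_bar     # integration cutoff (tail is negligible)

def pdf(g):
    return math.exp(-g / gamma_bar) / gamma_bar

dg = upper / M
avg_ber = 0.0
capacity = 0.0
for k in range(M):
    g = (k + 0.5) * dg
    avg_ber += 0.5 * math.erfc(math.sqrt(g)) * pdf(g) * dg  # Q(sqrt(2g))
    capacity += math.log2(1.0 + g) * pdf(g) * dg            # E[log2(1+g)]

# Known closed form for the average BER of BPSK over Rayleigh fading.
closed_form = 0.5 * (1.0 - math.sqrt(gamma_bar / (1.0 + gamma_bar)))
```

Because both quantities are functionals of the same SNR pdf, it is plausible that one can be expressed through the other, which is the kind of link the paper formalizes.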
HTS basic RSFQ cells for an optimal bit-error rate
International Nuclear Information System (INIS)
Thermal noise strongly influences the operation of RSFQ (rapid single flux quantum) logic circuits made of high-temperature superconductors (HTS). In the past, circuit design was based on fabrication yield optimization. A new theoretical study, using a method for general determination of the digital bit-error rate (BER), gives hope of developing such devices with large immunity against noise. According to this study, the design parameters of a circuit optimized for fabrication yield are far from those giving its minimum bit-error rate. Only at temperatures close to 4 K do the parameters determined for fabrication yield match the parameters obtained with BER optimization. For this reason, a new reliable technology for fabrication of HTS circuits is required. We have developed a new fabrication process to serve as the basis for a proof of this new design approach. We have calculated the bit-error rate of a newly designed RSFQ chip with realistic values derived from a test chip fabricated with this new multilayer technology. The new technology contains three superconductor thin films. An inductance smaller than 0.5 pH per square has been reached by using a ground plane. (author)
A minimum bit error-rate detector for amplify and forward relaying systems
Ahmed, Qasim Zeeshan
2012-05-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.
Ahmed, Qasim Zeeshan
2014-04-01
The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network, and has led to new challenges in designing protocols and detectors for cooperative communications. Among the various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system, an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of linear detectors such as channel inversion, maximal ratio combining, biased maximum likelihood, and minimum mean square error detectors. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.
Symbol and Bit Error Rates Analysis of Hybrid PIM-CDMA
Directory of Open Access Journals (Sweden)
Ghassemlooy Z.
2005-01-01
A hybrid pulse interval modulation code-division multiple-access (hPIM-CDMA) scheme employing the strict optical orthogonal code (SOCC), with unity auto- and cross-correlation constraints, for indoor optical wireless communications is proposed. In this paper, we analyse the symbol error rate (SER) and bit error rate (BER) of hPIM-CDMA. In the analysis, we consider multiple access interference (MAI), self-interference, and the hybrid nature of the hPIM-CDMA signal detection, which is based on the matched filter (MF). It is shown that the BER/SER performance can only be evaluated if the bit resolution M conforms to the condition set by the number of consecutive false alarm pulses that might occur and be detected, so that one symbol being divided into two is unlikely to occur. Otherwise, the probability of SER and BER becomes extremely high and indeterminable. We show that for a large number of users, the BER improves when increasing the code weight w. The results presented are compared with other modulation schemes.
Influence of wave-front aberrations on bit error rate in inter-satellite laser communications
Yang, Yuqiang; Han, Qiqi; Tan, Liying; Ma, Jing; Yu, Siyuan; Yan, Zhibin; Yu, Jianjie; Zhao, Sheng
2011-06-01
We derive the bit error rate (BER) of inter-satellite laser communication (lasercom) links with on-off keying systems in the presence of both wave-front aberrations and pointing error, without considering detector noise. Wave-front aberrations induced by the receiver terminal have no influence on the BER, while wave-front aberrations induced by the transmitter terminal increase it. The BER depends on the area S truncated out by the threshold intensity of the detector (such as an APD) on the intensity function in the receiver plane, and changes with the root mean square (RMS) of the wave-front aberrations. Numerical results show that the BER rises with increasing RMS value. The influences of astigmatism, coma, curvature, and spherical aberration on the BER are compared. This work can benefit the design of lasercom systems.
Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels
Directory of Open Access Journals (Sweden)
Zexian Li
2004-09-01
Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function, and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral whose integrand is composed of tabulated functions that can easily be computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
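The "alternative expression for the Q-function" that underlies this style of finite-range-integral analysis is Craig's formula, Q(x) = (1/π) ∫₀^{π/2} exp(-x²/(2 sin²θ)) dθ, which can be checked numerically against the erfc-based definition:

```python
import math

def q_erfc(x):
    """Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_craig(x, n=4000):
    """Craig's finite-range form of the Q-function, evaluated with a
    simple midpoint rule over (0, pi/2)."""
    h = (math.pi / 2.0) / n
    total = sum(math.exp(-x * x / (2.0 * math.sin((k + 0.5) * h) ** 2))
                for k in range(n))
    return total * h / math.pi
```

Craig's form is convenient exactly because the integration range is finite and the argument x appears only inside the exponential, which is what lets an average BER collapse into a single finite-range integral after averaging over the fading distribution.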
Masud, M A; Rahman, M A
2010-01-01
At the beginning of the 21st century there was a dramatic shift in the market dynamics of telecommunication services. Downlink transmission from the base station to the mobile using M-ary quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK) is considered in a Wideband Code Division Multiple Access (W-CDMA) system. We analyse the performance of these modulation techniques when the system is subjected to additive white Gaussian noise (AWGN) and multipath Rayleigh fading in the channel. The work uses MATLAB 7.6 to simulate and evaluate the bit error rate (BER) versus signal-to-noise ratio (SNR) for the W-CDMA system models. The analysis covers QPSK and 16-QAM as used in W-CDMA systems; the system could therefore adopt the modulation technique best suited to the channel quality...
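A minimal stdlib-only analogue of this kind of BER-versus-SNR simulation (plain QPSK in AWGN only; no W-CDMA spreading or Rayleigh fading, and the Eb/N0 value is arbitrary) exploits the fact that Gray-coded QPSK behaves as two independent BPSK rails:

```python
import math, random

random.seed(1)

def qpsk_ber_sim(ebn0_db, n_bits=200000):
    """Gray-coded QPSK in AWGN is two independent antipodal (BPSK) rails,
    so each bit is a +-1 decision with noise std sqrt(N0/2), taking Eb = 1."""
    ebn0 = 10 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))
    errs = 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        s = 1.0 if bit else -1.0
        r = s + random.gauss(0.0, sigma)      # received sample on one rail
        errs += (r > 0.0) != bool(bit)
    return errs / n_bits

def qpsk_ber_theory(ebn0_db):
    """Known closed form for QPSK/BPSK bit error rate in AWGN."""
    return 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10.0)))

sim = qpsk_ber_sim(4.0)                       # Eb/N0 = 4 dB, hypothetical
```

The Monte Carlo estimate lands close to the closed form, which is the same sanity check one would perform on a MATLAB BER curve.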
Directory of Open Access Journals (Sweden)
Claude D'Amours
2011-01-01
We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10log(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and provides insight into why these techniques improve on conventional MIMO-CDMA systems.
Ahmed, Qasim Zeeshan
2013-01-01
In this letter, a new detector is proposed for an amplify-and-forward (AF) relaying system communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses the Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
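The generalized Gaussian kernel family mentioned here can be sketched directly (a generic kernel-density illustration, not the letter's detector; the shape parameters below are hypothetical): shape β = 2 recovers the Gaussian kernel, while large β approaches a boxcar (uniform) shape.

```python
import math, random

random.seed(3)

def gg_kernel(u, alpha, beta):
    """Generalized Gaussian kernel, normalized to integrate to 1:
    K(u) = beta / (2 alpha Gamma(1/beta)) * exp(-|u/alpha|^beta)."""
    norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return norm * math.exp(-abs(u / alpha) ** beta)

def kde(x, samples, alpha, beta, h=0.5):
    """Kernel density estimate at x with bandwidth h."""
    return sum(gg_kernel((x - s) / h, alpha, beta) for s in samples) / (len(samples) * h)

samples = [random.gauss(0.0, 1.0) for _ in range(500)]

# The estimated density should integrate to ~1 over a wide interval.
dx = 0.05
mass = sum(kde(-8.0 + (k + 0.5) * dx, samples, alpha=1.0, beta=2.0) * dx
           for k in range(320))
```

Varying β trades smoothness against tail weight, which is the flexibility the letter exploits when fitting the decision statistic's density.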
Cox, Christina B.; Coney, Thom A.
1999-01-01
The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms: Adaptive coding, attenuation, propagation, rain, satellite communication, satellites.
Kim, Do-Hyung; Cho, Janghyun; Moon, Hyungbae; Jeon, Sungbin; Park, No-Cheol; Yang, Hyunseok; Park, Kyoung-Su; Park, Young-Pil
2013-09-01
Optimized image restoration is suggested for angular-multiplexing-page-based holographic data storage. To improve the bit error rate (BER), an extended high-frequency enhancement filter is recalculated from the point spread function (PSF) and a Gaussian mask as the image restoration filter. Using the extended image restoration filter, the proposed system reduces the number of processing steps compared with the image upscaling method and provides better performance in BER and SNR. Numerical simulations and experiments were performed to verify the proposed method. The proposed system exhibited a marked improvement in BER from 0.02 to 0.002 for a Nyquist factor of 1.1, and from 0.006 to 0 for a Nyquist factor of 1.2. Moreover, calculation was more than 3 times faster than image restoration with PSF upscaling, owing to the reductions in the number of system processing steps and in the calculation load.
Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C
2016-01-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287
Nazrul Islam, A. K. M.; Majumder, S. P.
2015-06-01
An analysis is carried out to evaluate the conditional bit error rate, conditioned on a given value of pointing error, for a free space optical (FSO) link with multiple receivers using equal gain combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BER of SISO and SIMO FSO links is analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of the pointing jitter parameters and the number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10^-10. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
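The benefit of adding receivers under EGC can be illustrated with a toy Monte Carlo (BPSK over Rayleigh branches, no pointing error or FSO channel model; all parameters hypothetical): the co-phased branch amplitudes add coherently while the branch noises add only in power.

```python
import math, random

random.seed(5)

def egc_ber(n_branches, trials=100000):
    """BPSK with L equal-gain-combined Rayleigh branches (toy model):
    decision variable z = s * sum(a_i) + combined noise."""
    errs = 0
    for _ in range(trials):
        s = 1.0 if random.random() < 0.5 else -1.0
        # Rayleigh amplitude per branch with E[a^2] = 1.
        amp = sum(math.hypot(random.gauss(0.0, 0.7071),
                             random.gauss(0.0, 0.7071))
                  for _ in range(n_branches))
        # L independent unit-variance noises combine to std sqrt(L).
        noise = random.gauss(0.0, math.sqrt(n_branches))
        errs += ((s * amp + noise) > 0.0) != (s > 0.0)
    return errs / trials

bers = [egc_ber(L) for L in (1, 2, 4)]   # diversity drives the BER down
```

The monotone drop in BER with the branch count is the same diversity effect behind the 4 dB and 9 dB sensitivity gains quoted for 2 and 4 photodetectors.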
Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link
Directory of Open Access Journals (Sweden)
Matteo Berioli
2007-05-01
The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and the performance which can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
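The Markov-chain view of ModCod transitions can be sketched as follows (the 3-state transition matrix and the spectral-efficiency values are entirely hypothetical, not taken from the paper): once the stationary distribution is known, the long-run average spectral efficiency is a simple weighted sum.

```python
# Hypothetical 3-state ModCod chain (e.g. a robust, a medium, and an
# efficient mode); rows are current state, columns the next state.
P = [[0.90, 0.10, 0.00],
     [0.05, 0.85, 0.10],
     [0.00, 0.10, 0.90]]
eff = [1.0, 2.0, 3.0]          # illustrative spectral efficiencies (bit/s/Hz)

# Power iteration toward the stationary distribution pi = pi * P.
pi = [1.0 / 3.0] * 3
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

avg_eff = sum(p * e for p, e in zip(pi, eff))
```

Shrinking the safety margins would shift probability mass toward the efficient states (raising avg_eff) while raising the BER in the states where the margin was cut, which is exactly the trade-off the paper tunes.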
Li, Mi; Li, Bowen; Zhang, Xuping; Song, Yuejiang; Liu, Jia; Tu, Guojie
2015-08-01
Space optical communication is attracting increasing attention because it offers advantages such as high security and better communication quality compared with microwave communication. Communication at data rates of Gb/s has already been achieved, and the next generation of space optical systems targets the higher data rate of 40 Gb/s, which traditional optical communication systems cannot satisfy. This paper introduces a ground optical communication system with a 40 Gb/s data rate as a step toward space optical communication at high data rates. At 40 Gb/s, a waveguide modulator must be applied to modulate the optical signal, which is then amplified by a laser amplifier; moreover, a more sensitive avalanche photodiode (APD) serves as the detector to increase the communication quality. Based on this system, we analyze the communication quality of the downlink of a space optical communication system at 40 Gb/s. The bit error rate (BER) performance, an important measure of communication quality, is discussed as a function of several parameter ratios. The results show that there exists an optimum ratio of gain factor to divergence angle giving the best BER performance, and that increasing the ratio of receiving diameter to divergence angle also improves communication quality. These results help in understanding the behavior of optical communication systems at high data rates and can contribute to system design.
Directory of Open Access Journals (Sweden)
James Osuru Mark
2011-01-01
The multicarrier code division multiple access (MC-CDMA) system has received considerable attention from researchers owing to its great potential for achieving high data rate transmission in wireless communications. The performance of the system degrades due to the detrimental effects of multipath fading; similarly, non-orthogonality of the spreading codes can cause interference. This paper addresses the performance of an MC-CDMA system under the influence of a frequency-selective generalized η-µ fading channel and multiple access interference caused by other active users. We apply a Gaussian approximation technique to analyse the performance of the system. The average bit error rate is derived and expressed in terms of Gauss hypergeometric functions. Maximal ratio combining diversity is utilized to alleviate the deleterious effect of multipath fading. We observe that the system performance improves as the parameter η increases (format 1) or decreases (format 2).
Bit error rate analysis of Wi-Fi and bluetooth under the interference of 2.45 GHz RFID
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
IEEE 802.11b WLAN (Wi-Fi) and IEEE 802.15.1 WPAN (Bluetooth) are prevalent nowadays, and radio frequency identification (RFID) is an emerging technology with ever wider applications. 802.11b occupies the unlicensed industrial, scientific and medical (ISM) band (2.4-2.4835 GHz) and uses direct sequence spread spectrum (DSSS) to alleviate narrowband interference and fading. Bluetooth is also a user of the ISM band and adopts frequency hopping spread spectrum (FHSS) to avoid mutual interference. RFID can operate on multiple frequency bands, such as 135 kHz, 13.56 MHz, and 2.45 GHz. When a 2.45 GHz RFID device, which uses FHSS, is collocated with 802.11b or Bluetooth, mutual interference is inevitable. Although DSSS and FHSS are applied to mitigate the interference, the performance degradation may still be significant. Therefore, in this article, the impact of 2.45 GHz RFID on 802.11b and Bluetooth is investigated. The bit error rates (BER) of 802.11b and Bluetooth are analyzed by establishing a mathematical model, and the simulation results are compared with the theoretical analysis to justify this model.
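A first-order ingredient of such interference models is the probability that two independent frequency hoppers land on the same channel in a slot. As a toy sketch (both hoppers assumed uniform over the 79 Bluetooth-style 1 MHz channels, ignoring dwell-time overlap and partial spectral overlap), the hit rate is simply 1/79:

```python
import random

random.seed(11)

N_CH = 79          # 1 MHz hop channels in the 2.4 GHz ISM band (Bluetooth-style)
trials = 200000

# Toy model: victim and interferer each draw a channel independently and
# uniformly every slot; a slot is "hit" when they coincide.
hits = sum(random.randrange(N_CH) == random.randrange(N_CH)
           for _ in range(trials))
collision_rate = hits / trials
analytic = 1.0 / N_CH
```

In a fuller model, the overall BER would then be a mixture of a high in-collision BER weighted by this hit probability and a low clear-slot BER, which is the structure the article's mathematical model elaborates.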
Chen, Yu-Ta; Ou-Yang, Mang; Lee, Cheng-Chung
2012-06-01
Although widely recognized as a promising candidate for the next generation of data storage devices, holographic data storage systems (HDSS) incur adverse effects such as noise, misalignment, and aberration. Therefore, based on the structural similarity (SSIM) concept, this work presents a more accurate locating approach than the gray level weighting method (GLWM). Three case studies demonstrate the effectiveness of the proposed approach. Case 1 focuses on achieving high performance with a Fourier lens in the HDSS; Cases 2 and 3 replace the Fourier lens with a normal lens to decrease the quality of the HDSS, and Case 3 demonstrates the feasibility of a defocused system in the worst-case scenario. Moreover, the bit error rate (BER) is evaluated over several averaging matrices extended from the located position. Experimental results demonstrate that the proposed SSIM method renders more accurate centering and a lower BER: 2 dB lower than the GLWM in Cases 1 and 2, and 1.5 dB lower in Case 3. PMID:22695607
Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo
2016-07-01
We study an N-dimensional measurement-device-independent quantum key distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied to other quantum information processing tasks.
Das, Bhargab; Joseph, Joby; Singh, Kehar
2007-08-01
One of the methods for smoothing the high intensity dc peak in the Fourier spectrum for reducing the reconstruction error in a Fourier transform volume holographic data storage system is to record holograms some distance away from or in front of the Fourier plane. We present the results of our investigation on the performance of such a defocused holographic data storage system in terms of bit-error rate and content search capability. We have evaluated the relevant recording geometry through numerical simulation, by obtaining the intensity distribution at the output detector plane. This has been done by studying the bit-error rate and the content search capability as a function of the aperture size and position of the recording material away from the Fourier plane. PMID:17676163
DEFF Research Database (Denmark)
Gibbon, Timothy Braidwood; Yu, Xianbin; Tafur Monroy, Idelfonso
We propose the novel generation of photonic ultra-wideband signals using an uncooled DFB laser. For the first time we experimentally demonstrate bit-for-bit DSP BER measurements for transmission of a 781.25 Mbit/s photonic UWB signal.
Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang
2015-07-01
Based on space diversity reception, a binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For both independently and identically distributed and independently but non-identically distributed dual branches, analytical average bit error rate (ABER) expressions in terms of the Fox H-function are derived for the maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.
Directory of Open Access Journals (Sweden)
Sulyman Ahmed Iyanda
2005-01-01
Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines the best M branches out of the L available diversity resources (M ≤ L). In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
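The threshold-based combining rule described above can be sketched in a few lines. This is a hypothetical illustration of the selection rule only (the paper's closed-form Nakagami BER analysis is not reproduced): branches whose instantaneous SNR exceeds the threshold are MRC-combined, with a fallback to the single best branch when none qualify.

```python
import numpy as np

def threshold_gsc_snr(branch_snrs, threshold):
    """Combined output SNR of a threshold-based generalized selection
    combiner: MRC-style sum of the SNRs of all branches at or above
    the threshold; if no branch qualifies, use the best branch alone."""
    branch_snrs = np.asarray(branch_snrs, dtype=float)
    selected = branch_snrs[branch_snrs >= threshold]
    if selected.size == 0:
        return float(branch_snrs.max())   # fallback: pure selection diversity
    return float(selected.sum())          # MRC over the qualifying branches
```

With per-branch SNRs (1, 5, 3) and threshold 2.5, the combiner sums the two qualifying branches (output 8); with (1, 0.5) and threshold 2, it falls back to the best branch (output 1).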
Directory of Open Access Journals (Sweden)
Ibrahim A.Z. Qatawneh
2005-01-01
Digital communication systems use multitone channel (MC) transmission techniques with differentially encoded and differentially coherent demodulation. Today there are two principal MC applications: one is the high-speed digital subscriber loop and the other is the broadcasting of digital audio and video signals. This study compares multicarrier systems using offset quadrature phase shift keying (OQPSK) and offset 16-ary quadrature amplitude modulation (offset 16 QAM) for high-bit-rate wireless applications. The bit error rate (BER) performance of MC with offset 16 QAM and with OQPSK, using a guard interval in a fading environment, is evaluated via Monte Carlo simulation. BER results are presented for offset 16 QAM using a guard interval to protect against multipath delay, for frequency-selective Rayleigh fading channels and for two-path fading channels in the presence of additive white Gaussian noise (AWGN). BER results are also presented for MC with differentially encoded offset 16 QAM and for MC with differentially encoded OQPSK using a guard interval, for a frequency-flat Rician channel in the presence of AWGN. The performance of multitone systems is also compared with that of equivalent differentially encoded offset 16 QAM and OQPSK, with and without a guard interval, in the same fading environment.
DEFF Research Database (Denmark)
Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso
2010-01-01
We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature biased intensity modulation (IM), in terms of bit-error-rate (BER) and optical signal-to-noise-ratio (OSNR). In both links, self-heterodyne receivers perform down-conversion of the radio frequency (RF) subcarrier signal. A theoretical model including noise analysis is constructed to calculate the Q factor and estimate the BER performance. Furthermore, we experimentally validate our prediction in the theoretical modeling. Both the experimental and theoretical results show that the PM link offers superior OSNR receiver sensitivity performance (higher than 6 dB) over the quadrature biased IM counterpart.
Directory of Open Access Journals (Sweden)
Rashmi Mongre
2014-09-01
In digital communication system design, the main objective is to receive data as close as possible to the data sent from the transmitter. It is important to analyze the system in terms of probability of error to assess its performance. Each modulation technique performs differently when dealing with signals, which are normally affected by noise. A general expression for the BER is explained and simulated in this paper. The focus is a comparative performance analysis of BPSK, QPSK, 8PSK and 16PSK, i.e. M-ary PSK systems with M = 2, 4, 8 and 16. VHSIC Hardware Description Language (VHDL) was used to describe the design. The Xilinx ISE 8.1i tool was used for synthesis of this project, and ModelSim PE Student Edition 10.3c for functional simulation and logic verification of the waveforms. The BER curves for the different digital modulation techniques obtained after simulation are compared with the theoretical curves. All BER calculations assume an AWGN channel.
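The theoretical M-ary PSK curves such a comparison is made against are usually the Gray-coded approximation below; this is the textbook formula, not the paper's VHDL implementation. It is exact for BPSK and reproduces the well-known result that Gray-coded QPSK has the same BER per bit as BPSK.

```python
from math import erfc, sqrt, sin, pi, log2

def q_func(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def ber_mpsk_approx(ebn0_db, m):
    """Approximate Gray-coded M-PSK bit error rate over AWGN.
    Exact for BPSK (m=2); otherwise the standard high-SNR
    approximation (2/k) * Q(sqrt(2k*Eb/N0) * sin(pi/M)), k = log2(M)."""
    ebn0 = 10 ** (ebn0_db / 10)
    if m == 2:
        return q_func(sqrt(2 * ebn0))
    k = log2(m)
    return 2 * q_func(sqrt(2 * k * ebn0) * sin(pi / m)) / k
```

At a fixed Eb/N0 the BER grows with M for M ≥ 4, while QPSK and BPSK coincide, matching the ordering of the simulated curves described in the abstract.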
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak-to-moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as the Kolmogorov microscale, the relative strength of temperature and salinity fluctuations, the rate of dissipation of the mean squared temperature, and the rate of dissipation of turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness exhibits lower scintillation. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations. PMID:26560913
Bit error rate optimization of an acousto-optic tracking system for free-space laser communications
Sofka, J.; Nikulin, V.
2006-02-01
Optical communications systems have been gaining momentum with the increasing demand for transmission bandwidth over the last several years. Optical-cable-based solutions have become an attractive alternative to copper-based systems in the most bandwidth-demanding applications due to their higher bandwidth and longer inter-repeater distances. The promise of similar benefits over radio communications systems is driving research into free-space laser communications. Along with increased communications bandwidth, a free-space laser communications system offers lower power consumption and the possibility of covert data links due to the concentration of the laser energy in a narrow beam. A narrow beam, however, requires much more accurate and agile steering, so that a data link can be maintained between communication platforms in relative motion or in the presence of vibrations. This paper presents a laser beam tracking system employing an acousto-optic cell capable of deflecting a laser beam at a very high rate (on the order of tens of kHz). The tracking system is subjected to vibrations to simulate a realistic implementation, resulting in an increase in BER. The performance of the system can be significantly improved through digital control: a constant-gain controller is complemented by a Kalman filter whose parameters are optimized to achieve the lowest possible BER for a given vibration spectrum.
Influence of Fibre Channel Pressure on the Actual Quantum Bit Error Rate
Institute of Scientific and Technical Information of China (English)
吴佳楠; 魏荣凯; 陈丽; 周成; 朱德新; 宋立军
2015-01-01
An actual point-to-point quantum key distribution system with polarization encoding was built, based on the BB84 protocol, for testing under applied-pressure conditions. Pressure experiments on the fibre channel carrying the quantum key distribution were carried out, and a theoretical model of the quantum bit error rate was established using the positive operator-valued measure (POVM) method. The results show that, for a given pressure, the bit error rate increases with the angle of the applied force, in agreement with the theoretical prediction. At a given angle, the bit error rate exhibits a gently oscillating upward trend as the pressure increases; once the pressure exceeds a critical value, the bit error rate rises rapidly towards its limiting value, forcing the quantum key distribution system to re-establish the connection.
Ultra low bit-rate speech coding
Ramasubramanian, V
2015-01-01
"Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit rates of 1 kbit/s and less, particularly at the lower end of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit rates to be viable and provide a comprehensive overview of various techniques and systems in the literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.
SEU error rates in advanced digital CMOS
International Nuclear Information System (INIS)
Space-based electronics is exposed to cosmic radiation that results in bit reversal errors, or Single Event Upsets (SEUs). Such errors are generally taken into account through an error-rate quantification expressed in expected errors per bit-day (EBD). A procedure to evaluate this EBD for CMOS memory devices is presented here and applied to a typical data set. The method arises from a development of the upset-rate convolution integral. It can be applied to ground-based test data, and provides a realistic upset-rate estimate for space flight conditions.
Rate Control for MPEG-4 Bit Stream
Institute of Scientific and Technical Information of China (English)
王振洲; 李桂苓
2003-01-01
For a very long time, video processing dealt exclusively with fixed-rate sequences of rectangular images. Interest has recently been moving toward a more flexible concept in which the subject of the processing and encoding operations is a set of visual elements organized in both time and space in a flexible and arbitrarily complex way. The Moving Picture Experts Group (MPEG-4) standard supports this concept, and its verification model (VM) encoder has adopted scalable rate control (SRC) as the rate control scheme, which operates in the spatial domain and is compatible with constant bit rate (CBR) and variable bit rate (VBR) coding. In this paper, a new rate control algorithm based on the DCT domain instead of the pixel domain is presented. Moreover, a macroblock-level rate control scheme that computes the quantization step for each macroblock has been adopted. The experimental results show that the new algorithm achieves much better results than the original one in both peak signal-to-noise ratio (PSNR) and coding bits, and that it is more flexible than the test model 5 (TM5) rate control algorithm.
Institute of Scientific and Technical Information of China (English)
张宇; 杨益新; 田丰
2014-01-01
Due to limitations of power consumption, size and hardware complexity, sonar buoys usually transmit the received data over a wireless channel to terminal equipment on an airplane or ship for processing. Multipath propagation, fading and Doppler effects in the wireless channel introduce bit errors that ultimately degrade system performance. For a sonar buoy system operating over such transmission channels, the influence of the bit error rate on multi-target direction-of-arrival (DOA) estimation performance is studied, and the admissible BER threshold for DOA estimation is obtained via Monte Carlo simulation to guide the design of the complete sonar buoy system.
Ingels, F. M.; Schoggen, W. O.
1982-01-01
The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudorandom (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated; the demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.
Digital Signal Processing For Low Bit Rate TV Image Codecs
Rao, K. R.
1987-06-01
In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real-time full-motion color video are under various stages of development. Some companies have already brought such codecs to market, and they are being used by industry and some Federal agencies for video teleconferencing. In general, these codecs offer features such as multiplexing of audio and data, high-resolution graphics, encryption, error detection and correction, self-diagnostics, freeze-frame, split video, text overlay, etc. Transmitting the original color video on a 56 KBPS network requires a bit rate reduction of the order of 1400:1. Such large-scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts under investigation. Before resorting to data compression, various preprocessing operations such as noise filtering, composite-component transformation, and horizontal and vertical blanking interval removal are implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or predictive coding coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.
Continuous operation of high bit rate quantum key distribution
Dixon, A R; Yuan, Z. L.; Dynes, J. F.; Sharpe, A. W.; Shields, A. J.
2010-01-01
We demonstrate quantum key distribution with a secure bit rate exceeding 1 Mbit/s over 50 km of fiber, averaged over a continuous 36-hour period. Continuous operation at high bit rates is achieved using feedback systems to control the path length difference and polarization in the interferometer and the timing of the detection windows. High bit rates and continuous operation allow finite key size effects to be strongly reduced, achieving a key extraction efficiency of 96% compared to keys of infinite length.
Circuit and interconnect design for high bit-rate applications
Veenstra, H.
2006-01-01
This thesis presents circuit and interconnect design techniques and design flows that address the most difficult and ill-defined aspects of the design of ICs for high bit-rate applications. Bottlenecks in interconnect design, circuit design and on-chip signal distribution for high bit-rate applications are analysed, and solutions that circumvent these bottlenecks are presented. The methodologies presented indicate whether certain target bit-rates and operating frequencies can be realised in t...
Payment Error Rate Measurement (PERM)
U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...
Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor
2005-01-01
We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.
Comprehensive Error Rate Testing (CERT)
U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...
Measuring verification device error rates
International Nuclear Information System (INIS)
A verification device generates a Type I (II) error when it recommends to reject (accept) a valid (false) identity claim. For a given identity, the rates or probabilities of these errors quantify random variations of the device from claim to claim. These are intra-identity variations. To some degree, these rates depend on the particular identity being challenged, and there exists a distribution of error rates characterizing inter-identity variations. However, for most security system applications we only need to know averages of this distribution. These averages are called the pooled error rates. In this paper the authors present the statistical underpinnings for the measurement of pooled Type I and Type II error rates. The authors consider a conceptual experiment, ''a crate of biased coins''. This model illustrates the effects of sampling both within trials of the same individual and among trials from different individuals. Application of this simple model to verification devices yields pooled error rate estimates and confidence limits for these estimates. A sample certification procedure for verification devices is given in the appendix
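The "crate of biased coins" model above lends itself to a short simulation: each identity is a coin with its own error probability, and the pooled rate is total errors over total trials, mixing within-identity (binomial) and between-identity variation. A hedged sketch; the Beta distribution of per-identity rates is an assumed example, not from the paper.

```python
import numpy as np

def pooled_error_rate(per_identity_rates, trials_per_identity, rng):
    """Pooled Type I (or Type II) error rate estimate: run a fixed
    number of trials against each identity, where each identity has
    its own error probability, then divide total errors by total trials."""
    per_identity_rates = np.asarray(per_identity_rates, dtype=float)
    errors = rng.binomial(trials_per_identity, per_identity_rates)
    return errors.sum() / (trials_per_identity * len(per_identity_rates))

rng = np.random.default_rng(2)
# inter-identity variation: per-identity rates drawn from an assumed
# Beta(2, 98) distribution (mean error probability 0.02)
rates = rng.beta(2, 98, size=500)
est = pooled_error_rate(rates, 200, rng)
```

With enough identities and trials per identity, the pooled estimate converges to the mean of the inter-identity rate distribution, which is exactly the "average of the distribution" the abstract says most applications need.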
Comodulation masking release in bit-rate reduction systems
DEFF Research Database (Denmark)
Vestergaard, Martin D.; Rasmussen, Karsten Bo; Poulsen, Torben
1999-01-01
It has been suggested that the level dependence of the upper masking slope be utilised in perceptual models in bit-rate reduction systems. However, comodulation masking release (CMR) phenomena lead to a reduction of the masking effect when a masker and a probe signal are amplitude modulated with the same frequency.
Alternatives to speech in low bit rate communication systems
Lopes, Cristina Videira; Aguiar, Pedro M. Q.
2010-01-01
This paper describes a framework and a method with which speech communication can be analyzed. The framework consists of a set of low bit rate, short-range acoustic communication systems, such as speech, but that are quite different from speech. The method is to systematically compare these systems according to different objective functions such as data rate, computational overhead, psychoacoustic effects and semantics. One goal of this study is to better understand the nature of human communication.
Multiple-bit-rate clock recovery circuit: theory
International Nuclear Information System (INIS)
The multiple-bit-rate clock recovery circuit has recently been proposed as part of a communications packet switch. All packets must be the same length and be preceded by a frequency header, which is a number of consecutive ones (return-to-zero mode). The header is compared with the internal clock, and the result is used to set the output clock frequency. The clock rate is defined by the number of fluxons propagating in a ring oscillator, which is a closed circular Josephson transmission line. The theory gives the bit rate bandwidth as a function of internal clock frequency, header length and silence time (the maximum number of consecutive zeros in the packet).
Ingels, F.; Schoggen, W. O.
1981-01-01
The various methods of high bit transition density encoding are presented, and their relative performance is compared with respect to error propagation characteristics, transition properties and system constraints. A computer simulation of the system using the specific PN code recommended is included.
Obtaining Reliable Bit Rate Measurements in SNMP-Managed Networks
Carlsson, Patrik; Fiedler, Markus; Tutschku, Kurt; Chevul, Stefan; Nilsson, Arne A.
2002-01-01
The Simple Network Management Protocol, SNMP, is the most widespread standard for Internet management. As SNMP stacks are available on most equipment, this protocol has to be considered when it comes to performance management, traffic engineering and network control. However, especially when using the predominant version 1, SNMPv1, special care has to be taken to avoid erroneous results when calculating bit rates. In this work, we evaluate six off-the-shelf network components. We demonstrate t...
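One classic source of erroneous bit-rate calculations with SNMPv1 is wrap-around of the 32-bit ifInOctets counter between polls: a naive `c1 - c0` goes negative after a wrap. A minimal wrap-safe sketch (assuming at most one wrap per polling interval, which bounds how long the interval may be on fast links):

```python
def bit_rate_from_octet_counters(c0, c1, dt_seconds, counter_bits=32):
    """Bit rate from two successive octet-counter readings, compensating
    for wrap-around of the fixed-width SNMP counter (Counter32 wraps at
    2**32). Assumes at most one wrap occurred between the two polls."""
    modulus = 2 ** counter_bits
    delta_octets = (c1 - c0) % modulus   # handles c1 < c0 after a wrap
    return delta_octets * 8 / dt_seconds

# a Counter32 that wrapped between two 10-second polls:
rate = bit_rate_from_octet_counters(2**32 - 1000, 4000, 10.0)
```

Here the counter advanced by 5000 octets across the wrap, giving 4000 bit/s; a naive subtraction would have produced a large negative delta.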
Comodulation masking release in bit-rate reduction systems
DEFF Research Database (Denmark)
Vestergaard, Martin David; Rasmussen, Karsten Bo; Poulsen, Torben
1999-01-01
It has been suggested that the level dependence of the upper masking slope be utilized in perceptual models in bit-rate reduction systems. However, comodulation masking release (CMR) phenomena lead to a reduction of the masking effect when a masker and a probe signal are amplitude modulated with the same frequency. In bit-rate reduction systems the masker would be the audio signal and the probe signal would represent the quantization noise. Masking curves have been determined for sinusoids and 1-Bark-wide noise maskers in order to investigate the risk of CMR, when quantizing depths are fixed ... of 0.75. A CMR of up to 10 dB was obtained at a distance of 6 Bark above the masker. The amount of CMR was found to depend on the presentation level of the masker; a higher masker level leads to a higher CMR effect. Hence, the risk of CMR affecting the subjective performance of bit-rate reduction ...
Very low bit rate voice for packetized mobile applications
International Nuclear Information System (INIS)
This paper reports that transmitting digital voice via packetized mobile communications systems that employ relatively short packet lengths and narrow bandwidths often necessitates very low bit rate coding of the voice data. Sandia National Laboratories is currently developing an efficient voice coding system operating at 800 bits per second (bps). The coding scheme is a modified version of the 2400 bps NSA LPC-10e standard. The most significant modification to the LPC-10e scheme is the vector quantization of the line spectrum frequencies associated with the synthesis filters. An outline of a hardware implementation for the 800 bps coder is presented. The speech quality of the coder is generally good, although speaker recognition is not possible. Further research is being conducted to reduce the memory requirements and complexity of the vector quantizer, and to increase the quality of the reconstructed speech. This work may be of use in dealing with nuclear materials.
Biometric Quantization through Detection Rate Optimized Bit Allocation
Directory of Open Access Journals (Sweden)
C. Chen
2009-01-01
Extracting binary strings from real-valued biometric templates is a fundamental step in many biometric template protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Previous work has focused on the design of optimal quantization and coding for each single feature component, yet the binary string, the concatenation of all coded feature components, is not optimal. In this paper, we present a detection rate optimized bit allocation (DROBA) principle, which assigns more bits to discriminative features and fewer bits to nondiscriminative features. We further propose a dynamic programming (DP) approach and a greedy search (GS) approach to achieve DROBA. Experiments with DROBA on the FVC2000 fingerprint database and the FRGC face database show good performance. As a universal method, DROBA is applicable to arbitrary biometric modalities, such as fingerprint texture, iris, signature, and face. DROBA will bring significant benefits not only to template protection systems but also to systems with fast matching requirements or constrained storage capability.
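The greedy-search flavor of bit allocation can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: the gain tables here are assumed log detection rates per feature and bit depth (in DROBA these would come from the biometric feature statistics), and each added bit is given to the feature with the largest marginal gain, which is optimal when marginal gains are diminishing.

```python
import heapq

def greedy_bit_allocation(gain_tables, total_bits):
    """Greedy bit allocation: gain_tables[i][b] is the (assumed) log
    detection rate of feature i quantized with b bits; repeatedly give
    the next bit to the feature with the largest marginal gain."""
    alloc = [0] * len(gain_tables)
    heap = []  # entries: (-marginal_gain, feature_index), min-heap
    for i, tbl in enumerate(gain_tables):
        if len(tbl) > 1:
            heapq.heappush(heap, (-(tbl[1] - tbl[0]), i))
    for _ in range(total_bits):
        if not heap:
            break                      # every feature at max depth
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        tbl, b = gain_tables[i], alloc[i]
        if b + 1 < len(tbl):           # re-offer the next bit of feature i
            heapq.heappush(heap, (-(tbl[b + 1] - tbl[b]), i))
    return alloc
```

Discriminative features (whose detection rate degrades least per added bit) end up with more bits, which is the stated intent of the DROBA principle.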
A Novel Rate Control Scheme for Constant Bit Rate Video Streaming
Directory of Open Access Journals (Sweden)
Venkata Phani Kumar M
2015-08-01
In this paper, a novel rate control mechanism is proposed for constant bit rate video streaming. The initial quantization parameter used for encoding a video sequence is determined from the average spatio-temporal complexity of the sequence, its resolution and the target bit rate. Simple linear estimation models are then used to predict the number of bits necessary to encode a frame at a given complexity and quantization parameter. The experimental results demonstrate that the proposed rate control mechanism significantly outperforms the existing rate control scheme in the Joint Model (JM) reference software in terms of Peak Signal-to-Noise Ratio (PSNR) and consistency of perceptual visual quality while achieving the target bit rate. Furthermore, the proposed scheme is validated through implementation on a miniature test-bed.
Smoothing variable-bit-rate video in an internetwork
Rexford, Jennifer; Towsley, Don
1997-10-01
The burstiness of compressed video complicates the provisioning of network resources for emerging multimedia services. For stored video applications, the server can smooth the variable-bit-rate stream by prefetching frames into the client playback buffer in advance of each burst. Drawing on a priori knowledge of the frame lengths and client buffer size, such bandwidth smoothing techniques can minimize the peak and variability of the rate requirements while avoiding underflow and overflow of the playback buffer. However, in an internetworking environment, a single service provider typically does not control the entire path from the stored-video server to the client buffer. To develop efficient techniques for transmitting variable-bit-rate video across a portion of the route, we investigate bandwidth smoothing across a tandem of nodes, which may or may not include the server and client sites. We show that it is possible to compute an optimal transmission schedule for the tandem system by solving a collection of independent single-link problems. To develop efficient techniques for minimizing the network bandwidth requirements, we characterize how the peak rate varies as a function of the buffer allocation and the playback delay. Simulation experiments illustrate the subtle interplay between buffer space, playback delay, and bandwidth requirements for a collection of full-length video traces. These analytic and empirical results suggest effective guidelines for provisioning network services for the transmission of compressed video.
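The interplay between buffer size and peak rate can be illustrated with a toy feasibility search. This is a simplified slotted model under stated assumptions, not the paper's optimal-schedule algorithm: one frame is consumed per slot, there is no startup delay, the client buffer is assumed at least as large as the biggest frame, and we binary-search the smallest constant peak rate that avoids playback-buffer underflow.

```python
def underflows(frame_bits, buffer_bits, rate):
    """Simulate sending at a constant `rate` (bits per slot), capped by
    the client buffer; return True if the playback buffer ever holds
    less than the frame due to be consumed in that slot."""
    buffered = 0.0
    for need in frame_bits:
        buffered = min(buffered + rate, buffer_bits)  # send, capped by buffer
        if buffered < need:
            return True
        buffered -= need                              # consume one frame
    return False

def min_peak_rate(frame_bits, buffer_bits, tol=1e-6):
    """Binary-search the smallest constant peak rate avoiding underflow;
    assumes buffer_bits >= max(frame_bits) so max(frame_bits) is feasible."""
    lo, hi = 0.0, float(max(frame_bits))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if underflows(frame_bits, buffer_bits, mid):
            lo = mid
        else:
            hi = mid
    return hi
```

For frames (4, 10, 2, 2) and a 12-bit buffer, prefetching during the first slot lets a peak of 7 bits/slot cover the 10-bit burst, well below the 10 bits/slot an unsmoothed stream would need; shrinking the buffer pushes the minimum peak back up, the trade-off the simulations above explore.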
Energy Technology Data Exchange (ETDEWEB)
Ahmed, Moustafa F, E-mail: m.farghal@link.ne [Department of Physics, Faculty of Science, Minia University, 61519 El-Minia (Egypt)
2009-09-21
This paper reports on the influence of the transmission bit rate on the performance of optical fibre communication systems employing laser diodes subjected to high-speed direct modulation. The performance is evaluated in terms of the bit error rate (BER) and the power penalty associated with increasing the transmission bit rate while keeping the transmission distance fixed. The study is based on numerical analysis of the stochastic rate equations of the laser diode and takes into account noise mechanisms in the receiver. The correlation between BER and the Q-parameter of the received signal is presented. The relative contributions of the transmitter noise and the circuit and shot noises of the receiver to BER are quantified as functions of the transmission bit rate. The results show that the power penalty at BER = 10^-9 required to keep the transmission distance increases moderately with the increase in bit rate near 1 Gbps and at high bias currents. In this regime, the shot noise is the main contributor to BER. At higher bit rates and lower bias currents, the power penalty increases remarkably, which comes mainly from laser noise induced by the pseudorandom bit-pattern effect.
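The BER versus Q-parameter correlation mentioned above is conventionally the Gaussian-noise relation BER = 0.5·erfc(Q/√2), under which Q ≈ 6 corresponds to the BER = 10^-9 benchmark used in the abstract:

```python
from math import erfc, sqrt

def ber_from_q(q):
    """Standard Gaussian-noise mapping from the received-signal
    Q-parameter to bit error rate: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q / sqrt(2))
```

For example, `ber_from_q(6)` is just below 1e-9, while Q = 0 gives the chance level of 0.5.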
Optical Switching and Bit Rates of 40 Gbit/s and above
DEFF Research Database (Denmark)
Ackaert, A.; Demester, P.; O'Mahony, M.;
2003-01-01
Optical switching in WDM networks introduces additional aspects to the choice of single channel bit rates compared to WDM transmission systems. The mutual impact of optical switching and bit rates of 40 Gbps and above is discussed....
Error tolerance of topological codes with independent bit-flip and measurement errors
Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.
2016-07-01
Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.
He, Guang-Ping
2005-01-01
Though it was proven that secure quantum sealing of a single classical bit is impossible in principle, here we propose an unconditionally secure quantum sealing protocol which seals a classical bit string. Any reader can obtain each bit of the sealed string with an arbitrarily small error rate, while reading the string is detectable. The protocol is simple and easy to implement. The possibility of using this protocol to seal a single bit in practice is also discussed.
The application of low-bit-rate encoding techniques to digital satellite systems
Rowbotham, T. R.; Niwa, K.
This paper describes the INTELSAT-funded development of low-bit-rate voice encoding techniques: Adaptive Differential Pulse Code Modulation (ADPCM), Nearly Instantaneous Companding (NIC) and Continuously Variable Slope Delta Modulation (CVSD). Subjective and objective evaluation results, with and without transmission errors, are presented, primarily for 32 kbit/s per voice channel. Part of the paper is devoted to the interfacing of ADPCM, NIC and CVSD with terrestrial ISDN and satellite networks, the frame structure and how signalling can be accommodated, and the compatibility with other voice-associated digital processors such as DSI and Echo Cancellers.
Extremely Low Bit-Rate Nearest Neighbor Search Using a Set Compression Tree.
Arandjelović, Relja; Zisserman, Andrew
2014-12-01
The goal of this work is a data structure to support approximate nearest neighbor search on very large scale sets of vector descriptors. The criteria we wish to optimize are: (i) that the memory footprint of the representation should be very small (so that it fits into main memory); and (ii) that the approximation of the original vectors should be accurate. We introduce a novel encoding method, named a Set Compression Tree (SCT), that satisfies these criteria. It is able to accurately compress 1 million descriptors using only a few bits per descriptor. The large compression rate is achieved by not compressing on a per-descriptor basis, but instead by compressing the set of descriptors jointly. We describe the encoding, decoding and use for nearest neighbor search, all of which are quite straightforward to implement. The method, tested on standard benchmarks (SIFT1M and 80 Million Tiny Images), achieves superior performance to a number of state-of-the-art approaches, including Product Quantization, Locality Sensitive Hashing, Spectral Hashing, and Iterative Quantization. For example, SCT has a lower error using 5 bits than any of the other approaches, even when they use 16 or more bits per descriptor. We also include a comparison of all the above methods on the standard benchmarks. PMID:26353147
High bit rate germanium single photon detectors for 1310nm
Seamons, J. A.; Carroll, M. S.
2008-04-01
There is increasing interest in the development of high speed, low noise and readily fieldable near infrared (NIR) single photon detectors. InGaAs/InP avalanche photodiodes (APD) operated in Geiger mode (GM) are a leading choice for NIR due to their preeminence in optical networking. After-pulsing is, however, a primary challenge to operating InGaAs/InP single photon detectors at high frequencies [1]. After-pulsing is the effect of charge being released from traps, triggering false ("dark") counts. To overcome this problem, hold-off times between detection windows are used to allow the traps to discharge and thus suppress after-pulsing. The hold-off time represents, however, an upper limit on detection frequency, with degradation beginning at frequencies of ~100 kHz in InGaAs/InP. Alternatively, germanium (Ge) single photon avalanche photodiodes (SPAD) have been reported to have more than an order of magnitude smaller charge trap densities than InGaAs/InP SPADs [2], which allowed them to be successfully operated with passive quenching [2] (i.e., no gated hold-off times necessary), which is not possible with InGaAs/InP SPADs, indicating a much weaker dark count dependence on hold-off time consistent with fewer charge traps. Despite these encouraging results suggesting a possible higher operating frequency limit for Ge SPADs, little has been reported on Ge SPAD performance at high frequencies, presumably because previous work with Ge SPADs has been discouraged by a strong demand to work at 1550 nm. NIR SPADs require cooling, which in the case of Ge SPADs dramatically reduces the quantum efficiency of the Ge at 1550 nm. Recently, however, advantages to working at 1310 nm have been suggested which, combined with a need to increase quantum bit rates for quantum key distribution (QKD), motivate examination of Ge detector performance at very high detection rates where InGaAs/InP does not perform as well. Presented in this paper are measurements of a commercially available Ge APD
Chaos-based communications at high bit rates using commercial fibre-optic links
Argyris, Apostolos; Syvridis, Dimitris; Larger, Laurent; Annovazzi-Lodi, Valerio; Colet, Pere; Fischer, Ingo; García-Ojalvo, Jordi; Mirasso, Claudio R.; Pesquera, Luis; Shore, K. Alan
2005-11-01
Chaotic signals have been proposed as broadband information carriers with the potential of providing a high level of robustness and privacy in data transmission. Laboratory demonstrations of chaos-based optical communications have already shown the potential of this technology, but a field experiment using commercial optical networks has not been undertaken so far. Here we demonstrate high-speed long-distance communication based on chaos synchronization over a commercial fibre-optic channel. An optical carrier wave generated by a chaotic laser is used to encode a message for transmission over 120 km of optical fibre in the metropolitan area network of Athens, Greece. The message is decoded using an appropriate second laser which, by synchronizing with the chaotic carrier, allows for the separation of the carrier and the message. Transmission rates in the gigabit per second range are achieved, with corresponding bit-error rates below 10⁻⁷. The system uses matched pairs of semiconductor lasers as chaotic emitters and receivers, and off-the-shelf fibre-optic telecommunication components. Our results show that information can be transmitted at high bit rates using deterministic chaos in a manner that is robust to perturbations and channel disturbances unavoidable under real-world conditions.
Up to 20 Gbit/s bit-rate transparent integrated interferometric wavelength converter
DEFF Research Database (Denmark)
Jørgensen, Carsten; Danielsen, Søren Lykke; Hansen, Peter Bukhave;
1996-01-01
We present a compact and optimised multiquantum-well based, integrated all-active Michelson interferometer for 20 Gbit/s optical wavelength conversion. Bit-rate transparent operation is demonstrated with a conversion penalty well below 0.5 dB at bit-rates ranging from 622 Mbit/s to 20 Gbit/s....
Enhanced bit rate-distance product impulse radio ultra-wideband over fiber link
DEFF Research Database (Denmark)
Rodes Lopez, Roberto; Jensen, Jesper Bevensee; Caballero Jambrina, Antonio; Yu, Xianbin; Pivnenko, Sergey; Tafur Monroy, Idelfonso
2010-01-01
We report on a record distance and bit rate-wireless impulse radio (IR) ultra-wideband (UWB) link with combined transmission over a 20 km long fiber link. We are able to improve the compliance with the regulated frequency emission mask and achieve bit rate-distance products as high as 16 Gbit/s·m....
Timing-Error Detection Design Considerations in Subthreshold: An 8-bit Microprocessor in 65 nm CMOS
Directory of Open Access Journals (Sweden)
Lauri Koskinen
2012-06-01
Full Text Available This paper presents the first known timing-error detection (TED) microprocessor able to operate in subthreshold. Since the minimum energy point (MEP) of static CMOS logic is in subthreshold, there is a strong motivation to design ultra-low-power systems that can operate in this region. However, exponential dependencies in subthreshold require systems with either excessively large safety margins or that utilize adaptive techniques. Typically, these techniques include replica paths, sensors, or TED. Each of these methods adds system complexity, area, and energy overhead. As a run-time technique, TED is the only method that accounts for both local and global variations. The microprocessor presented in this paper utilizes adaptable error-detection sequential (EDS) circuits that can adjust to process and environmental variations. The results demonstrate the feasibility of the microprocessor, as well as energy savings of up to 28% when using the TED method in subthreshold. The microprocessor is an 8-bit core, which is compatible with a commercial microcontroller. The microprocessor is fabricated in 65 nm CMOS, uses as low as 4.35 pJ/instruction, occupies an area of 50,000 μm², and operates down to 300 mV.
Region-of-interest determination and bit-rate conversion for H.264 video transcoding
Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan
2013-12-01
This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard to fit the available channel bandwidth for the client when transmitting video bit-streams via communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find the models of video bit-rate conversion. The transcoded video will conform to the target bit-rate by re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only maintains coding quality but also improves the efficiency of the video transcoding for low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.
Towards the generation of random bits at terahertz rates based on a chaotic semiconductor laser
Energy Technology Data Exchange (ETDEWEB)
Kanter, Ido [Minerva Center and Department of Physics, Bar-Ilan University, Ramat-Gan 52900 (Israel); Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael [Department of Physics, Jack and Pearl Resnick Institute for Advanced Technology, Bar-Ilan University, Ramat-Gan, 52900 Israel (Israel)
2010-06-01
Random bit generators (RBGs) are important in many aspects of statistical physics and crucial in Monte-Carlo simulations, stochastic modeling and quantum cryptography. The quality of a RBG is measured by the unpredictability of the bit string it produces and the speed at which the truly random bits can be generated. Deterministic algorithms generate pseudo-random numbers at high data rates as they are only limited by electronic hardware speed, but their unpredictability is limited by the very nature of their deterministic origin. It is widely accepted that the core of any true RBG must be an intrinsically non-deterministic physical process, e.g. measuring thermal noise from a resistor. Owing to low signal levels, such systems are highly susceptible to bias, introduced by amplification, and to small nonrandom external perturbations resulting in a limited generation rate, typically less than 100 Mbit/s. We present a physical random bit generator, based on a chaotic semiconductor laser, having delayed optical feedback, which operates reliably at rates up to 300 Gbit/s. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.
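The bit-extraction step described in this abstract can be illustrated with a small sketch. The function name, parameters, and the simulated input below are hypothetical stand-ins for digitized chaotic laser intensity; only the idea (high-order discrete derivative, then keep the least significant bits) follows the abstract.

```python
import random

def extract_random_bits(samples, order=3, n_lsb=4, resolution_bits=8):
    """Sketch of the LSB-retention idea: take a high-order discrete
    derivative of digitized intensity samples, then keep only the
    n least significant bits of each derivative value."""
    deriv = list(samples)
    for _ in range(order):
        # finite differences approximate one more derivative order each pass
        deriv = [b - a for a, b in zip(deriv, deriv[1:])]
    bits = []
    for v in deriv:
        # wrap back into the ADC's unsigned range before extracting bits
        v &= (1 << resolution_bits) - 1
        for k in range(n_lsb):
            bits.append((v >> k) & 1)
    return bits

# Stand-in for digitized chaotic laser intensity (simulated, not real data).
samples = [random.randrange(256) for _ in range(1000)]
bits = extract_random_bits(samples)
print(len(bits), sum(bits) / len(bits))  # bit count and empirical bias
```

In the real system the input comes from a fast oscilloscope trace of the laser intensity; here uniform noise merely exercises the mechanics.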
Theoretical Study of Quantum Bit Rate in Free-Space Quantum Cryptography
Institute of Scientific and Technical Information of China (English)
MA Jing; ZHANG Guang-Yu; TAN Li-Ying
2006-01-01
The quantum bit rate is an important operating parameter in free-space quantum key distribution. We introduce the measuring factor and the sifting factor, and present the expressions of the quantum bit rate based on the ideal single-photon sources and the single-photon sources with Poisson distribution. The quantum bit rate is studied in the numerical simulation for the laser links between a ground station and a satellite in a low earth orbit. The results show that it is feasible to implement quantum key distribution between a ground station and a satellite in a low earth orbit.
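A back-of-envelope version of such a rate budget can be sketched as follows. The function and all numbers are hypothetical; the paper's exact expressions for the measuring and sifting factors are not reproduced here.

```python
def sifted_bit_rate(pulse_rate_hz, mu, link_transmittance, detector_eff,
                    sifting_factor=0.5, measuring_factor=1.0):
    """Generic sifted key-rate estimate for a weak-coherent-pulse link.
    mu is the mean photon number per pulse; sifting_factor models basis
    reconciliation (1/2 for BB84) and measuring_factor lumps together
    any additional measurement losses."""
    detected = pulse_rate_hz * mu * link_transmittance * detector_eff
    return detected * sifting_factor * measuring_factor

# Hypothetical LEO downlink numbers: 100 MHz source, mu = 0.1,
# 30 dB channel loss, 50% detector efficiency.
print(sifted_bit_rate(100e6, 0.1, 1e-3, 0.5))  # roughly 2.5 kbit/s
```

Even these rough numbers show why ground-to-satellite links remain feasible: kilobit-per-second sifted rates survive tens of dB of channel loss.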
Room temperature single-photon detectors for high bit rate quantum key distribution
International Nuclear Information System (INIS)
We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.
ORDER-STATISTICS MINIMUM ERROR RATE DETECTOR
Boudjellal, Abdelwaheb; Abed-Meraim, Karim; Belouchrani, Adel; Ravier, Philippe
2014-01-01
A new methodology for random signal detection, in the presence of closely spaced interfering signals, is introduced, namely the Order-Statistics Minimum Error Rate (OS-MER) detector. The latter is based on the minimization of the error probability instead of minimizing only the miss probability for a Constant False Alarm Rate (CFAR). Results show that the OS-MER detector is well adapted to this specific problem and overcomes, in particular, the CFAR-based one.
Re-use of Low Bandwidth Equipment for High Bit Rate Transmission Using Signal Slicing Technique
DEFF Research Database (Denmark)
Wagner, Christoph; Spolitis, S.; Vegas Olmos, Juan José;
Massive fiber-to-the-home network deployment requires never-ending equipment upgrades operating at higher bandwidth. We show an effective signal slicing method, which can reuse low-bandwidth opto-electronic components for optical communications at higher bit rates....
Kaiser, F.; Aktas, D.; Fedrici, B.; Lunghi, T.; Labonté, L.; Tanzilli, S.
2016-06-01
We demonstrate an experimental method for measuring energy-time entanglement over almost 80 nm spectral bandwidth in a single shot with a quantum bit error rate below 0.5%. Our scheme is extremely cost-effective and efficient in terms of resources as it employs only one source of entangled photons and one fixed unbalanced interferometer per phase-coded analysis basis. We show that the maximum analysis spectral bandwidth is obtained when the analysis interferometers are properly unbalanced, a strategy which can be straightforwardly applied to most of today's experiments based on energy-time and time-bin entanglement. Our scheme has therefore a great potential for boosting bit rates and reducing the resource overhead of future entanglement-based quantum key distribution systems.
A novel dynamic frame rate control algorithm for H.264 low-bit-rate video coding
Institute of Scientific and Technical Information of China (English)
Yang Jing; Fang Xiangzhong
2007-01-01
The goal of this paper is to improve human visual perceptual quality as well as coding efficiency of H.264 video at low bit rate conditions by adaptively adjusting the number of skipped frames. The encoding frames are selected according to the motion activity of each frame and the motion accumulation of successive frames. The motion activity analysis is based on the statistics of motion vectors and takes into consideration the characteristics of the H.264 coding standard. A prediction model of motion accumulation is proposed to reduce the complex computation of motion estimation. The dynamic encoding frame rate control algorithm is applied to both the frame level and the GOB (Group of Macroblocks) level. Simulations are done to compare the performance of JM76 with the proposed frame level scheme and GOB level scheme.
Acceptable bit-rates for human face identification from CCTV imagery
Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker
2013-01-01
The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal 'average' bit-rates.
Generating Key Streams in infrastructure WLAN using bit rate
Directory of Open Access Journals (Sweden)
R.Buvaneswari,
2010-12-01
Full Text Available Due to the rapid growth of wireless networking, the fallible security issues of the 802.11 standard have come under close scrutiny. There are serious security issues that need to be sorted out before everyone is willing to transmit valuable corporate information on a wireless network. This report focuses on inherent flaws in the wired equivalent privacy (WEP) protocol used by the 802.11 standard, and on the Temporal Key Integrity Protocol (TKIP), which is considered an interim solution for legacy 802.11 equipment. The Counter Mode/CBC-MAC protocol, which is based on the Advanced Encryption Standard (AES), will not work on many of the currently shipping cards, which are based on 802.11b/g. This paper proposes an enhancement to TKIP in accordance with the transmission rate supported by the Physical Layer Convergence Protocol (PLCP) and shows an enhanced pattern of key streams generated from TKIP in order to avoid key reuse during encryption and decryption of the payload.
Error-rate performance analysis of opportunistic regenerative relaying
Tourki, Kamel
2011-09-01
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture, considering Rayleigh fading channels. © 2011 IEEE.
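Validating closed-form BER expressions against Monte Carlo simulation, as this abstract describes, can be sketched for the simplest building block: BPSK over a single flat Rayleigh-fading hop with coherent detection. This is a generic check against the well-known closed form, not the paper's relay model; function names and sample counts are illustrative.

```python
import math
import random

def ber_bpsk_rayleigh(snr_db, n_bits=200_000, rng=random.Random(1)):
    """Monte Carlo BER of coherent BPSK over flat Rayleigh fading."""
    snr = 10 ** (snr_db / 10)
    noise_sigma = math.sqrt(1 / (2 * snr))  # real AWGN, Eb = 1
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1, 1))
        # fading amplitude |h| with h ~ CN(0, 1), so E[|h|^2] = 1
        h = math.hypot(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
        r = h * bit + rng.gauss(0, noise_sigma)
        if (r > 0) != (bit > 0):
            errors += 1
    return errors / n_bits

snr_db = 10
sim = ber_bpsk_rayleigh(snr_db)
g = 10 ** (snr_db / 10)
theory = 0.5 * (1 - math.sqrt(g / (1 + g)))  # classical closed form
print(sim, theory)  # the two should agree to within Monte Carlo error
```

The same simulate-versus-closed-form pattern extends to relay selection and combining schemes, at the cost of simulating each hop and the selection rule.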
Adaptive Bit Rate Video Streaming Through an RF/Free Space Optical Laser Link
Directory of Open Access Journals (Sweden)
A. Akbulut
2010-06-01
Full Text Available This paper presents a channel-adaptive video streaming scheme which adjusts video bit rate according to channel conditions and transmits video through a hybrid RF/free space optical (FSO laser communication system. The design criteria of the FSO link for video transmission to 2.9 km distance have been given and adaptive bit rate video streaming according to the varying channel state over this link has been studied. It has been shown that the proposed structure is suitable for uninterrupted transmission of videos over the hybrid wireless network with reduced packet delays and losses even when the received power is decreased due to weather conditions.
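The channel-adaptive rate selection this abstract describes can be sketched in a few lines. The rate ladder, safety margin, and function name below are hypothetical, not the paper's controller; the sketch only shows the core decision of matching encoding rate to estimated channel throughput.

```python
def select_bitrate(available_kbps, channel_kbps, margin=0.8):
    """Choose the highest encoding rate that fits within a safety
    fraction of the estimated channel throughput; fall back to the
    lowest rate when even that does not fit."""
    usable = margin * channel_kbps
    feasible = [r for r in sorted(available_kbps) if r <= usable]
    return feasible[-1] if feasible else min(available_kbps)

ladder = [250, 500, 1000, 2000, 4000]  # hypothetical encoder rates, kbit/s
print(select_bitrate(ladder, 1600))  # 1000
print(select_bitrate(ladder, 200))   # 250 (lowest rate as fallback)
```

In an RF/FSO hybrid, `channel_kbps` would be driven by received-power measurements, so the selected rate drops automatically as weather degrades the optical link.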
A 14-bit 200-MS/s time-interleaved ADC with sample-time error calibration
Institute of Scientific and Technical Information of China (English)
Zhang Yiwen; Chen Chixiao; Yu Bei; Ye Fan; Ren Junyan
2012-01-01
Sample-time error between channels degrades the resolution of time-interleaved analog-to-digital converters (TIADCs). A calibration method implemented in mixed circuits with low complexity and fast convergence is proposed in this paper. The algorithm for detecting sample-time error is based on correlation and is widely applicable to wide-sense stationary input signals. The detected sample-time error is corrected by a voltage-controlled sampling switch. The experimental result of a 2-channel 200-MS/s 14-bit TIADC shows that the signal-to-noise and distortion ratio improves by 19.1 dB, and the spurious-free dynamic range improves by 34.6 dB for a 70.12-MHz input after calibration. The calibration convergence time is about 20000 sampling intervals.
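The correlation-based detection idea can be illustrated with a toy two-channel model. This is a sketch of the general principle, not the paper's circuit: for a wide-sense stationary input, each odd-channel sample correlates equally with its left and right even-channel neighbours only when the channels sample exactly half a period apart, so the difference of the two averaged products is a signed skew indicator.

```python
import math

def skew_indicator(samples_even, samples_odd):
    """Difference of left/right cross-correlations between the two
    interleaved channels; near zero when there is no timing skew."""
    n = min(len(samples_even) - 1, len(samples_odd))
    left = sum(samples_odd[i] * samples_even[i] for i in range(n)) / n
    right = sum(samples_odd[i] * samples_even[i + 1] for i in range(n)) / n
    return left - right

def simulate(dt, n=4000, f=0.31):
    """Two-channel TIADC sampling a sine; the odd channel is skewed
    by dt sample periods (hypothetical test signal)."""
    even = [math.sin(2 * math.pi * f * (2 * k)) for k in range(n)]
    odd = [math.sin(2 * math.pi * f * (2 * k + 1 + dt)) for k in range(n)]
    return even, odd

for dt in (-0.05, 0.0, 0.05):
    print(dt, skew_indicator(*simulate(dt)))  # sign follows the skew
```

In a real calibration loop this indicator would drive the voltage-controlled sampling switch until it converges to zero.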
International Nuclear Information System (INIS)
The results of mathematical simulation of propagation of optical signals in high-bit-rate dispersion-controlled fibreoptic communication lines are presented. Information was coded by using the return-to-zero differential phase-shift keying (RZ DPSK) format to modulate an optical signal. A number of particular configurations of optical communication lines are optimised in terms of the bit error rate (BER). It is shown that the signal propagation range considerably increases compared to that in the case of a standard return-to-zero on-off keying (RZ OOK) format. (fibreoptic communication)
An Improved Frame-Layer Bit Allocation Scheme for H.264/AVC Rate Control
Institute of Scientific and Technical Information of China (English)
LIN Gui-xu; ZHENG Shi-bao; ZHU Liang-jia
2009-01-01
In this paper, we aim at improving the video quality degradation due to high motions or scene changes. An improved frame-layer bit allocation scheme for H.264/AVC rate control is proposed. First, the current frame is pre-encoded in 16×16 modes with a fixed quantization parameter (QP). The frame coding complexity is then measured based on the resulting bits and peak signal-to-noise ratio (PSNR) in the pre-coding stage. Finally, a bit budget is calculated for the current frame according to its coding complexity and inter-frame PSNR fluctuation, combined with the buffer status. Simulation results show that, in comparison with the H.264 adopted rate control scheme, our method is more efficient at suppressing the sharp PSNR drops caused by high motions and scene changes. The visual quality variations in a sequence are also relieved.
DESIGN ISSUES FOR BIT RATE-ADAPTIVE 3R O/E/OTRANSPONDER IN INTELLIGENT OPTICAL NETWORKS
Institute of Scientific and Technical Information of China (English)
朱栩; 曾庆济; 杨旭东; 刘逢清; 肖石林
2002-01-01
This paper reports the design and implementation of a bit rate-adaptive Optical-Electronic-Optical (O/E/O) transponder accomplishing almost full data rate transparency up to 2.5 Gb/s with 3R (Reamplifying, Reshaping and Retiming) processing in the electronic domain. Based on chipsets performing clock recovery in several continuous bit rate ranges, a clock and data regenerating circuit self-adaptive to the bit rate of the input signal was developed. Key design issues are presented, with emphasis on the functional building blocks and the scheme for the bit rate-adaptive retiming circuit. The experimental results show good scalability performance.
Influence of the FEC Channel Coding on Error Rates and Picture Quality in DVB Baseband Transmission
Directory of Open Access Journals (Sweden)
T. Kratochvil
2006-09-01
Full Text Available The paper deals with the component analysis of DTV (Digital Television) and DVB (Digital Video Broadcasting) baseband channel coding. The principles of the FEC (Forward Error Correction) error-protection codes used are briefly outlined and the simulation model applied in Matlab is presented. Results of the achieved bit and symbol error rates and the corresponding picture quality evaluation analysis are presented, including the evaluation of the influence of the channel coding on transmitted RGB images and their noise rates related to MOS (Mean Opinion Score). The conclusion of the paper contains a comparison of the efficiency of the DVB channel codes.
Institute of Scientific and Technical Information of China (English)
Dong Jian-Ji; Zhang Xin-Liang; Huang De-Xiu
2008-01-01
This paper proposes and simulates a novel all-optical error-bit amplitude monitor based on cross-gain modulation and four-wave mixing in cascaded semiconductor optical amplifiers (SOAs), which function as logic NOT and logic AND, respectively. The proposed scheme is successfully simulated for a 40 Gb/s return-to-zero (RZ) signal with different duty cycles. In the first stage, the SOA is followed by a detuning filter to accelerate the gain recovery as well as improve the extinction ratio. A clock probe signal is used to avoid the edge pulse-pairs in the output waveform. Among these RZ formats, the 33% RZ format is preferred to obtain the largest eye opening. The normalized error amplitude, defined as the error bit amplitude over the standard mark amplitude, has a dynamic range from 0.1 to 0.65 for all RZ formats. The simulations show a small input power dynamic range because of the nonlinear gain variation in the first stage. This scheme is competent for the non-return-to-zero format at 10 Gb/s as well.
A comprehensive error rate for multiple testing
Meskaldji, Djalel Eddine; Morgenthaler, Stephan
2011-01-01
In multiple testing, a variety of metrics have been introduced to control the occurrence of false discoveries, such as the Family-Wise Error Rate (FWER), the False Discovery Rate (FDR), the False Exceedence Rate (FER), etc. We present a way to combine and extend these metrics and show how to control them. The new concept considers the relationship between the number of rejections and the number of false positives by introducing a quantity defined as the number of false positives divided by a function of the number of rejections. We call this quantity the Scaled False Discovery Proportion (SFDP). This quantity is used to define two new false positive metrics: the Scaled Tail Probability (STP) and the Scaled Expected Value (SEV). We give procedures that control these two new error rates under different assumptions. With some particular cases of the scaling function, these two metrics cover well-known false positive metrics such as the FWER, the k-FWER, the FDR, the FER and many others. We also propose some exampl...
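The SFDP definition above is simple enough to write down directly. The function name and example choices of the scaling function are illustrative; the abstract only specifies that the SFDP is the number of false positives divided by a function of the number of rejections.

```python
def sfdp(num_false_positives, num_rejections, scale):
    """Scaled False Discovery Proportion: V / s(R), where V is the
    number of false positives, R the number of rejections, and s the
    chosen scaling function."""
    return num_false_positives / scale(num_rejections)

V, R = 3, 20
# s(R) = max(R, 1) recovers the usual false discovery proportion (FDR-type)
print(sfdp(V, R, lambda r: max(r, 1)))  # 0.15
# s(R) = 1 recovers the raw false-positive count (FWER-type)
print(sfdp(V, R, lambda r: 1))          # 3.0
```

Intermediate scaling functions interpolate between these two extremes, which is how the SFDP subsumes metrics like the k-FWER and the FER.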
Multicenter Assessment of Gram Stain Error Rates.
Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert
2016-06-01
Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900
Soury, Hamza
2012-06-01
This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed-form expression in terms of Fox's H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions, such as generalized-K fading and Nakagami-m fading, and special additive noise distributions, such as Gaussian and Laplacian noise, are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer-based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.
Exact Output Rate of Generalized Peres Algorithm for Generating Random Bits from Loaded Dice
Sung-il Pae
2013-01-01
We report a computation of the exact output rate of a recently discovered generalization of the Peres algorithm for generating random bits from loaded dice. Instead of resorting to brute-force computation over all possible inputs, which quickly becomes impractical as the input size increases, we compute the total output length on equiprobable sets of inputs by dynamic programming using a recursive formula.
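For background, the binary ancestor of that algorithm is easy to state. Below is a sketch of the classical Peres extractor for a biased coin (the paper's generalization works on larger alphabets, i.e. dice); the recursion on the XOR and equal-pair subsequences is what pushes the output rate toward the entropy of the source.

```python
def peres(bits):
    """Peres's iterated von Neumann extractor (binary case only).

    This is the classical coin-flip procedure, shown for illustration;
    the paper above treats a generalization to loaded dice.
    """
    if len(bits) < 2:
        return []
    out, xors, same = [], [], []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)      # von Neumann step: an unequal pair yields one bit
        else:
            same.append(a)     # equal pairs are recycled ...
        xors.append(a ^ b)     # ... as are the pairwise XORs
    return out + peres(xors) + peres(same)
```

For example, `peres([0, 1, 1, 0])` yields `[0, 1]`: the two unequal pairs each produce a von Neumann bit, and here the recursive calls contribute nothing further.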
Scalable In-Band Optical Notch-Filter Labeling for Ultrahigh Bit Rate Optical Packet Switching
DEFF Research Database (Denmark)
Medhin, Ashenafi Kiros; Galili, Michael; Oxenløwe, Leif Katsuo
2014-01-01
We propose a scalable in-band optical notch-filter labeling scheme for optical packet switching of high-bit-rate data packets. A detailed characterization of the notch-filter labeling scheme and its effect on the quality of the data packet is carried out in simulation and verified by experimental...
Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate
DEFF Research Database (Denmark)
Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo; Hu, Hao; Clausen, Anders; Jensen, Jesper Bevensee; Peucheret, Christophe; Jeppesen, Palle
2009-01-01
We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER...
Methods for Reducing the Bit Rate in G.729 Speech Codec
Directory of Open Access Journals (Sweden)
ABHIJIT MAIDAMWAR,
2011-04-01
Full Text Available Conjugate structure algebraic CELP (G.729) is a voice codec that compresses the speech signal based on model parameters of the human voice. G.729 is an 8 kbit/s speech coder. This paper deals with the reduction of the bit rate in the G.729 coder while maintaining the same speech quality. It proposes two methods: one is to send fewer bits when no voice is detected in the signal; the other is a conditional search in the fixed codebook, which improves the speech quality.
Directory of Open Access Journals (Sweden)
S. Chris Prema
2015-01-01
Full Text Available A rate-request-sequenced bit loading reallocation algorithm is proposed. The spectral holes detected by spectrum sensing (SS) in cognitive radio (CR) are used by secondary users. The algorithm is applicable to Discrete Multitone (DMT) systems for secondary user reallocation. DMT systems support different modulation on different subchannels according to the Signal-to-Noise Ratio (SNR). The maximum bits and power that can be allocated to each subband are determined by the channel state information (CSI) and the secondary user's modulation scheme. The spectral holes, or free subbands, are allocated to secondary users depending on the user rate request and the subchannel capacity. A comparison is made between random and sequenced rate requests of secondary users for subchannel allocation. Through simulations, it is observed that the sequenced rate request achieves higher spectral efficiency with reduced complexity.
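A minimal sketch of the two ingredients described in this abstract, under stated assumptions: the SNR-gap formula for per-subchannel bit loading, and a greedy, sequenced (largest-request-first) assignment of free subbands. The 9.8 dB gap, the bit cap, and the toy numbers are illustrative, not from the paper.

```python
import math

def bits_per_subchannel(snr_linear, gap_db=9.8, max_bits=15):
    """Bits supportable on one DMT subchannel: floor(log2(1 + SNR/Gamma))."""
    gap = 10.0 ** (gap_db / 10.0)
    return min(max_bits, int(math.log2(1.0 + snr_linear / gap)))

def allocate_sequenced(requests, subband_capacities):
    """Serve secondary-user rate requests in sequence (largest first),
    greedily assigning the highest-capacity free subbands until each
    request is met or the subbands run out."""
    free = sorted(subband_capacities, reverse=True)
    allocation = {}
    for user, need in sorted(requests.items(), key=lambda kv: -kv[1]):
        got, used = 0, []
        while free and got < need:
            cap = free.pop(0)
            got += cap
            used.append(cap)
        allocation[user] = (used, got)
    return allocation
```

For instance, a subchannel at 30 dB SNR with a 9.8 dB gap supports 6 bits, and serving the larger of two toy requests first lets both users be satisfied from four free subbands.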
A Contourlet-Based Embedded Image Coding Scheme on Low Bit-Rate
Song, Haohao; Yu, Songyu
Contourlet transform (CT) is a new image representation method that can efficiently represent contours and textures in images. However, CT is an overcomplete transform with a redundancy factor of 4/3. If it is applied to image compression straightforwardly, the encoding bit-rate may increase to meet a given distortion. This has made it difficult for the coding community to develop CT-based image compression techniques with satisfactory performance. In this paper, we analyze the distribution of significant contourlet coefficients in different subbands and propose a new contourlet-based embedded image coding (CEIC) scheme for low bit-rates. The well-known wavelet-based embedded image coding (WEIC) algorithms such as EZW, SPIHT and SPECK can be easily integrated into the proposed scheme by constructing a virtual low-frequency subband, modifying the coding framework of the WEIC algorithms according to the structure of contourlet coefficients, and adopting a high-efficiency significant-coefficient scanning scheme. The proposed CEIC scheme provides an embedded bit-stream, which is desirable in heterogeneous networks. Our experiments demonstrate that the proposed scheme achieves better compression performance at low bit-rates. Furthermore, thanks to the contourlet transform adopted in the proposed scheme, more contours and textures in the coded images are preserved, ensuring superior subjective quality.
Multiple Bit Error Tolerant Galois Field Architectures Over GF (2m
Directory of Open Access Journals (Sweden)
Mahesh Poolakkaparambil
2012-06-01
Full Text Available Radiation-induced transient faults such as single event upsets (SEU) and multiple event upsets (MEU) in memories are well researched. As a result of technology scaling, logic blocks have also been observed to malfunction when deployed in radiation-prone environments. However, the current literature lacks efforts to mitigate such issues in digital logic circuits exposed to natural radiation or subjected to malicious attacks by an eavesdropper using highly energized particles. This may lead to catastrophe in critical applications such as widely used cryptographic hardware. In this paper, novel dynamic error correction architectures, based on BCH codes, are proposed for correcting multiple errors, making the circuits robust against radiation-induced faults irrespective of the location of the errors. As a benchmark test case, the finite field multiplier circuit is considered as the functional block that can be the target of major attacks. The proposed scheme can also handle stuck-at faults, a major cause of failure affecting the overall yield of a nano-CMOS integrated chip. The experimental results show that the proposed dynamic error detection and correction architecture yields a 50% reduction in critical path delay by dynamically bypassing the error correction logic when no error is present. The area overhead for the larger multiplier is within 150%, which is 33% lower than TMR and comparable to the 130% overhead of single-error-correcting Hamming and LDPC based techniques.
On the average capacity and bit error probability of wireless communication systems
Yilmaz, Ferkan
2011-12-01
Analyses of the average binary error probabilities and the average capacity of wireless communication systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probabilities and the average capacity of single- and multiple-link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper can be easily calculated and is applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.
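The practical appeal of the MGF route can be seen in its simplest special case, BPSK over Rayleigh fading (a limiting case of the generalized Gamma distribution mentioned above), where averaging the conditional error probability collapses to a closed form. The sketch below is our own illustration, not the paper's unified expression: it checks that closed form against brute-force averaging over the fading distribution.

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_ber_rayleigh_closed(gbar):
    """Closed form for BPSK over Rayleigh fading with average SNR gbar:
    0.5 * (1 - sqrt(gbar / (1 + gbar)))."""
    return 0.5 * (1.0 - math.sqrt(gbar / (1.0 + gbar)))

def avg_ber_rayleigh_numeric(gbar, steps=20000, gmax_factor=40.0):
    """Brute-force midpoint-rule averaging of Q(sqrt(2g)) over the
    exponential SNR pdf (1/gbar) * exp(-g/gbar)."""
    gmax = gmax_factor * gbar
    dg = gmax / steps
    total = 0.0
    for i in range(steps):
        g = (i + 0.5) * dg
        total += q(math.sqrt(2.0 * g)) * math.exp(-g / gbar) / gbar * dg
    return total
```

At an average SNR of 10 (10 dB), both routes give approximately 2.33e-2; the closed form is what an MGF-style single-fold evaluation buys over the direct integral.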
FPGA Based Test Module for Error Bit Evaluation in Serial Links
Directory of Open Access Journals (Sweden)
J. Kolouch
2006-04-01
Full Text Available A test module for serial links is described. In the link transmitter, one module generates a pseudorandom pulse signal that is transmitted over the link. A second module, located in the link receiver, generates the same signal and compares it to the received signal. Errors caused by the signal transmission can then be detected and the results sent to a master computer for further processing, such as statistical evaluation. The module can be used for long-term error monitoring without the need for a human operator's presence.
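In software, the transmitter/receiver pair described above reduces to a shared PRBS generator and an XOR comparison. A hedged sketch follows; the article does not specify the polynomial, so PRBS7 (x^7 + x^6 + 1) is assumed here.

```python
def prbs7(n, seed=0x7F):
    """Generate n bits of a PRBS7 sequence (x^7 + x^6 + 1 LFSR)."""
    state, out = seed & 0x7F, []
    for _ in range(n):
        out.append(state & 1)
        newbit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | newbit) & 0x7F
    return out

def count_bit_errors(sent, received):
    """Receiver side: regenerate the same PRBS and count mismatches."""
    return sum(a != b for a, b in zip(sent, received))

# Simulate a link that flips three known bits:
tx = prbs7(1000)
rx = list(tx)
for pos in (17, 404, 871):
    rx[pos] ^= 1
print(count_bit_errors(tx, rx))  # 3
```

Because both ends regenerate the identical sequence, every mismatch counts as a transmission error; the PRBS7 output repeats every 127 bits, which keeps the hardware generator small.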
Setyawan, Iwan; Lagendijk, Reginald L.
2001-08-01
Digital video data distribution through the internet is becoming more common. Film trailers, video clips and even video footage from computer and video games are now seen as very powerful means to boost sales of the aforementioned products. These materials need to be protected to avoid copyright infringement issues. However, they are encoded at a low bit-rate to facilitate internet distribution, and this poses a challenge to the watermarking operation. In this paper we present an extension to the Differential Energy Watermarking algorithm for use in low bit-rate environments. We present the extension scheme and evaluate its performance in terms of watermark capacity, robustness and visual impact.
All-Optical Clock Recovery from NRZ-DPSK Signals at Flexible Bit Rates
International Nuclear Information System (INIS)
We propose and demonstrate all-optical clock recovery (CR) from nonreturn-to-zero differential phase-shift-keying (NRZ-DPSK) signals at different bit rates, theoretically and experimentally. By pre-processing with a single optical filter, the clock component can be enhanced significantly, and the clock signal can then be extracted from the pre-processed signals by cascading the filter with a CR unit based on a semiconductor optical amplifier fibre ring laser. Compared with previous pre-processing schemes, the single filter is simple and suitable for different bit rates. Clock signals can be achieved with an extinction ratio over 10 dB and rms timing jitter of 0.86 and 0.9 at 10 and 20 Gb/s, respectively. The output performance as a function of the bandwidth and detuning of the filter is analysed. By simply using a filter with a larger bandwidth, operation at much higher bit rates can be achieved.
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate JPEG formatted color images may allow for more compressed images while maintaining equivalent quality at a smaller file size or bitrate. For RGB, an image is decomposed into three color bands--red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually, and objectively, as recorded in the computed PSNR values.
Very Low Bit-Rate Video Coding Using Motion Compensated 3-D Wavelet Transform
Institute of Scientific and Technical Information of China (English)
(no author listed)
1999-01-01
A new motion-compensated 3-D wavelet transform (MC-3DWT) video coding scheme is presented in this paper. The new coding scheme has good performance in average PSNR, compression ratio and visual quality of reconstructions compared with the existing 3-D wavelet transform (3DWT) coding methods and the motion-compensated 2-D wavelet transform (MC-WT) coding method. The new MC-3DWT coding scheme is suitable for very low bit-rate video coding.
Yilmaz, Ferkan
2014-04-01
The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed form - by means of the MGF of the signal-to-noise ratio. However, as presented in [1], specifically indicated in [2], and to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in the desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
Low Bit-Rate Image Compression using Adaptive Down-Sampling technique
Directory of Open Access Journals (Sweden)
V.Swathi
2011-09-01
Full Text Available In this paper, we use a practical approach of uniform down-sampling in image space while making the sampling adaptive through spatially varying, directional low-pass pre-filtering. The resulting down-sampled, pre-filtered image remains a conventional square sample grid and thus can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then up-converts it to the original resolution in a constrained least-squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass pre-filtering. The proposed compression approach of collaborative adaptive down-sampling and up-conversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that over-sampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.
International Nuclear Information System (INIS)
The rate of events caused by equipment failure at nuclear power plants has been decreasing steadily, while the rate of events caused by human error has not been decreasing nearly as rapidly. Using Rasmussen's skill-rule-knowledge based model, many day-to-day operations are classified as skill-based actions and, consequently, most of the errors in day-to-day operations are skill-based errors. In order to reduce the rate of skill-based errors, rules (procedures, forms, and checklists) have been developed. However, increasing complexity and redundancy in rules leads to an increasing rate of errors in making rules and in carrying them out. To minimize the error rate, a compromise must be chosen. This article suggests a practical method for understanding why events caused by human error are not decreasing as rapidly as those caused by equipment failure, and for developing effective actions to reduce the human error rate
Power consumption analysis of constant bit rate data transmission over 3G mobile wireless networks
DEFF Research Database (Denmark)
Wang, Le; Ukhanova, Ann; Belyaev, Evgeny
2011-01-01
This paper presents an analysis of the power consumption of data transmission with constant bit rate over 3G mobile wireless networks. Our work includes a description of the transition state machine in 3G networks, followed by a detailed energy consumption analysis and measurement results for the radio link power consumption. Based on this description and analysis, we propose a power consumption model. The power model was evaluated on the smartphone Nokia N900, which follows 3GPP Release 5 and 6 supporting HSDPA/HSPA data bearers. Further, we propose a method of parameter selection for 3GPP...
Power consumption analysis of constant bit rate video transmission over 3G networks
DEFF Research Database (Denmark)
Ukhanova, Ann; Belyaev, Evgeny; Wang, Le;
2012-01-01
This paper presents an analysis of the power consumption of video data transmission with constant bit rate over 3G mobile wireless networks. The work includes a description of the radio resource control transition state machine in 3G networks, followed by a detailed power consumption analysis and measurements of the radio link power consumption. Based on this description and analysis, we propose our power consumption model. The power model was evaluated on a smartphone Nokia N900, which follows 3GPP Release 5 and 6 supporting HSDPA/HSUPA data bearers. We also propose a method for parameter selection...
A fast rise-rate, adjustable-mass-bit gas puff valve for energetic pulsed plasma experiments
Energy Technology Data Exchange (ETDEWEB)
Loebner, Keith T. K., E-mail: kloebner@stanford.edu; Underwood, Thomas C.; Cappelli, Mark A. [Stanford Plasma Physics Laboratory, Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States)
2015-06-15
A fast rise-rate, variable mass-bit gas puff valve based on the diamagnetic repulsion principle was designed, built, and experimentally characterized. The ability to hold the pressure rise-rate nearly constant while varying the total overall mass bit was achieved via a movable mechanical restrictor that is accessible while the valve is assembled and pressurized. The rise-rates and mass-bits were measured via piezoelectric pressure transducers for plenum pressures between 10 and 40 psig and restrictor positions of 0.02-1.33 cm from the bottom of the linear restrictor travel. The mass-bits were found to vary linearly with the restrictor position at a given plenum pressure, while rise-rates varied linearly with plenum pressure but exhibited low variation over the range of possible restrictor positions. The ability to change the operating regime of a pulsed coaxial plasma deflagration accelerator by means of altering the valve parameters is demonstrated.
DEFF Research Database (Denmark)
Guerrero Gonzalez, Neil; Caballero Jambrina, Antonio; Borkowski, Robert; Arlunno, Valeria; Pham, Tien Thang; Rodes Lopez, Roberto; Zhang, Xu; Binti Othman, Maisara; Prince, Kamau; Yu, Xianbin; Jensen, Jesper Bevensee; Zibar, Darko; Tafur Monroy, Idelfonso
2011-01-01
A single reconfigurable DSP coherent receiver is experimentally demonstrated for mixed formats and bit-rates including QPSK, OFDM and IR-UWB for wireline and wireless signal types. Successful transmission over a deployed fiber link is achieved.
An Improved Rate Matching Method for DVB Systems Through Pilot Bit Insertion
Directory of Open Access Journals (Sweden)
Seyed Mohammad-Sajad Sadough
2012-09-01
Full Text Available Classically, obtaining different coding rates in turbo codes is achieved through the well-known puncturing procedure. However, puncturing is a critical procedure, since the way the encoded sequence is punctured directly influences the decoding performance. In this work, we propose to mix the data sequence at the turbo encoder input within the Digital Video Broadcasting (DVB) standard with some pilot (perfectly known) bits. By using variable pilot insertion rates, we achieve different coding rates with more flexibility. The proposed scheme is able to use a less complex mother code than that used in a conventional punctured turbo code. We also analyze the effect of different types of pilot insertion, such as random and periodic schemes. Simulation results provided in the context of DVB show that, in addition to providing flexible encoder design and reducing encoder complexity, pilot insertion can slightly improve the performance of turbo decoders compared to a conventional punctured turbo code.
Bit Rate Maximising Per-Tone Equalisation with Adaptive Implementation for DMT-Based Systems
Directory of Open Access Journals (Sweden)
Suchada Sitjongsataporn
2009-01-01
Full Text Available We present a bit rate maximising per-tone equalisation (BM-PTEQ) cost function that is based on an exact subchannel SNR as a function of the per-tone equaliser in discrete multitone (DMT) systems. We then introduce the proposed BM-PTEQ criterion, whose derivation is shown to inherit from the methodology of the existing bit rate maximising time-domain equalisation (BM-TEQ). By solving the nonlinear BM-PTEQ cost function, an adaptive BM-PTEQ approach based on a recursive Levenberg-Marquardt (RLM) algorithm is presented with the adaptive inverse square-root (iQR) algorithm for DMT-based systems. Simulation results confirm that the performance of the proposed adaptive iQR RLM-based BM-PTEQ converges close to that of the proposed BM-PTEQ. Moreover, the performance of both proposed BM-PTEQ algorithms is improved compared with the BM-TEQ.
Natural language processing with dynamic classification improves P300 speller accuracy and bit rate
Speier, William; Arnold, Corey; Lu, Jessica; Taira, Ricky K.; Pouratian, Nader
2012-02-01
The P300 speller is an example of a brain-computer interface that can restore functionality to victims of neuromuscular disorders. Although the most common application of this system has been communicating language, the properties and constraints of the linguistic domain have not to date been exploited when decoding brain signals that pertain to language. We hypothesized that combining the standard stepwise linear discriminant analysis with a Naive Bayes classifier and a trigram language model would increase the speed and accuracy of typing with the P300 speller. With integration of natural language processing, we observed significant improvements in accuracy and 40-60% increases in bit rate for all six subjects in a pilot study. This study suggests that integrating information about the linguistic domain can significantly improve signal classification.
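The fusion step in such a design can be sketched with Bayes' rule: the EEG classifier supplies a likelihood for each candidate character, and the trigram language model supplies a history-conditioned prior. The code below is a toy illustration with made-up probabilities, not the study's actual classifier or language model.

```python
def fuse(eeg_likelihood, trigram_prior):
    """Posterior over candidate characters:
    P(c | x, history) proportional to P(x | c) * P(c | history)."""
    scores = {c: eeg_likelihood[c] * trigram_prior.get(c, 1e-6)
              for c in eeg_likelihood}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

# Toy example: the EEG evidence slightly favours 'Q', but after the
# history "TH" a trigram prior makes 'E' the decoded character.
eeg = {'E': 0.30, 'Q': 0.40, 'X': 0.30}               # hypothetical P(x|c)
prior_after_TH = {'E': 0.60, 'Q': 0.001, 'X': 0.001}  # hypothetical P(c|"TH")
post = fuse(eeg, prior_after_TH)
print(max(post, key=post.get))  # E
```

Decisions that would be ambiguous from the brain signal alone can thus be resolved by linguistic context, which is one route to the accuracy and bit-rate gains reported above.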
Technological Advancements and Error Rates in Radiation Therapy Delivery
International Nuclear Information System (INIS)
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)–conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women’s Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher’s exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01–0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08–0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique
Technological Advancements and Error Rates in Radiation Therapy Delivery
Energy Technology Data Exchange (ETDEWEB)
Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women' s Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women' s Hospital/Dana Farber Cancer Institute, Boston, MA (United States)
2011-11-15
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.
McInerney, Peter; Adams, Paul; Hadi, Masood Z
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high-fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high-fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
Directory of Open Access Journals (Sweden)
Peter McInerney
2014-01-01
Full Text Available As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high-fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high-fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
Total Dose Effects on Error Rates in Linear Bipolar Systems
Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent
2007-01-01
The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.
International Nuclear Information System (INIS)
We propose a novel notch-filtering scheme for bit-rate transparent all-optical NRZ-to-PRZ format conversion. The scheme is based on a two-degree-of-freedom optimally designed fiber Bragg grating. It is shown that a notch filter optimized for any specific operating bit rate can be used to realize high-Q-factor format conversion over a wide bit rate range without requiring any tuning. (paper)
DEFF Research Database (Denmark)
Vaa, Michael; Mikkelsen, Benny; Jepsen, Kim Stokholm;
1996-01-01
A novel bit-rate flexible and very power efficient all-optical demultiplexer using differential optical control of a monolithically integrated Michelson interferometer with MQW SOAs is demonstrated at 40 to 10 Gbit/s. Gain switched DFB lasers provide ultra stable data and control signals....
DEFF Research Database (Denmark)
Caballero Jambrina, Antonio; Guerrero Gonzalez, Neil; Arlunno, Valeria; Borkowski, Robert; Pham, Tien-Thang; Rodes Lopez, Roberto; Zhang, Xu; Binti Othman, Maisara; Prince, Kamau; Yu, Xianbin; Jensen, Jesper Bevensee; Zibar, Darko; Tafur Monroy, Idelfonso
A single, reconfigurable, digital coherent receiver is proposed and experimentally demonstrated for converged wireless and optical fiber transport. The capacity of reconstructing the full transmitted optical field allows for the demodulation of mixed modulation formats and bit-rates. We performed...
DEFF Research Database (Denmark)
Diez, S.; Mecozzi, A.; Mørk, Jesper
1999-01-01
We investigate the saturation properties of four-wave mixing of short optical pulses in a semiconductor optical amplifier. By varying the gain of the optical amplifier, we find a strong dependence of both conversion efficiency and signal-to-background ratio on pulse width and bit rate. In...
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2011-06-01
In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
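The closed-form end-to-end BER above builds on the standard BPSK error probability. As an illustrative single-link AWGN baseline (not the paper's end-to-end expression), Pb = 0.5 * erfc(sqrt(Eb/N0)) can be checked against a Monte Carlo sketch:

```python
import math
import random

def bpsk_ber_theory(ebn0_db):
    """Theoretical single-link BPSK BER over AWGN: 0.5*erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def bpsk_ber_sim(ebn0_db, n_bits=200_000, seed=1):
    """Monte Carlo estimate: antipodal unit-energy symbols plus Gaussian noise."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise std for Eb = 1
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        tx = 1.0 if bit else -1.0
        rx = tx + rng.gauss(0.0, sigma)
        if (rx > 0) != bool(bit):
            errors += 1
    return errors / n_bits

theory = bpsk_ber_theory(6.0)   # BER at Eb/N0 = 6 dB
sim = bpsk_ber_sim(6.0)
```

The simulated rate converges on the analytic value; fading-channel BERs such as the paper's are obtained by averaging this conditional BER over the per-hop SNR distribution.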
Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael
2010-01-01
We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded in a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upsets (MBU) are also discussed.
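As a sketch of why TMR suppresses upsets: a 2-of-3 majority voter fails only when at least two copies are upset, so an independent per-copy upset probability p becomes roughly 3p^2 at the voted output. A minimal illustration (not the paper's model, which also treats correlated multiple-bit upsets):

```python
def majority(a, b, c):
    """Bitwise 2-of-3 majority voter, the core of TMR mitigation."""
    return (a & b) | (a & c) | (b & c)

def tmr_error_rate(p):
    """Probability the voted output is wrong when each redundant copy is
    upset independently with probability p: two or three copies must fail."""
    return 3 * p ** 2 * (1 - p) + p ** 3

raw = 1e-3
mitigated = tmr_error_rate(raw)   # ~3e-6, versus 1e-3 unmitigated
```

The independence assumption is exactly what an MBU violates, which is why the paper treats MBU impact separately.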
Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.
2014-01-01
Current approaches for building quantum computing devices focus on two-level quantum systems which nicely mimic the concept of a classical bit, albeit enhanced with additional quantum properties. However, rather than artificially limiting the number of states to two, the use of d-level quantum systems (qudits) could provide advantages for quantum information processing. Among other merits, it has recently been shown that multi-level quantum systems can offer increased stability to external di...
Error Growth Rate in the MM5 Model
Ivanov, S.; Palamarchuk, J.
2006-12-01
The goal of this work is to estimate model error growth rates in simulations of the atmospheric circulation by the MM5 model all the way from the short range to the medium range and beyond. The major topics addressed are: (i) searching for the optimal set of parameterization schemes; (ii) evaluating the spatial structure and scales of the model error for various atmospheric fields; (iii) determining geographical regions where model errors are largest; (iv) defining particular atmospheric patterns contributing to fast and significant model error growth. Results are presented for geopotential, temperature, relative humidity and horizontal wind component fields on standard surfaces over the Atlantic-European region during winter 2002. Various combinations of parameterization schemes for cumulus, PBL, moisture and radiation are used to identify which combination yields the smallest difference between the model state and analysis. The comparison of the model fields is carried out against the ERA-40 reanalysis of the ECMWF. Results show that the rate at which the model error grows, as well as its magnitude, varies with the forecast range, atmospheric variable and level. The typical spatial scale and structure of the model error also depend on the particular atmospheric variable. The distribution of the model error over the domain can be separated into two parts: steady and transient. The steady part is associated with a few high mountain regions, including Greenland, where the model error is larger. The transient part mainly moves along with areas of high gradients in the atmospheric flow. Acknowledgement: This study has been supported by NATO Science for Peace grant #981044. The MM5 modelling system used in this study has been provided by UCAR. ERA-40 re-analysis data have been obtained from the ECMWF data server.
Institute of Scientific and Technical Information of China (English)
贾徽徽; 王潮; 顾健; 陆臻
2016-01-01
Error bits in side-channel attacks on ECC are difficult to avoid and cannot be corrected quickly. In this paper, a new search algorithm based on the Grover quantum search algorithm is proposed: it combines the Grover quantum search algorithm with the meet-in-the-middle attack and applies the result to side-channel attacks on ECC. The algorithm can recover a key of size N containing M error bits in far fewer steps than the classical search algorithm, whose computational complexity is O(N^(M+1)), so the computational complexity is greatly reduced. The analysis shows that the algorithm corrects the error bits arising in the ECC attack with success rate 1.
Evaluation of soft errors rate in a commercial memory EEPROM
International Nuclear Information System (INIS)
Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. The soft error susceptibility differs between memory technologies; hence, experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented, with the results for SER in a commercial electrically erasable and programmable read-only memory (EEPROM). (author)
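Accelerated SER testing of this kind scales the upsets observed under a high flux back to field conditions. A minimal sketch with hypothetical numbers (the fluxes, upset count and device size below are illustrative assumptions, not the paper's data):

```python
def accelerated_ser_fit_per_mbit(upsets, test_hours, accel_flux, field_flux, n_bits):
    """Scale upsets observed under an accelerated neutron flux back to field
    conditions and express the soft error rate in FIT per Mbit
    (failures per 1e9 device-hours, normalised to one megabit).
    Both fluxes must share the same units (here: n/cm^2 per hour)."""
    accel_factor = accel_flux / field_flux
    field_errors_per_hour = upsets / (test_hours * accel_factor)
    fit = field_errors_per_hour * 1e9
    return fit / (n_bits / 1e6)

# Hypothetical test: 50 upsets in 2 h of beam time at 1e8 n/cm^2/s,
# referred to an assumed field flux of ~13 n/cm^2/h, 128 Mbit device
ser = accelerated_ser_fit_per_mbit(
    upsets=50, test_hours=2.0,
    accel_flux=1e8 * 3600.0,   # n/cm^2 per hour in the beam
    field_flux=13.0,           # n/cm^2 per hour in the field (assumed)
    n_bits=128 * 2**20)
```

The acceleration factor is simply the flux ratio, which is what makes reactor-based testing of rare field events practical.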
Moretti, M.; Janssen, G.J.M.
2000-01-01
The transmission modulation system minimizes the wasted 'out of band' power. The digital data (1) to be transmitted is fed via a pulse response filter (2) to a mixer (4) where it modulates a carrier wave (4). The digital data is also fed via a delay circuit (5) and identical filter (6) to a second m
Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link
Berioli Matteo; Kissling Christian; Lapeyre Rémi
2007-01-01
The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance which can be achieved in terms of BER/PER, spectral efficiency, and interarrival, dura...
The 95% confidence intervals of error rates and discriminant coefficients
Directory of Open Access Journals (Sweden)
Shuichi Shinmura
2015-02-01
Full Text Available Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms by Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research was inferior to the decision tree logic developed by the medical doctor. After this experience, we discriminated many data sets and found four problems of discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
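The proposed k-fold procedure can be sketched as follows. The fold-wise normal approximation below (mean +/- 1.96 * SE over folds) is one simple way to form a 95% CI and may differ in detail from the paper's method:

```python
import random
import statistics

def kfold_error_ci(pairs, predict, k=10, seed=0):
    """k-fold cross-validation of a classifier's error rate with a 95% CI
    from the fold-wise mean +/- 1.96 * SE (normal approximation)."""
    data = list(pairs)
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    fold_errors = [sum(1 for x, y in fold if predict(x) != y) / len(fold)
                   for fold in folds]
    m = statistics.mean(fold_errors)
    se = statistics.stdev(fold_errors) / k ** 0.5
    return m, m - 1.96 * se, m + 1.96 * se

# Toy data: the true rule is sign(x); 10% of the labels are flipped,
# so the sign classifier's error rate should come out near 0.10
rng = random.Random(42)
pairs = []
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)
    y = (x > 0) if rng.random() > 0.1 else (x <= 0)
    pairs.append((x, y))
mean_err, lo, hi = kfold_error_ci(pairs, predict=lambda x: x > 0)
```

Refitting the discriminant inside each fold (rather than scoring a fixed rule, as here) additionally yields fold-wise coefficient estimates, from which coefficient CIs follow the same way.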
QVBR-MAC: A QoS-Oriented MAC Protocol for Variable-Bit-Rate MC-CDMA Wireless LANs
Berlanda-Scorza, Giovanni; Sacchi, Claudio; Granelli, Fabrizio; Natale, Francesco
2004-01-01
Multicarrier Code Division Multiple Access (MC-CDMA) techniques were originally proposed in the mid-1990s for wideband multi-user communications in wireless environments characterised by hostile propagation conditions. Problems still to be solved relate to the provision of efficient channel resource allocation for variable-bit-rate transmission. In this work, the design of an MC-CDMA-based WLAN infrastructure is considered. The great advantage of MC-CDMA, i.e. the capability of supporti...
Ma, Jing; Li, Kangning; Tan, Liying; Yu, Siyuan; Cao, Yubin
2016-02-01
The error rate performances and outage probabilities of free-space optical (FSO) communications with spatial diversity are studied for Gamma-Gamma turbulent environments. Equal gain combining (EGC) and selection combining (SC) diversity are considered as practical schemes to mitigate turbulence. The exact bit-error rate (BER) expression and outage probability are derived for a direct-detection EGC multiple aperture receiver system. BER performances and outage probabilities are analyzed and compared for different numbers of sub-apertures, each having aperture area A, with EGC and SC techniques. BER performances and outage probabilities of a single monolithic aperture and of a multiple aperture receiver system with the same total aperture area are compared under thermal-noise-limited and background-noise-limited conditions. It is shown that the multiple aperture receiver system can greatly improve communication performance. These analytical tools are also useful in providing highly accurate error rate estimation for FSO communication systems.
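The aperture-averaging effect can be sketched by Monte Carlo: sampling Gamma-Gamma irradiance as the product of two gamma variates and comparing the outage of one large aperture against several sub-apertures of the same total area. The turbulence parameters and outage threshold below are illustrative assumptions, not the paper's settings:

```python
import random

def gamma_gamma(rng, alpha=4.0, beta=1.9):
    """One unit-mean Gamma-Gamma irradiance sample: product of large- and
    small-scale gamma fluctuations (alpha, beta set turbulence strength)."""
    return rng.gammavariate(alpha, 1.0 / alpha) * rng.gammavariate(beta, 1.0 / beta)

def outage_prob(n_aps, combiner, threshold=0.3, trials=20000, seed=7):
    """Outage probability for n_aps sub-apertures of fixed total area: each
    collects 1/n_aps of the light; EGC sums the branches, SC keeps the best."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        branches = [gamma_gamma(rng) / n_aps for _ in range(n_aps)]
        combined = sum(branches) if combiner == "egc" else max(branches)
        if combined < threshold:
            outages += 1
    return outages / trials

p_single = outage_prob(1, "egc")   # one monolithic aperture
p_egc4 = outage_prob(4, "egc")     # averaging four branches suppresses deep fades
p_sc4 = outage_prob(4, "sc")       # SC discards all but the best branch
```

With the same total collecting area, the EGC average over four branches suffers markedly fewer deep fades than the single aperture, in line with the paper's conclusion.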
Neutron-induced soft error rate measurements in semiconductor memories
International Nuclear Information System (INIS)
Soft error rate (SER) testing of devices has been performed using the neutron beam at the Radiation Science and Engineering Center at Penn State University. The soft error susceptibility for different memory chips working at different technology nodes and operating voltages is determined. The effect of 10B on SER as an in situ excess charge source is observed. The effect of higher-energy neutrons on circuit operation will be published later. The Penn State Breazeale Nuclear Reactor was used as the neutron source in the experiments. The high neutron flux allows for accelerated testing of the SER phenomenon. The experiments and analyses have been performed only on soft errors due to thermal neutrons. Various memory chips manufactured by different vendors were tested at various supply voltages and reactor power levels. The effect of the 10B reaction caused by thermal neutron absorption on SER is discussed
Celandroni, Nedo; Ferro, Erina; Mihal, Vlado; Potortì, Francesco
1992-01-01
This report describes the FODA system working at variable coding and bit rates (FODA/IBEA-TDMA). FODA/IBEA is the natural evolution of the FODA-TDMA satellite access scheme, which worked at a 2 Mbit/s fixed rate with data either 1/2-coded or uncoded. FODA-TDMA was used in the European SATINE-II experiment [8]. We note that the term FODA/IBEA system comprises the FODA/IBEA-TDMA (1) satellite access scheme and the hardware prototype realised by Marconi R.C. (U.K.). Both of them come fro...
Ruck, B.; Oelze, B.; Sodtke, E.
1997-12-01
We have designed and simulated a circuit for the experimental determination of the rate of dynamic switching errors in high-temperature superconductor RSFQ circuits. The proposal is that a series-connected pair of Josephson junctions is read out by SFQ pulses circulating in a ring-shaped Josephson transmission line at high frequency. Suitable bias currents determine the switching thresholds of the junction pair. By measuring the voltage across the transmission line, it is proposed that the occurrence of a switching error can be detected. The bit error rate can be determined from the mean time before false switching together with the SFQ circulation frequency. The circuit design allows measurements over a wide temperature range.
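The conversion from mean time before false switching to bit error rate described above is direct, since each circulation of the SFQ pulse corresponds to one read-out event. A minimal sketch with hypothetical numbers:

```python
def bit_error_rate(mean_time_to_error_s, circulation_freq_hz):
    """Each circulation of the SFQ pulse is one read-out, so
    BER = (errors per second) / (bits per second) = 1 / (T_mean * f_circ)."""
    return 1.0 / (mean_time_to_error_s * circulation_freq_hz)

# Hypothetical measurement: one false switch every 50 s at 10 GHz circulation
ber = bit_error_rate(50.0, 10e9)
```

This is why a modest dc voltage measurement of the mean time between errors suffices to resolve very low BERs at multi-GHz circulation frequencies.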
Energy Technology Data Exchange (ETDEWEB)
Ruck, B.; Oelze, B.; Sodtke, E. [Institute of Thin Film and Ion Technology, Forschungszentrum Juelich GmbH, Juelich (Germany)
1997-12-01
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
All-optical wavelength conversion at bit rates above 10 Gb/s using semiconductor optical amplifiers
DEFF Research Database (Denmark)
Jørgensen, Carsten; Danielsen, Søren Lykke; Stubkjær, Kristian; Schilling, M.; Daub, K.; Doussiere, P.; Pommerau, F.; Hansen, Peter Bukhave; Poulsen, Henrik Nørskov; Kloch, Allan; Vaa, Michael; Mikkelsen, Benny; Lach, E.; Laube, G.; Idler, W.; Wunstel, K.
1997-01-01
This work assesses the prospects for high-speed all-optical wavelength conversion using the simple optical interaction with the gain in semiconductor optical amplifiers (SOAs) via the interband carrier recombination. Operation and design guidelines for conversion speeds above 10 Gb/s are described ... and the various tradeoffs are discussed. Experiments at bit rates up to 40 Gb/s are presented for both cross-gain modulation (XGM) and cross-phase modulation (XPM) in SOAs, demonstrating the high-speed capability of these techniques...
Yilmaz, Ferkan
2012-07-01
Analysis of the average binary error probabilities (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels have been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated, and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of the authors' newly derived results. © 1972-2012 IEEE.
Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access
Zafar, Ammar
2012-12-29
In this paper, we present an optimal resource allocation scheme (ORA) for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints are considered on the system. We consider the cases of both individual and global power constraints, individual constraints only and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with direct link only and uniform power allocation (UPA) in terms of minimizing the SER for all three cases of different constraints. Numerical results also show that the individual constraints only case provides the best performance at large signal-to-noise ratio (SNR).
Directory of Open Access Journals (Sweden)
Balakrishna Konda
2012-11-01
Full Text Available The traditional serial-serial multiplier addresses high data sampling rates. It effectively computes the entire partial-product matrix in n data sampling cycles for an n×n multiplication, instead of the 2n cycles of conventional serial multipliers. The multiplier forms partial products from two serial inputs, one entering LSB-first and the other MSB-first. Using this feed sequence and accumulation technique, it takes only n cycles to complete the partial products. A high bit sampling rate is achieved by replacing the conventional full adders and 5:3 counters with an asynchronous 1's counter, whose critical path is limited to an AND gate and D flip-flops. Accumulation is an integral part of serial multiplier design: the 1's counters count the number of ones accumulated by the end of the nth iteration. The implemented multipliers consist of a serial-serial data accumulation module and a carry-save adder that occupies less silicon area than a full carry-save adder. In this paper we implement an 8-bit 2's-complement multiplier using the Baugh-Wooley algorithm, and the architecture for 8×8 serial-serial unsigned multiplication.
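The n-cycle schedule can be sketched at the behavioural level: with one operand arriving LSB-first and the other MSB-first, each cycle completes exactly the partial products whose bits have just arrived, so the full n×n matrix is covered in n cycles. A Python model of the schedule (a functional sketch only, with no gate-level or counter detail):

```python
def serial_serial_multiply(a, b, n=8):
    """Cycle-level sketch (not gate-level): operand a arrives LSB-first and b
    arrives MSB-first. At cycle t, bits a[0..t] and b[n-1-t..n-1] are present,
    and every partial product involving a newly arrived bit is formed, so the
    full n x n partial-product matrix is completed in n sampling cycles."""
    abits = [(a >> i) & 1 for i in range(n)]
    bbits = [(b >> i) & 1 for i in range(n)]
    total, formed = 0, set()
    for t in range(n):                         # n data sampling cycles
        for i in range(t + 1):                 # a-bits received so far
            for j in range(n - 1 - t, n):      # b-bits received so far
                if (i == t or j == n - 1 - t) and (i, j) not in formed:
                    formed.add((i, j))
                    total += (abits[i] & bbits[j]) << (i + j)
    assert len(formed) == n * n                # every partial product, exactly once
    return total
```

Each pair (i, j) is first available at cycle max(i, n-1-j), so all n^2 products appear exactly once across the n cycles and the accumulated sum equals the unsigned product.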
Roncin, Vincent; Gay, Mathilde; Bramerie, Laurent; Simon, Jean-Claude
2014-01-01
This paper presents a theoretical and experimental investigation of the optical signal regeneration properties of a non-linear optical loop mirror using a semiconductor optical amplifier as the active element (SOA-NOLM). While this device has been extensively studied for optical time division demultiplexing (OTDM) and wavelength conversion applications, our proposed approach, based on a reflective configuration, has not yet been investigated, particularly in the light of signal regeneration. The impact on the transfer function shape of different parameters, like the SOA position in the interferometer and the SOA input optical powers, is numerically studied to assess the regenerative capabilities of the device. Regenerative performance in association with a dual stage of SOAs to create a 3R regenerator which preserves the data polarity and the wavelength is experimentally assessed. Thanks to this complete regenerative function, a 100,000 km error-free transmission has been experimentally achieved at 10 Gb/s in a reci...
Bit-padding information guided channel hopping
Yang, Yuli
2011-02-01
In the context of multiple-input multiple-output (MIMO) communications, we propose a bit-padding information guided channel hopping (BP-IGCH) scheme which breaks the limitation that the number of transmit antennas has to be a power of two, building on the IGCH concept. The proposed scheme prescribes different bit-lengths to be mapped onto the indices of the transmit antennas and then uses a padding technique to avoid error propagation. Numerical results and comparisons, on both the capacity and the bit error rate performances, are provided and show the advantage of the proposed scheme. The BP-IGCH scheme not only offers lower complexity to realize the design flexibility, but also achieves better performance. © 2011 IEEE.
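The idea of mapping different bit-lengths onto antenna indices can be sketched with a hypothetical prefix map for three transmit antennas; the codewords and the padding rule below are illustrative assumptions, mirroring in spirit how BP-IGCH avoids a dangling fragment at the end of the stream:

```python
# Hypothetical prefix map for three transmit antennas (not a power of two):
# antenna 0 consumes one input bit, antennas 1 and 2 consume two bits each.
PREFIX_MAP = {"0": 0, "10": 1, "11": 2}

def map_bits_to_antennas(bits, prefix_map=PREFIX_MAP):
    """Select one antenna index per channel use by greedy prefix matching.
    A trailing fragment that matches no codeword is padded with '0' bits,
    which is the role padding plays in avoiding error propagation."""
    indices, i = [], 0
    maxlen = max(len(k) for k in prefix_map)
    while i < len(bits):
        for l in range(1, maxlen + 1):
            chunk = bits[i:i + l].ljust(l, "0")   # pad if the stream ran out
            if chunk in prefix_map:
                indices.append(prefix_map[chunk])
                i += l
                break
        else:
            raise ValueError("prefix map is not exhaustive")
    return indices

idx = map_bits_to_antennas("0110100")
```

Because the codewords form a prefix-free set, the mapping is uniquely decodable, and padding guarantees the final channel use is well defined.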
Pulse shaping for all-optical signal processing of ultra-high bit rate serial data signals
DEFF Research Database (Denmark)
Palushani, Evarist
The following thesis concerns pulse shaping and optical waveform manipulation for all-optical signal processing of ultra-high bit rate serial data signals, including generation of optical pulses in the femtosecond regime, serial-to-parallel conversion and terabaud coherent optical time division...... multiplexing (OTDM). Most of the thesis is focused on the utilization of space-time dualities for temporal pulse shaping and Fourier transformation. The space-time duality led to the implementation of the optical Fourier transform (OFT) technique, which was used as a crossing bridge between the temporal and...... spectral domains. By using the frequency-to-time OFT technique or optical temporal differentiators based on long-period gratings (LPGs), it was possible to generate narrow flat-top pulses in the picosecond regime, and use them for mitigation of timing jitter or polarization dependence effects in OTDM...
Energy Technology Data Exchange (ETDEWEB)
TerraTek
2007-06-30
A deep drilling research program titled 'An Industry/DOE Program to Develop and Benchmark Advanced Diamond Product Drill Bits and HP/HT Drilling Fluids to Significantly Improve Rates of Penetration' was conducted at TerraTek's Drilling and Completions Laboratory. Drilling tests were run to simulate deep drilling by using high bore pressures and high confining and overburden stresses. The purpose of this testing was to gain insight into practices that would improve rates of penetration and mechanical specific energy while drilling under high pressure conditions. Thirty-seven test series were run utilizing a variety of drilling parameters which allowed analysis of the performance of drill bits and drilling fluids. Five different drill bit types or styles were tested: four-bladed polycrystalline diamond compact (PDC), 7-bladed PDC in regular and long profile, roller-cone, and impregnated. There were three different rock types used to simulate deep formations: Mancos shale, Carthage marble, and Crab Orchard sandstone. The testing also analyzed various drilling fluids and the extent to which they improved drilling. The PDC drill bits provided the best performance overall. The impregnated and tungsten carbide insert roller-cone drill bits performed poorly under the conditions chosen. The cesium formate drilling fluid outperformed all other drilling muds when drilling in the Carthage marble and Mancos shale with PDC drill bits. The oil base drilling fluid with manganese tetroxide weighting material provided the best performance when drilling the Crab Orchard sandstone.
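Mechanical specific energy, used above as a drilling-efficiency metric alongside rate of penetration, is commonly computed with Teale's formula: the axial (weight-on-bit) term plus a rotary term. A sketch with illustrative numbers, not values from the test series:

```python
import math

def mechanical_specific_energy(wob_lbf, rpm, torque_ftlb, rop_ft_hr, bit_diam_in):
    """Teale's mechanical specific energy in psi: energy input per unit volume
    of rock removed. In oilfield units the rotary term reduces to
    480*RPM*T/(D^2*ROP) with torque in ft-lb, ROP in ft/hr, diameter in inches."""
    area = math.pi * bit_diam_in ** 2 / 4.0          # bit face area, in^2
    thrust_term = wob_lbf / area                     # weight-on-bit contribution
    rotary_term = 480.0 * rpm * torque_ftlb / (bit_diam_in ** 2 * rop_ft_hr)
    return thrust_term + rotary_term

# Illustrative inputs: 8.5 in bit, 30 klbf WOB, 120 RPM, 1000 ft-lb, 50 ft/hr
mse = mechanical_specific_energy(30000, 120, 1000, 50, 8.5)
```

The rotary term usually dominates; lowering MSE at a given ROP is exactly the kind of improvement the bit and fluid comparisons above target.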
Radiation effects of a 12-bit bipolar digital-to-analog converter under different dose rates
International Nuclear Information System (INIS)
Total-dose effects and room-temperature annealing behavior of a bipolar digital-to-analog converter (DAC) irradiated by 60Co γ-rays were investigated. The results show that the response of the DAC differs between low- and high-dose-rate irradiation. It was found that the integrated circuits exhibit enhanced low-dose-rate sensitivity (ELDRS) and a time-dependence effect as well. Based on the space charge model, a possible mechanism for this response is discussed. (authors)
International Nuclear Information System (INIS)
We have performed highly accurate numerical calculations of high bit rate impulse propagation through the seven digital communication channels of the atmosphere at RH 58% (10 g m−3). These calculations maximized bit rates for pathlengths equal to or longer than 100 m. We have experimentally verified our calculations for three channels with a propagation pathlength of 137 m and RH 65% (11.2 g m−3). Excellent agreement between measurement and theory was obtained for Channel 3 at 252 GHz, bit rate 84 Gb s−1, FWHM bandwidth (BW) 180 GHz; Channel 6 at 672 GHz, 45 Gb s−1, BW 84 GHz; and Channel 7 at 852 GHz, 56.8 Gb s−1, BW 108 GHz. (special issue article)
Forensic watermarking and bit-rate conversion of partially encrypted AAC bitstreams
Lemma, Aweke; Katzenbeisser, Stefan; Celik, Mehmet U.; Kirbiz, S.
2008-02-01
Electronic Music Distribution (EMD) is undergoing two fundamental shifts. The delivery over wired broadband networks to personal computers is being replaced by delivery over heterogeneous wired and wireless networks, e.g. 3G and Wi-Fi, to a range of devices such as mobile phones, game consoles and in-car players. Moreover, restrictive DRM models bound to a limited set of devices are being replaced by flexible standards-based DRM schemes and increasingly forensic tracking technologies based on watermarking. Success of these EMD services will partially depend on scalable, low-complexity and bandwidth efficient content protection systems. In this context, we propose a new partial encryption scheme for Advanced Audio Coding (AAC) compressed audio which is particularly suitable for emerging EMD applications. The scheme encrypts only the scale-factor information in the AAC bitstream with an additive one-time-pad. This allows intermediate network nodes to transcode the bitstream to lower data rates without accessing the decryption keys, by increasing the scale-factor values and re-quantizing the corresponding spectral coefficients. Furthermore, the decryption key for each user is customized such that the decryption process imprints the audio with a unique forensic tracking watermark. This constitutes a secure, low-complexity watermark embedding process at the destination node, i.e. the player. As opposed to server-side embedding methods, the proposed scheme lowers the computational burden on servers and allows for network level bandwidth saving measures such as multi-casting and caching.
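The additive one-time-pad on the scale factors is what makes key-less transcoding possible: raising every encrypted scale factor by a constant commutes with decryption. A toy sketch with illustrative integers (not the AAC bitstream syntax, and a seeded PRNG standing in for a true one-time-pad):

```python
import random

def keystream(key, n, mod=256):
    """Toy keystream for illustration only; a real one-time-pad would use
    fresh random key material, not a seeded PRNG."""
    rng = random.Random(key)
    return [rng.randrange(mod) for _ in range(n)]

def encrypt_scalefactors(sf, key):
    """Additive encryption of scale factors modulo 256; the spectral data
    itself stays in the clear, as in the partial-encryption scheme."""
    return [(s + k) % 256 for s, k in zip(sf, keystream(key, len(sf)))]

def transcode(enc_sf, delta):
    """A network node lowers the bit rate by raising every (still encrypted)
    scale factor by delta; additivity means no decryption key is needed."""
    return [(s + delta) % 256 for s in enc_sf]

def decrypt_scalefactors(enc_sf, key):
    return [(s - k) % 256 for s, k in zip(enc_sf, keystream(key, len(enc_sf)))]

sf = [60, 62, 58, 64]
enc = encrypt_scalefactors(sf, key=1234)
transcoded = transcode(enc, delta=2)            # done without the key
recovered = decrypt_scalefactors(transcoded, key=1234)
```

After decryption the player sees the transcoded (coarser-quantized) scale factors, exactly as if the transcoding had been applied to the plaintext.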
Measuring of Block Error Rates in High-Speed Digital Networks
Directory of Open Access Journals (Sweden)
Petr Ivaniga
2006-01-01
Full Text Available Error characteristics are a decisive factor in defining the transmission quality of digital networks. The ITU-T G.826 and G.828 recommendations specify error parameters for high-speed digital networks in relation to recommendation G.821. The paper describes the relations between the individual error parameters and the error rate, assuming that these are time-invariant.
Inter-bit prediction based on maximum likelihood estimate for distributed video coding
Klepko, Robert; Wang, Demin; Huchet, Grégory
2010-01-01
Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
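The 16-bit CRC used in CCSDS telemetry framing is the CCITT polynomial x^16 + x^12 + x^5 + 1 (0x1021) with initial value 0xFFFF. A bitwise sketch of that computation:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial x^16 + x^12 + x^5 + 1 (0x1021) and
    initial value 0xFFFF, the parameter set of the CCSDS 16-bit CRC."""
    for byte in data:
        crc ^= byte << 8                       # align the byte with the CRC top bits
        for _ in range(8):
            if crc & 0x8000:                   # MSB set: shift out and apply the poly
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

check = crc16_ccitt(b"123456789")              # standard check value 0x29B1
```

Table-driven variants process a byte per lookup, but the bitwise form above makes the polynomial division explicit.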
Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation
Swift, G.
2002-01-01
JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.
International Nuclear Information System (INIS)
Data indicate that about one half of all errors are skill-based. Yet most of the emphasis is focused on correcting rule- and knowledge-based errors, leading to more programs, supervision, and training. None of this corrective action applies to the 'mental lapse' error. Skill-based errors are usually committed while performing a routine and familiar task: workers go to the wrong unit or component, or get some other detail wrong. Too often some of these errors result in reactor scrams, turbine trips, or other unwanted actuations. The workers do not need more programs, supervision, or training. They need to know when they are vulnerable, and they need to know how to think. Self-checking can prevent errors, but only if it is practiced intellectually and with commitment. Skill-based errors are usually the result of relying on habits and senses instead of on our intellect. Even human factors can play a role in the cause of an error on a routine task. Personal injury, too, is usually the result of an error. Sometimes such events are called accidents, but most accidents are the result of inappropriate actions. Whether we can explain it or not, cause and effect were there. A proper attitude toward risk and toward danger is requisite to avoiding injury; many personal injuries can be avoided by attitude alone. This paper, based on personal experience and interviews, examines the reasons for 'mental lapse' errors and why some of us become injured. It offers corrective action without more programs, supervision, and training; it does, however, ask you to think differently. (author)
A FAST BIT-LOADING ALGORITHM FOR HIGH SPEED POWER LINE COMMUNICATIONS
Institute of Scientific and Technical Information of China (English)
Zhang Shengqing; Zhao Li; Zou Cairong
2012-01-01
Adaptive bit-loading is a key technology in high-speed power line communications using Orthogonal Frequency Division Multiplexing (OFDM) modulation. Given the practical limit on the transmitting power spectrum in high-speed power line communications, this paper explores adaptive bit-loading algorithms that maximize the number of transmitted bits while keeping the transmitting power spectral density and bit error rate below their upper limits. Starting from the characteristics of the power line channel, it first derives the optimal bit-loading algorithm, and then provides an improved algorithm with reduced computational complexity. Based on this analysis and simulation, it offers a non-iterative bit allocation algorithm; simulations show that this new algorithm greatly reduces the computational complexity while producing bit allocations close to optimal.
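The greedy principle underlying optimal bit loading can be sketched as follows. This is a minimal Hughes-Hartogs-style allocator, not the paper's non-iterative algorithm; the SNR-gap approximation for uncoded M-QAM and all parameter values are illustrative assumptions:

```python
import math

def greedy_bit_loading(gains, noise, psd_limit, ber=1e-3, max_bits=10):
    """Add one bit at a time to the subcarrier needing the least extra
    power, until the per-carrier PSD limit (or max_bits) is reached."""
    # SNR gap for M-QAM from the approximation BER ~ 0.2*exp(-1.5*SNR/(M-1))
    gap = -math.log(5 * ber) / 1.5

    def power(k, b):  # transmit power needed to carry b bits on subcarrier k
        return gap * (2 ** b - 1) * noise / gains[k]

    bits = [0] * len(gains)
    while True:
        # incremental power cost of one more bit, per still-loadable carrier
        candidates = [(power(k, bits[k] + 1) - power(k, bits[k]), k)
                      for k in range(len(gains))
                      if bits[k] < max_bits and power(k, bits[k] + 1) <= psd_limit]
        if not candidates:
            return bits
        _, k = min(candidates)
        bits[k] += 1

# strong subcarriers receive more bits than attenuated ones
print(greedy_bit_loading(gains=[4.0, 1.0, 0.25], noise=1.0, psd_limit=100.0))
```

The greedy loop is what makes the classical algorithm expensive (one pass per loaded bit), which is the complexity the paper's non-iterative allocation avoids.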
Roy, Urmimala; Register, Leonard F; Banerjee, Sanjay K
2016-01-01
Spin-transfer-torque random access memory (STT-RAM) is a promising candidate for the next generation of random-access memory due to improved scalability, read-write speeds and endurance. However, the write pulse duration must be long enough to ensure a low write error rate (WER), the probability that a bit will remain unswitched after the write pulse is turned off, in the presence of stochastic thermal effects. WERs on the scale of 10$^{-9}$ or lower are desired. Within a macrospin approximation, WERs can be calculated analytically using the Fokker-Planck method to this point and beyond. However, dynamic micromagnetic effects within the bit can alter the switching and lead to faster switching. Such micromagnetic effects can be addressed via numerical solution of the stochastic Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. However, determining WERs approaching 10$^{-9}$ would require well over 10$^{9}$ such independent simulations, which is infeasible. In this work, we explore calculation of WER using "rare event en...
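The infeasibility claim can be made concrete: the standard error of a brute-force Monte Carlo estimate of a switching probability p scales as sqrt(p(1-p)/N), so the required trial count follows directly. A back-of-the-envelope sketch, not a result from the paper:

```python
def mc_trials_needed(p, rel_err):
    """Number of independent LLGS runs needed so that the standard error
    of the estimated WER is rel_err * p (binomial sampling)."""
    return (1 - p) / (p * rel_err ** 2)

# ~1e11 independent simulations for a 10%-accurate estimate of a 1e-9 WER,
# which is why rare-event techniques are needed
print(f"{mc_trials_needed(1e-9, 0.1):.1e}")
```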
Error Rates in Users of Automatic Face Recognition Software.
Directory of Open Access Journals (Sweden)
David White
Full Text Available In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.
Mohammed, Usama S
2010-01-01
This paper proposes a new scheme for efficient rate allocation in conjunction with reducing peak-to-average power ratio (PAPR) in orthogonal frequency-division multiplexing (OFDM) systems. A modification of the set partitioning in hierarchical trees (SPIHT) image coder is proposed to generate four different groups of bit-stream according to their significance. The significant bits, the sign bits, the set bits and the refinement bits are transmitted in four different groups. The proposed method for reducing the PAPR applies unequal error protection (UEP) twice, using Reed-Solomon (RS) codes, in conjunction with bit-rate allocation and selective interleaving to provide minimum PAPR. The output bit-stream from the source coder (SPIHT) starts with the most significant types of bits (first group of bits). The optimal unequal error protection (UEP) of the four groups is proposed based on the channel distortion. The proposed structure provides significant improvement in bit error rate (BER) performance. Per...
Takahashi, Koji; Matsui, Hideki; Nagashima, Tomotaka; Konishi, Tsuyoshi
2013-11-15
We demonstrate a resolution upgrade toward 6 bit optical quantization using a power-to-wavelength conversion without an increment of system parallelism. Expansion of the full-scale input range is employed in conjunction with reduction of the quantization step size, while keeping a sampling-rate transparency characteristic over several hundred GS/s. The effective number of bits is estimated to be 5.74 bit, and the integral nonlinearity error and differential nonlinearity error are estimated to be less than 1 least significant bit. PMID:24322152
Error Rate of the Kane Quantum Computer CNOT Gate in the Presence of Dephasing
Fowler, Austin G.; Wellard, Cameron J.; Hollenberg, Lloyd C. L.
2002-01-01
We study the error rate of CNOT operations in the Kane solid-state quantum computer architecture. A spin Hamiltonian is used to describe the system. Dephasing is included as exponential decay of the off-diagonal elements of the system's density matrix. Using available spin echo decay data, the CNOT error rate is estimated at approximately 10^{-3}.
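The dephasing model described, exponential decay of the off-diagonal density-matrix elements, can be sketched directly. A generic illustration; the state and T2 value are arbitrary, not the Kane-architecture parameters:

```python
import numpy as np

def dephase(rho, t, t2):
    """Multiply every off-diagonal element of a density matrix by
    exp(-t/T2); populations (the diagonal) are untouched."""
    out = rho.astype(complex).copy()
    off = ~np.eye(rho.shape[0], dtype=bool)   # mask selecting off-diagonal entries
    out[off] *= np.exp(-t / t2)
    return out

# equal superposition |+><+|: coherences shrink, populations stay at 1/2
rho = 0.5 * np.ones((2, 2))
print(dephase(rho, t=1.0, t2=2.0))
```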
Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt
2015-12-01
Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
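The algorithm-supported part of such a workflow can be sketched as a local-median outlier screen: a beat is only a candidate error, to be confirmed by visual inspection, when it deviates strongly from its neighbours. The window size and tolerance below are illustrative assumptions, not the published AVEC settings:

```python
import statistics

def flag_outliers(hr, window=5, tol=0.3):
    """Flag beat i as a candidate error when it deviates from the median
    of its neighbours by more than tol (here 30%). In an AVEC-style
    workflow a human inspects flagged values before removing them."""
    flags = []
    for i in range(len(hr)):
        lo, hi = max(0, i - window), min(len(hr), i + window + 1)
        neighbours = hr[lo:i] + hr[i + 1:hi]   # local context, excluding beat i
        med = statistics.median(neighbours)
        flags.append(abs(hr[i] - med) > tol * med)
    return flags

# a single spike (sensor artefact) is flagged; normal variability is kept
print(flag_outliers([60, 62, 61, 180, 63, 61]))
```

Deleting flagged values, rather than replacing them with means, is what preserves the natural HR variability the authors emphasize.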
Steven D. Levitt
1995-01-01
A strong, negative empirical correlation exists between arrest rates and reported crime rates. While this relationship has often been interpreted as support for the deterrence hypothesis, it is equally consistent with incapacitation effects, and/or a spurious correlation that would be induced by measurement error in reported crime rates. This paper attempts to discriminate between deterrence, incapacitation, and measurement error as explanations for the empirical relationship between arrest r...
Error Resilient Video Compression Using Behavior Models
Directory of Open Access Journals (Sweden)
Jacco R. Taal
2004-03-01
Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
AN APPLICATION OF LINEAR ERROR-BLOCK CODES IN STEGANOGRAPHY
Directory of Open Access Journals (Sweden)
Rabi DARITI
2011-01-01
Full Text Available We use linear error-block codes (LEBC) to design a new method of gray-scale image steganography. We exploit the fact that in an image some bits hide distortion better than others (not necessarily the least significant bits). Our method uses the cover bits in proportion to their ability to hide distortion. The results show that with a good choice of parameters, the change rate can also be made smaller.
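The core trick of code-based steganography, changing as few cover bits as possible to encode a message, is the standard syndrome-coding (matrix-embedding) idea. The sketch below uses the binary [7,4] Hamming code for concreteness rather than the paper's linear error-block codes:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is j in binary
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover, msg):
    """Flip at most one of 7 cover bits so that H @ stego (mod 2)
    equals the 3-bit message."""
    diff = ((H @ cover) % 2) ^ msg
    j = 4 * diff[0] + 2 * diff[1] + diff[2]   # column index to flip; 0 = none
    stego = cover.copy()
    if j:
        stego[j - 1] ^= 1
    return stego

def extract(stego):
    """The receiver recovers the message as the syndrome of the stego bits."""
    return (H @ stego) % 2

cover = np.array([0, 1, 0, 1, 1, 0, 1])
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
print(extract(stego), int(np.sum(cover != stego)))  # message recovered with <=1 change
```

Embedding 3 message bits while changing at most 1 of 7 cover bits is exactly the kind of low change rate the abstract refers to.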
Theoretical Limits on Errors and Acquisition Rates in Localizing Switchable Fluorophores
Small, Alexander R
2008-01-01
A variety of recent imaging techniques are able to beat the diffraction limit in fluorescence microscopy by activating and localizing subsets of the fluorescent molecules in the specimen, and repeating this process until all of the molecules have been imaged. In these techniques there is a tradeoff between speed (activating more molecules per imaging cycle) and error rates (activating more molecules risks producing overlapping images that hide information on molecular positions), and so intelligent image-processing approaches are needed to identify and reject overlapping images. We introduce here a formalism for defining error rates, derive a general relationship between error rates, image acquisition rates, and the performance characteristics of the image processing algorithms, and show that there is a minimum acquisition time irrespective of algorithm performance. We also consider algorithms that can infer molecular positions from images of overlapping blurs, and derive the dependence of the minimum acquis...
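The speed/error tradeoff can be sketched with a simple spatial Poisson approximation: the more molecules activated per cycle, the larger the fraction whose blurs overlap. This is a simplified model assuming uniformly random positions; the field-of-view area and PSF radius below are made-up numbers:

```python
import math

def overlap_fraction(n_active, fov_area, psf_radius):
    """Expected fraction of activated molecules whose image overlaps at
    least one other, treating positions as a 2D Poisson process; two
    blurs overlap when their centres are closer than 2*psf_radius."""
    density = n_active / fov_area
    return 1.0 - math.exp(-density * math.pi * (2.0 * psf_radius) ** 2)

# doubling the activation density raises the overlap (error) fraction
for n in (50, 100, 200):
    print(n, round(overlap_fraction(n, fov_area=100.0, psf_radius=0.15), 3))
```

Rejecting overlaps keeps errors down but wastes activations, which is one way to see why a minimum total acquisition time exists.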
Zollanvari, Amin
2013-05-24
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
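For readers unfamiliar with the estimators involved, plain resubstitution simply re-scores the classifier on its own training data. A generic illustration of the estimator with toy data, not the paper's Gaussian-model analysis:

```python
import numpy as np

def resubstitution_error(X, y, w, b):
    """Resubstitution estimate: fraction of the *training* points that
    the linear discriminant sign(w.x + b) misclassifies. Re-using the
    training data makes it optimistically biased, which is why smoothed
    variants and bias corrections are studied."""
    pred = (X @ w + b > 0).astype(int)
    return float(np.mean(pred != y))

X = np.array([[-1.0, 0.0], [-2.0, 1.0], [0.5, 0.0], [1.0, 0.0], [2.0, -1.0]])
y = np.array([0, 0, 0, 1, 1])          # third point sits on the wrong side
print(resubstitution_error(X, y, w=np.array([1.0, 0.0]), b=0.0))
```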
Switching field distribution of exchange coupled ferri-/ferromagnetic composite bit patterned media
Oezelt, Harald; Fischbacher, Johann; Matthes, Patrick; Kirk, Eugenie; Wohlhüter, Phillip; Heyderman, Laura Jane; Albrecht, Manfred; Schrefl, Thomas
2016-01-01
We investigate the switching field distribution and the resulting bit error rate of exchange coupled ferri-/ferromagnetic bilayer island arrays by micromagnetic simulations. Using islands with varying microstructure and anisotropic properties, the intrinsic switching field distribution is computed. The dipolar contribution to the switching field distribution is obtained separately by using a model of a hexagonal island array resembling $1.4\\,\\mathrm{Tb/in}^2$ bit patterned media. Both contributions are computed for different thickness of the soft exchange coupled ferrimagnet and also for ferromagnetic single phase FePt islands. A bit patterned media with a bilayer structure of FeGd($5\\,\\mathrm{nm}$)/FePt($5\\,\\mathrm{nm}$) shows a bit error rate of $10^{-4}$ with a write field of $1.2\\,\\mathrm{T}$.
Bits of String and Bits of Branes
Bergman, Oren
1996-01-01
String-bit models are both an efficient way of organizing string perturbation theory, and a possible non-perturbative composite description of string theory. This is a summary of ideas and results of string-bit and superstring-bit models, as presented in the Strings '96 conference.
Estimating the annotation error rate of curated GO database sequence annotations
Directory of Open Access Journals (Sweden)
Brown Alfred L
2007-05-01
Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible, and their predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.
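The first step of the method, corrupting annotations at a known rate, can be sketched as below. The function name and the toy GO-term vocabulary are hypothetical, and the subsequent regression against BLAST-based precision is omitted:

```python
import random

def inject_errors(annotations, rate, vocab, seed=0):
    """Replace each annotation term with a randomly chosen *different*
    term with probability `rate`, yielding a dataset whose additional
    error rate is known by construction."""
    rng = random.Random(seed)
    return [rng.choice([t for t in vocab if t != a]) if rng.random() < rate else a
            for a in annotations]

terms = ["GO:0003677", "GO:0005634", "GO:0006355"]
print(inject_errors(terms * 2, rate=0.0, vocab=terms) == terms * 2)  # True
```

Repeating this at several known rates and regressing the measured precision on the injected rate is what lets the baseline (rate-zero) error level be estimated.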
Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors
Energy Technology Data Exchange (ETDEWEB)
Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)
2011-02-15
Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.
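The statistic behind these findings is the ordinary Pearson correlation between per-beam passing rates and anatomy dose-metric differences. For reference, a generic implementation with illustrative numbers (not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# a high passing rate paired with a large dose error (a "false negative")
# drags the correlation toward zero or even positive values
passing = [99.0, 97.5, 95.0, 92.0]
dose_err = [4.0, 1.0, 3.5, 1.5]
print(round(pearson_r(passing, dose_err), 3))
```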
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
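The q-generalized family-wise error rate being controlled can be illustrated by simulation for the simplest case: a single-step procedure with independent, uniformly distributed p-values under the null. The alpha level and test counts below are illustrative, not from the clinical trial examples:

```python
import random

def simulate_gfwer(m0, q, alpha, reps=20000, seed=1):
    """Monte Carlo q-generalized FWER of the single-step procedure that
    rejects H_i when p_i <= alpha, for m0 independent true nulls with
    uniform p-values: the probability of making >= q false rejections."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        false_rejections = sum(rng.random() <= alpha for _ in range(m0))
        hits += false_rejections >= q
    return hits / reps

# q = 1 recovers the classical FWER, which is 1 - (1 - alpha)^m0 here;
# allowing q = 2 false rejections gives a much smaller global error rate
print(simulate_gfwer(m0=10, q=1, alpha=0.05))
print(simulate_gfwer(m0=10, q=2, alpha=0.05))
```

The paper's contribution is the analogous type-II quantity (r-power) and closed-form sample-size formulas, which this Monte Carlo view only approximates.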
The Effect of Government Size on the Steady-State Unemployment Rate: An Error Correction Model
Burton A. Abrams; Siyan Wang
2007-01-01
The relationship between government size and the unemployment rate is investigated using an error-correction model that describes both the short-run dynamics and long-run determination of the unemployment rate. Using data from twenty OECD countries from 1970 to 1999 and after correcting for simultaneity bias, we find that government size, measured as total government outlays as a percentage of GDP, plays a significant role in affecting the steady-state unemployment rate. Importantly, when gov...
A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors
Zhang, Lei
2014-06-01
Orbital errors, typically characterized as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.
Voice recognition versus transcriptionist: error rates and productivity in MRI reporting
International Nuclear Information System (INIS)
Full text: Purpose: Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified, and productivity for VR-generated reports versus transcriptionist-generated reports in MRI. Methods: Fifty MRI reports generated by VR and 50 finalised MRI reports generated by the transcriptionist, from each of two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Results: Forty-two percent and 30% of the finalised VR reports for each of the radiologists investigated contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h, and for the transcriptionist reports TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Conclusion: Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR.
Directory of Open Access Journals (Sweden)
Berhane Yemane
2008-03-01
Full Text Available Abstract Background As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion The low sensitivity of parameter
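The comparison at the heart of the study is between rate estimates fitted to the clean and the deliberately corrupted data. The basic quantity involved is the crude mortality rate ratio, shown here as a generic epidemiological formula with made-up numbers, not Butajira figures:

```python
def mortality_rate_ratio(deaths_exposed, py_exposed, deaths_unexposed, py_unexposed):
    """Crude mortality rate ratio: deaths per person-year in the exposed
    group divided by deaths per person-year in the unexposed group."""
    return (deaths_exposed / py_exposed) / (deaths_unexposed / py_unexposed)

# illustrative: 20 deaths over 4000 person-years vs 15 deaths over 6000
print(mortality_rate_ratio(20, 4000, 15, 6000))
```

Because such ratios aggregate over many records, random misclassification of a small fraction of them moves the estimate only slightly, which is the robustness the study reports.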
Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies
Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.
2010-01-01
We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.
Error-rate prediction for programmable circuits: methodology, tools and studied cases
Velazco, Raoul
2013-05-01
This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing and the results of fault injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measures issued from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) in Louvain-la-Neuve (Belgium).
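The combination rule such approaches rely on can be sketched as a one-line estimate: the device-level SEU rate measured under beam, scaled by the application-level sensitivity measured by fault injection. Generic form with placeholder numbers, not the PowerPC 7448 or FPGA results:

```python
def predicted_error_rate(cross_section_cm2, flux_particles_cm2_s, p_app_error):
    """Predicted application error rate (errors/s): the device SEU rate
    from radiation ground testing (cross-section x flux) scaled by the
    fraction of injected SEUs that caused an application-level error
    in the off-beam fault-injection campaign."""
    return cross_section_cm2 * flux_particles_cm2_s * p_app_error

# e.g. sigma = 1e-9 cm^2, flux = 1e3 particles/(cm^2 s), 10% of SEUs harmful
print(predicted_error_rate(1e-9, 1e3, 0.1))  # about 1e-7 errors/s
```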
Reilly, James L.; Frankovich, Kyle; Hill, Scot; Gershon, Elliot S.; Keefe, Richard S.E.; Keshavan, Matcheri S.; Pearlson, Godfrey D.; Tamminga, Carol A.; John A. Sweeney
2013-01-01
Background: Elevated antisaccade error rate, reflecting problems with inhibitory behavioral control, is a promising intermediate phenotype for schizophrenia. Here, we consider whether it marks liability across psychotic disorders via common or different neurophysiological mechanisms and whether it represents a neurocognitive risk indicator apart from the generalized cognitive deficit. Methods: Schizophrenia (n = 267), schizoaffective (n = 150), and psychotic bipolar (n = 202) probands, their ...
Quantifying the Impact of Single Bit Flips on Floating Point Arithmetic
Energy Technology Data Exchange (ETDEWEB)
Elliott, James J [ORNL; Mueller, Frank [North Carolina State University; Stoyanov, Miroslav K [ORNL; Webster, Clayton G [ORNL
2013-08-01
In high-end computing, the collective surface area, smaller fabrication sizes, and increasing density of components have led to an increase in the number of observed bit flips. If mechanisms are not in place to detect them, such flips produce silent errors, i.e. the code returns a result that deviates from the desired solution by more than the allowed tolerance and the discrepancy cannot be distinguished from the standard numerical error associated with the algorithm. These phenomena are believed to occur more frequently in DRAM, but logic gates, arithmetic units, and other circuits are also susceptible to bit flips. Previous work has focused on algorithmic techniques for detecting and correcting bit flips in specific data structures; however, these techniques suffer from a lack of generality and often cannot be implemented in heterogeneous computing environments. Our work takes a novel approach to this problem. We focus on quantifying the impact of a single bit flip on specific floating-point operations. We analyze the error induced by flipping specific bits in the most widely used IEEE floating-point representation in an architecture-agnostic manner, i.e., without requiring proprietary information such as bit flip rates and the vendor-specific circuit designs. We initially study dot products of vectors and demonstrate that not all bit flips create a large error and, more importantly, that the expected value of the relative magnitude of the error is very sensitive to the bit pattern of the binary representation of the exponent, which strongly depends on scaling. Our results are derived analytically and then verified experimentally with Monte Carlo sampling of random vectors. Furthermore, we consider the natural resilience properties of solvers based on fixed-point iteration and we demonstrate how the resilience of the Jacobi method for linear equations can be significantly improved by rescaling the associated matrix.
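The sensitivity to *which* bit flips can be reproduced directly on the IEEE-754 encoding; this is a standard bit-manipulation sketch, independent of the paper's analysis:

```python
import struct

def flip_bit(x, k):
    """Return the double obtained by flipping bit k of x's IEEE-754
    encoding (bit 0 = least-significant mantissa bit, bits 52-62 the
    exponent field, bit 63 the sign)."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
    return y

print(flip_bit(1.0, 0))    # mantissa LSB: relative error of only 2**-52
print(flip_bit(1.0, 63))   # sign bit: negation
print(flip_bit(1.0, 62))   # top exponent bit: a catastrophic error
```

Flipping a low mantissa bit perturbs the value negligibly, while flipping a high exponent bit of 1.0 produces infinity, illustrating why the induced error depends so strongly on the exponent bit pattern and hence on scaling.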
Bock, Douglas G.; And Others
1984-01-01
This study (1) demonstrates the negative impact of profanity in a public speech and (2) sheds light on the conceptualization of the term "rating error." Implications for classroom teaching are discussed. (PD)
DRILL BITS FOR HORIZONTAL WELLS
Directory of Open Access Journals (Sweden)
Paolo Macini
1996-12-01
Full Text Available This paper underlines the importance of correct drill bit application in horizontal wells. After an analysis of the peculiarities of horizontal well and drainhole drilling techniques, the advantages and disadvantages of applying both roller-cone and fixed-cutter drill bits are discussed. A review of the specific features useful for correct drill bit selection in horizontal small-diameter holes is also highlighted. Drill bits for these special applications, whose importance is increasing rapidly nowadays, should be designed to deliver a good penetration rate at low WOB and, at the same time, to withstand high RPM without premature cutting structure failure or undergauge wear. Formation properties will also determine the cutting structure type and any specific features for additional gauge and shoulder protection.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h. PMID:27583533
International Nuclear Information System (INIS)
SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanism in an FPGA's configuration memory differs from those in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect. Fault injection appears to meet this requirement. In this paper, we propose a new methodology to analyze soft errors in SRAM-based FPGAs. The method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed-and-routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to obtain the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach. (semiconductor integrated circuits)
Zhongming, Wang; Zhibin, Yao; Hongxia, Guo; Min, Lu
2011-05-01
Chen, Jian; Dutton, Zachary; Lazarus, Richard; Guha, Saikat
2011-01-01
The quantum states of two laser pulses---coherent states---are never mutually orthogonal, making perfect discrimination impossible. Even so, coherent states can achieve the ultimate quantum limit for capacity of a classical channel, the Holevo capacity. Attaining this requires the receiver to make joint-detection measurements on long codeword blocks, optical implementations of which remain unknown. We report the first experimental demonstration of a joint-detection receiver, demodulating quaternary pulse-position-modulation (PPM) codewords at a word error rate of up to 40% (2.2 dB) below that attained with direct-detection, the largest error-rate improvement over the standard quantum limit reported to date. This is accomplished with a conditional nulling receiver, which uses optimized-amplitude coherent pulse nulling, single photon detection and quantum feedforward. We further show how this translates into coding complexity improvements for practical PPM systems, such as in deep-space communication. We antici...
Smadi, Mahmoud A.
2012-12-06
In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system with imperfect channel phase recovery is considered. The results presented demonstrate the system performance under realistic Nakagami-m fading and additive white Gaussian noise channels. The accuracy of the obtained results is verified by running the simulations at a 95% confidence level. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
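A minimal version of such a Monte Carlo error rate estimate, reduced to coherent BPSK over AWGN with a normal-approximation 95% confidence interval (the Nakagami-m fading and imperfect phase recovery treated in the paper are omitted; this is an illustrative sketch, not the authors' method):

```python
import math
import random

def simulate_ber(ebn0_db: float, n_bits: int, seed: int = 1):
    """Monte Carlo bit error rate of coherent BPSK over AWGN, with the
    half-width of a 95% confidence interval (normal approximation)."""
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise std for unit-energy bits
    errors = sum(1 for _ in range(n_bits) if 1.0 + rng.gauss(0.0, sigma) < 0.0)
    p_hat = errors / n_bits
    half_width = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n_bits)
    return p_hat, half_width

# As n_bits grows, p_hat approaches the exact BPSK/AWGN error rate
# 0.5*erfc(sqrt(Eb/N0)) and the interval shrinks like 1/sqrt(n_bits).
p_hat, hw = simulate_ber(4.0, 200_000)
theory = 0.5 * math.erfc(math.sqrt(10.0 ** 0.4))
```

The same confidence-interval bookkeeping carries over to fading channels; only the per-bit channel draw changes.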
Analysis of simultaneous multi-bit induced by a cosmic ray for onboard memory
International Nuclear Information System (INIS)
Accompanying the development of intelligent onboard equipment using high-density memories, the soft-error phenomenon, i.e. a bit upset induced by a cosmic ray, must be investigated. In particular, the simultaneous multi-bit error (SME) induced by a single cosmic ray, which is negligible on Earth, becomes significant in space use. This paper estimates the SME occurrence rate of a memory chip by computer simulation and describes the results of SME experiments using a cyclotron. The simulation and experimental results confirm SME occurrence and show that the layout of memory cells is important for the probability of SME occurrence. (author)
Schreiber, Jacob; Wescoe, Zachary L; Abu-Shumays, Robin; Vivian, John T; Baatar, Baldandorj; Karplus, Kevin; Akeson, Mark
2013-11-19
Cytosine, 5-methylcytosine, and 5-hydroxymethylcytosine were identified during translocation of single DNA template strands through a modified Mycobacterium smegmatis porin A (M2MspA) nanopore under control of phi29 DNA polymerase. This identification was based on three consecutive ionic current states that correspond to passage of modified or unmodified CG dinucleotides and their immediate neighbors through the nanopore limiting aperture. To establish quality scores for these calls, we examined ~3,300 translocation events for 48 distinct DNA constructs. Each experiment analyzed a mixture of cytosine-, 5-methylcytosine-, and 5-hydroxymethylcytosine-bearing DNA strands that contained a marker that independently established the correct cytosine methylation status at the target CG of each molecule tested. To calculate error rates for these calls, we established decision boundaries using a variety of machine-learning methods. These error rates depended upon the identity of the bases immediately 5' and 3' of the targeted CG dinucleotide, and ranged from 1.7% to 12.2% for a single-pass read. We estimate that Q40 values (0.01% error rates) for methylation status calls could be achieved by reading single molecules 5-19 times depending upon sequence context. PMID:24167260
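The relationship between single-pass error rate, Phred Q score, and the number of reads needed for Q40 can be illustrated with a simplified i.i.d. majority-vote model (an assumption of this sketch; the paper's classifier-based error rates are context-dependent and its combination rule is not specified here):

```python
import math

def phred(p_error: float) -> float:
    """Phred quality score Q = -10*log10(p); Q40 means a 0.01% error rate."""
    return -10.0 * math.log10(p_error)

def majority_vote_error(p: float, n_reads: int) -> float:
    """Probability that a majority of n_reads independent single-pass calls
    (each wrong with probability p) is wrong; n_reads is assumed odd."""
    return sum(math.comb(n_reads, k) * p ** k * (1.0 - p) ** (n_reads - k)
               for k in range(n_reads // 2 + 1, n_reads + 1))

def reads_for_q40(p: float) -> int:
    """Smallest odd number of reads pushing the combined error to Q40 (1e-4)."""
    n = 1
    while majority_vote_error(p, n) > 1e-4:
        n += 2
    return n

# At the paper's best single-pass error rate of 1.7%, five reads suffice
# under this model, consistent with the quoted 5-19 read range.
assert reads_for_q40(0.017) == 5
```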
Curve fitting and error modeling for the digitization process near the Nyquist rate
Energy Technology Data Exchange (ETDEWEB)
Baumgart, C.W.; Dunham, M.E. (EG and G Energy Measurements, Inc., Las Vegas, NV (United States)); Moses, J.D. (Los Alamos National Lab., NM (United States))
1992-03-01
The Nyquist and Shannon theorems originated the concept of sampling a band-limited signal at a minimum rate. By sampling at this minimum rate, enough information is gathered to allow an accurate reconstruction of the original analog signal. These theorems were derived for time-quantized signals and did not include simultaneous amplitude quantization. In addition, the underlying assumptions on which these theorems were based are violated in typical use. Therefore, actual practice in data acquisition has been to oversample the signal bandwidth by two to three times to preserve accuracy. We report a new numerical investigation of digitization process accuracy with respect to sample rate, sample amplitude resolution, and record length. This investigation is based on a computer algorithm that reconstructs original analog test signals from their ideally digitized representations. A Monte Carlo technique is used to simulate simultaneous time and amplitude quantization of the test signals, followed by an optimal least-squares curve-fit routine which reconstructs the input signal from the digitized data. In this way, we examine the error sensitivity of each reconstructed signal parameter to the digitization process. We find that although no specific Nyquist limit exists for a known wave shape, the parameter errors vary continuously with respect to the aforementioned variables, and critical sample densities of two to four sample periods per risetime are seen. Plots of curve-fitted parameter error versus fundamental digitization variables are useful in specifying experimental tasks and indicate further directions for reconstruction algorithm development. 12 refs.
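A toy version of this reconstruction experiment, assuming the signal frequency is known so the least-squares fit is linear (hypothetical parameters throughout; the paper fits all signal parameters):

```python
import math

def fit_sinusoid(ts, ys, omega: float):
    """Least-squares fit of y ~ a*sin(omega*t) + b*cos(omega*t) for a known
    frequency omega, via the 2x2 normal equations. Returns (a, b)."""
    s_ss = sum(math.sin(omega * t) ** 2 for t in ts)
    s_cc = sum(math.cos(omega * t) ** 2 for t in ts)
    s_sc = sum(math.sin(omega * t) * math.cos(omega * t) for t in ts)
    t_s = sum(y * math.sin(omega * t) for t, y in zip(ts, ys))
    t_c = sum(y * math.cos(omega * t) for t, y in zip(ts, ys))
    det = s_ss * s_cc - s_sc ** 2
    return (t_s * s_cc - t_c * s_sc) / det, (t_c * s_ss - t_s * s_sc) / det

# A 1 Hz unit-amplitude sine sampled at 2.5 Hz (2.5 samples per period, near
# the critical density noted above) and quantized to 8 bits: the curve fit
# still recovers the amplitude closely despite the coarse digitization.
omega = 2.0 * math.pi
ts = [i / 2.5 for i in range(25)]
ys = [round(127 * math.sin(omega * t + 0.7)) / 127 for t in ts]
a, b = fit_sinusoid(ts, ys, omega)
amplitude = math.hypot(a, b)   # true value: 1.0
```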
Reducing error rates in straintronic multiferroic nanomagnetic logic by pulse shaping
Munira, Kamaram; Xie, Yunkun; Nadri, Souheil; Forgues, Mark B.; Salehi Fashami, Mohammad; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo; Ghosh, Avik W.
2015-06-01
Dipole-coupled nanomagnetic logic (NML), where nanomagnets (NMs) with bistable magnetization states act as binary switches and information is transferred between them via dipole coupling and Bennett clocking, is a potential replacement for conventional transistor logic, since magnets dissipate less energy than transistors when they switch in a logic circuit. Magnets are also 'non-volatile' and hence can store the results of a computation after the computation is over, thereby doubling as both logic and memory, a feat that transistors cannot achieve. However, dipole-coupled NML is much more error-prone than transistor logic at room temperature (>1%) because thermal noise can easily disrupt the magnetization dynamics. Here, we study a particularly energy-efficient version of dipole-coupled NML known as straintronic multiferroic logic (SML), where magnets are clocked/switched with electrically generated mechanical strain. By appropriately 'shaping' the voltage pulse that generates the strain, we show that the error rate in SML can be reduced to tolerable limits. We describe the error probabilities associated with various stress pulse shapes and discuss the trade-off between error rate and switching speed in SML. The lowest error probability is obtained when a 'shaped' high-voltage pulse is applied to strain the output NM, followed by a low-voltage pulse. The high-voltage pulse quickly rotates the output magnet's magnetization by 90° and aligns it roughly along the minor (or hard) axis of the NM. Next, the low-voltage pulse produces the critical strain to overcome the shape anisotropy energy barrier in the NM and produce a monostable potential energy profile in the presence of dipole coupling from the neighboring NM. The magnetization of the output NM then migrates to the global energy minimum in this monostable profile and completes a 180° rotation (magnetization flip) with high likelihood.
Rodriguez, Pilar; Maestre, Zuriñe; Martinez-Madrid, Maite; Reynoldson, Trefor B
2011-01-17
Sediments from 71 river sites in Northern Spain were tested using the oligochaete Tubifex tubifex (Annelida, Clitellata) chronic bioassay. 47 sediments were identified as reference primarily from macroinvertebrate community characteristics. The data for the toxicological endpoints were examined using non-metric MDS. Probability ellipses were constructed around the reference sites in multidimensional space to establish a classification for assessing test-sediments into one of three categories (Non Toxic, Potentially Toxic, and Toxic). The construction of such probability ellipses sets the Type I error rate. However, we also wished to include in the decision process for identifying pass-fail boundaries the degree of disturbance required to be detected, and the likelihood of being wrong in detecting that disturbance (i.e. the Type II error). Setting the ellipse size to use based on Type I error does not include any consideration of the probability of Type II error. To do this, the toxicological response observed in the reference sediments was manipulated by simulating different degrees of disturbance (simpacted sediments), and measuring the Type II error rate for each set of the simpacted sediments. From this procedure, the frequency at each probability ellipse of identifying impairment using sediments with known level of disturbance is quantified. Thirteen levels of disturbance and seven probability ellipses were tested. Based on the results the decision boundary for Non Toxic and Potentially Toxic was set at the 80% probability ellipse, and the boundary for Potentially Toxic and Toxic at the 95% probability ellipse. Using this approach, 9 test sediments were classified as Toxic, 2 as Potentially Toxic, and 13 as Non Toxic. PMID:20980065
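The Type I/Type II trade-off that drives the choice of probability-ellipse size can be sketched in one dimension (a deliberately simplified, hypothetical analogue with normal endpoints; not the paper's MDS-based procedure):

```python
import random

def type_ii_rate(alpha: float, disturbance: float,
                 n: int = 100_000, seed: int = 7) -> float:
    """Type II error rate of a one-sided test whose rejection threshold is
    the lower alpha-quantile of a reference N(0,1) endpoint distribution,
    applied to sites degraded by `disturbance` standard deviations (cf. the
    paper's 'simpacted' sediments). Type I rate is alpha by construction."""
    rng = random.Random(seed)
    ref = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
    threshold = ref[int(alpha * n)]
    missed = sum(1 for _ in range(n)
                 if rng.gauss(-disturbance, 1.0) > threshold)
    return missed / n

# Enlarging the ellipse (smaller alpha) lowers the Type I rate but raises
# the chance of missing a genuinely disturbed site (Type II error):
beta_95 = type_ii_rate(0.05, 2.0)
beta_99 = type_ii_rate(0.01, 2.0)
```

This is the quantitative tension the paper resolves by measuring Type II rates on simulated disturbance levels before fixing the 80% and 95% ellipse boundaries.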
Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl
2007-01-01
The performance of video over satellite is simulated. The error resilience tools of intra-macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback over the satellite link, in a cross-layer approach, is also simulated. The new Inmarsat BGAN system at 256 kbit/s is used as the test case. This system operates at low loss rates, guaranteeing a packet loss rate of not more than 10^-3. For high-end applications such as 'reporter-in-the-field' live broadcast, it is crucial to obtain high quality without increasing delay.
Jeffrey H. Bergstrand; Egger, Peter
2011-01-01
Bilateral investment treaties (BITs) have proliferated over the past 50 years such that the number of pairs of countries with BITs is roughly as large as the number of country-pairs that belong to bilateral or regional preferential trade agreements (PTAs). The purpose of this study is to provide the first systematic empirical analysis of the economic determinants of BITs and of the likelihood of BITs between pairs of countries using a qualitative choice model, and in a manner consistent with ...
Error rates and improved algorithms for rare event simulation with heavy Weibull tails
DEFF Research Database (Denmark)
Asmussen, Søren; Kortschak, Dominik
Let Y1,…,Yn be i.i.d. subexponential and Sn=Y1+⋯+Yn. Asmussen and Kroese (2006) suggested a simulation estimator for evaluating P(Sn>x), combining an exchangeability argument with conditional Monte Carlo. The estimator was later shown by Hartinger & Kortschak (2009) to have vanishing relative error. For the Weibull and related cases, we calculate the exact error rate and suggest improved estimators. These improvements can be seen as control variate estimators, but are rather motivated by second-order subexponential theory, which is also at the core of the technical proofs.
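The Asmussen-Kroese estimator referred to above can be sketched for unit-scale Weibull summands (illustration only; the improved control-variate estimators of the paper are not reproduced):

```python
import math
import random

def weibull_sf(y: float, beta: float) -> float:
    """Survival function of a unit-scale Weibull: P(Y > y) = exp(-y^beta)."""
    return 1.0 if y <= 0.0 else math.exp(-(y ** beta))

def asmussen_kroese(n: int, x: float, beta: float,
                    reps: int, seed: int = 3) -> float:
    """Asmussen-Kroese conditional Monte Carlo estimate of P(Sn > x) for
    n >= 2 i.i.d. Weibull(beta) summands with beta < 1 (heavy-tailed case).
    Each replicate draws Y1..Y_{n-1} and, by exchangeability of the index
    of the maximum, returns n * P(Y_n > max(M_{n-1}, x - S_{n-1}))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        ys = [rng.weibullvariate(1.0, beta) for _ in range(n - 1)]
        total += n * weibull_sf(max(max(ys), x - sum(ys)), beta)
    return total / reps

# For heavy tails, P(Sn > x) ~ n * P(Y > x); with n=2, beta=0.5, x=20 the
# estimate sits near the first-order asymptote 2*exp(-sqrt(20)) ≈ 0.023.
est = asmussen_kroese(2, 20.0, 0.5, reps=20_000)
```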
Performance monitoring following total sleep deprivation: effects of task type and error rate.
Renn, Ryan P; Cote, Kimberly A
2013-04-01
There is a need to understand the neural basis of performance deficits that result from sleep deprivation. Performance monitoring tasks generate response-locked event-related potentials (ERPs), generated from the anterior cingulate cortex (ACC) located in the medial surface of the frontal lobe that reflect error processing. The outcome of previous research on performance monitoring during sleepiness has been mixed. The purpose of this study was to evaluate performance monitoring in a controlled study of experimental sleep deprivation using a traditional Flanker task, and to broaden this examination using a response inhibition task. Forty-nine young adults (24 male) were randomly assigned to a total sleep deprivation or rested control group. The sleep deprivation group was slower on the Flanker task and less accurate on a Go/NoGo task compared to controls. General attentional impairments were evident in stimulus-locked ERPs for the sleep deprived group: P300 was delayed on Flanker trials and smaller to Go-stimuli. Further, N2 was smaller to NoGo stimuli, and the response-locked ERN was smaller on both tasks, reflecting neurocognitive impairment during performance monitoring. In the Flanker task, higher error rate was associated with smaller ERN amplitudes for both groups. Examination of ERN amplitude over time showed that it attenuated in the rested control group as error rate increased, but such habituation was not apparent in the sleep deprived group. Poor performing sleep deprived individuals had a larger Pe response than controls, possibly indicating perseveration of errors. These data provide insight into the neural underpinnings of performance failure during sleepiness and have implications for workplace and driving safety. PMID:23384887
Habteab Ghebretinsae, Aklilu; Molenberghs, Geert; Dmitrienko, Alex; Offen, Walt; Sethuraman, Gopalan
2014-01-01
In clinical trials, there is always the possibility of using data-driven adaptation at the end of a study. There is concern, however, that the type I error rate of the trial could be inflated by such a design, necessitating multiplicity adjustment. In this project, a simulation experiment was set up to assess the type I error rate inflation associated with switching dose group as a function of dropout rate at the end of the study, where the primary analysis is in terms of a longitudinal outcome. The simulation is inspired by a clinical trial in Alzheimer's disease. The type I error rate was assessed under a number of scenarios, in terms of differing correlations between efficacy and tolerance, different missingness mechanisms, and different probabilities of switching. A collection of parameter values was used to assess the sensitivity of the analysis. Results from ignorable likelihood analysis show that the type I error rate with and without switching was approximately the posited error rate for the various scenarios. Under last observation carried forward (LOCF), the type I error rate was substantially inflated both with and without switching. The type I error inflation is clearly connected to the criterion used for switching. While in general switching in a way related to the primary endpoint may impact the type I error, this was not the case for most scenarios in the longitudinal Alzheimer trial setting under consideration, where patients are expected to worsen over time. PMID:24697817
Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise
DEFF Research Database (Denmark)
Christensen, Lars P.B.
2005-01-01
Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined more closely, as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection can be achieved by a Multiple-Input Multiple-Output (MIMO) whitening filter followed by a traditional BCJR algorithm. The Gauss-Markov noise model provides a reasonable approximation for co-channel interference, making this an interesting single-user detector for many multiuser communication systems.
Error rate performance of FH/DPSK system in EMP environments
International Nuclear Information System (INIS)
In this paper, the effect of nuclear EMP interference on FH/DPSK system performance is analyzed. The EMP-induced interferer at the receiver is modeled as an exponentially damped sinusoidal wave in time. The error rate equation of the received FH/DPSK signal is derived and evaluated in terms of M (ary number), CIR (carrier power to initial interference power ratio), and α (damping factor). The numerical results are given in graphs to discuss the effect of EMP-induced interference on FH/DPSK system performance. (Author)
Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise
Souri, Hamza
2015-06-01
The Laplacian noise has received much attention during recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions for the conditional and the average probability of error are obtained in terms of the Fox H function. Simplifications for some special cases of fading are presented, and the resulting formulas often end up being expressed in terms of well-known elementary functions. Finally, the mathematical formalism is validated using some selected analytically based numerical results as well as Monte Carlo simulation-based results.
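A minimum-distance M-PSK detector under Laplacian noise can be checked by Monte Carlo (a sketch without the fading average of the paper; the noise scale of 0.2 and seeds are hypothetical choices):

```python
import cmath
import math
import random

def laplace(rng: random.Random, scale: float) -> float:
    """Zero-mean Laplacian variate via inverse-CDF sampling."""
    u = rng.random() - 0.5
    if u == -0.5:
        u = 0.0   # avoid log(0) on the measure-zero boundary
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def mpsk_ser(m: int, scale: float, n_sym: int, seed: int = 5) -> float:
    """Monte Carlo symbol error rate of M-PSK with a minimum-distance
    detector when each quadrature carries additive Laplacian noise."""
    rng = random.Random(seed)
    points = [cmath.exp(2j * math.pi * k / m) for k in range(m)]
    errors = 0
    for _ in range(n_sym):
        tx = rng.randrange(m)
        r = points[tx] + complex(laplace(rng, scale), laplace(rng, scale))
        if min(range(m), key=lambda k: abs(r - points[k])) != tx:
            errors += 1
    return errors / n_sym

ser4 = mpsk_ser(4, 0.2, 20_000)   # QPSK
ser8 = mpsk_ser(8, 0.2, 20_000)   # 8-PSK: denser constellation, higher SER
```

Such simulation estimates are exactly what the closed-form Fox H expressions of the paper are validated against.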
Forward error correction based on algebraic-geometric theory
A Alzubi, Jafar; M Chen, Thomas
2014-01-01
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results for the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results showing an improved bit error rate performance at the cost of high system complexity, due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTCs) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
The prevalence rates of refractive errors among children, adolescents, and adults in Germany
Directory of Open Access Journals (Sweden)
Sandra Jobke
2008-10-01
Full Text Available Sandra Jobke(1), Erich Kasten(2), Christian Vorwerk(3). (1) Institute of Medical Psychology and (3) Department of Ophthalmology, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany; (2) Institute of Medical Psychology, University Hospital Schleswig-Holstein, Luebeck, Germany. Purpose: The prevalence rates of myopia vary from 5% in Australian Aborigines to 84% in Hong Kong and Taiwan, 30% in Norwegian adults, and 49.5% in Swedish schoolchildren. The aim of this study was to determine the prevalence of refractive errors in German children, adolescents, and adults. Methods: The parents (aged 24–65 years) and their children (516 subjects aged 2–35 years) were asked to fill out a questionnaire about their refractive error and spectacle use. Emmetropia was defined as a refractive status between +0.25D and −0.25D. Myopia was defined as ≤−0.5D and hyperopia as ≥+0.5D. All information concerning refractive error was verified with the subjects' opticians. Results: The prevalence of myopia differed significantly between the investigated age groups: it was 0% in children aged 2–6 years, 5.5% in children aged 7–11 years, 21.0% in adolescents (aged 12–17 years), and 41.3% in adults aged 18–35 years (Pearson's chi-square, p = 0.000). Furthermore, 9.8% of children aged 2–6 years were hyperopic, as were 6.4% of children aged 7–11 years, 3.7% of adolescents, and 2.9% of adults (p = 0.380). The prevalence of myopia in females (23.6%) was significantly higher than in males (14.6%; p = 0.018). The difference between the self-reported refractive error and that reported by the opticians was very small and not significant (p = 0.850). Conclusion: In Germany, the prevalence of myopia seems to be somewhat lower than in Asia and elsewhere in Europe. There are few comparable studies concerning the prevalence of hyperopia. Keywords: Germany, hyperopia, incidence, myopia, prevalence
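For illustration, the study's diagnostic cut-offs can be encoded directly (a sketch; the 'unclassified' band is this sketch's assumption, since the abstract leaves magnitudes between 0.25D and 0.5D undefined):

```python
def classify_refraction(spherical_equivalent_d: float) -> str:
    """Classify refractive status with the study's cut-offs (dioptres):
    emmetropia in [-0.25, +0.25], myopia <= -0.5, hyperopia >= +0.5.
    Magnitudes strictly between 0.25D and 0.5D fall outside all three
    definitions and are labelled 'unclassified' here."""
    if -0.25 <= spherical_equivalent_d <= 0.25:
        return "emmetropia"
    if spherical_equivalent_d <= -0.5:
        return "myopia"
    if spherical_equivalent_d >= 0.5:
        return "hyperopia"
    return "unclassified"

def myopia_prevalence(refractions) -> float:
    """Fraction of subjects classified as myopic."""
    rs = list(refractions)
    return sum(classify_refraction(r) == "myopia" for r in rs) / len(rs)
```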
The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.
Fadaee, Shannon B; Migliaccio, Americo A
2016-04-01
The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation. PMID:26715411
Thermal neutron induced soft error rate measurement in semiconductor memories and circuits
International Nuclear Information System (INIS)
Soft error rate (SER) testing and measurements of semiconductor circuits with different operating voltages and operating conditions have been performed using the thermal neutron beam at the Radiation Science and Engineering Center (RSEC) at Penn State University. The high neutron flux allows accelerated SER testing by increasing the reaction rate density inside the tested device, which gives more precise experimental data with shorter run times. The effect of different operating voltages and operating conditions on an Intel PXA270 processor has been experimentally determined. Experimental results showed that the main failure mechanism was segmentation faults in the system. The failure response of the system to the operating conditions was in agreement with the general behavior of SERs. (author)
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The partly linear regression model is useful in practice, but little has been done in the literature to adapt it to real data that are dependent and conditionally heteroscedastic. In this paper, estimators of the regression components are constructed via local polynomial fitting and their large-sample properties are explored. Under certain mild regularity conditions, it is shown that the estimators of the nonparametric component and its derivatives are consistent up to convergence rates that are optimal in the i.i.d. case, and that the estimator of the parametric component is root-n consistent, with the same rate as for a parametric model. The technique adopted in the proof differs from that used, and corrects errors, in the reference by Hamilton and Truong under i.i.d. samples.
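A minimal sketch of local polynomial fitting of order p = 1 (local linear, Gaussian kernel; a generic illustration, not the estimator construction of the paper):

```python
import math

def local_linear_fit(xs, ys, x0: float, h: float) -> float:
    """Local linear estimate of m(x0): weighted least squares of y on
    (x - x0) with Gaussian kernel weights of bandwidth h; the fitted
    intercept estimates the nonparametric component m at x0."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    sw = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * y * (x - x0) for wi, x, y in zip(w, xs, ys))
    return (s2 * t0 - s1 * t1) / (sw * s2 - s1 ** 2)

# Noiseless check on m(x) = x^2: the local linear fit at x0 = 0.5 carries
# only the usual O(h^2) smoothing bias.
xs = [i / 50 for i in range(51)]
ys = [x * x for x in xs]
m_hat = local_linear_fit(xs, ys, 0.5, 0.05)   # close to 0.5^2 = 0.25
```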
Modeling the cosmic-ray-induced soft-error rate in integrated circuits: An overview
International Nuclear Information System (INIS)
This paper is an overview of the concepts and methodologies used to predict soft-error rates (SER) due to cosmic and high-energy particle radiation in integrated circuit chips. The paper emphasizes the need for the SER simulation using the actual chip circuit model which includes device, process, and technology parameters as opposed to using either the discrete device simulation or generic circuit simulation that is commonly employed in SER modeling. Concepts such as funneling, event-by-event simulation, nuclear history files, critical charge, and charge sharing are examined. Also discussed are the relative importance of elastic and inelastic nuclear collisions, rare event statistics, and device vs. circuit simulations. The semi-empirical methodologies used in the aerospace community to arrive at SERs [also referred to as single-event upset (SEU) rates] in integrated circuit chips are reviewed. This paper is one of four in this special issue relating to SER modeling. Together, they provide a comprehensive account of this modeling effort, which has resulted in a unique modeling tool called the Soft-Error Monte Carlo Model, or SEMM
Modified Golden Codes for Improved Error Rates Through Low Complex Sphere Decoder
Directory of Open Access Journals (Sweden)
K.Thilagam
2013-05-01
Full Text Available In recent years, golden codes have been shown to exhibit superior performance in wireless MIMO (Multiple Input Multiple Output) scenarios compared with other codes. However, a serious limitation associated with them is their increased decoding complexity. This paper attempts to resolve this challenge through a suitable modification of the golden code such that a less complex sphere decoder can be used without much compromise in error rates. A minimum polynomial equation is introduced to obtain a reduced golden ratio (RGR) number for the golden code, which demands only a low-complexity decoding procedure. One of the attractive approaches used in this paper is that the effective channel matrix is exploited to perform single-symbol-wise decoding instead of decoding grouped symbols, using a sphere decoder with a tree search algorithm. It is observed that a low decoding complexity of O(q^1.5) is obtained, against O(q^2.5) for the conventional method. Simulation analysis shows that, in addition to reduced decoding complexity, improved error rates are also obtained.
International Nuclear Information System (INIS)
A quantum-information analysis of how the size and dimensionality of the quantum alphabet affect the critical error rate of the quantum-key-distribution (QKD) protocols is given on an example of two QKD protocols--the six-state and ∞-state (i.e., a protocol with continuous alphabet) ones. In the case of a two-dimensional Hilbert space, it is shown that, under certain assumptions, increasing the number of letters in the quantum alphabet up to infinity slightly increases the critical error rate. Increasing additionally the dimensionality of the Hilbert space leads to a further increase in the critical error rate
Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah
2016-07-01
Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffmann's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than to sequence quality, and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets, respectively, when coverage was increased from ≥5 to ≥30 at quality score ≥30. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignment, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates should be considered prior to building sequencing libraries, that reporting genotyping error rates should become standard practice, and that the effects of genotyping errors on inference should be evaluated in restriction-enzyme-based SNP studies. PMID:26946083
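The dyad-based estimation described above can be illustrated with a minimal sketch (an illustrative toy, not the authors' ddRAD pipeline): a genotyping error is inferred whenever an offspring shares no allele with its known mother at a typed locus.

```python
# Illustrative sketch (not the authors' pipeline): estimate a per-locus
# genotyping error rate from Mendelian incompatibilities in mother-offspring
# dyads. Genotypes are allele pairs; a dyad is incompatible at a locus when
# the offspring carries neither maternal allele.

def is_incompatible(mother, offspring):
    """True if the offspring shares no allele with the mother."""
    return not (set(mother) & set(offspring))

def error_rate(dyads):
    """Fraction of typed loci showing a Mendelian incompatibility.

    dyads: list of (mother_loci, offspring_loci), where each element is a
    list of (allele, allele) tuples, one per locus, or None for missing data.
    """
    incompatible = typed = 0
    for mother_loci, offspring_loci in dyads:
        for m, o in zip(mother_loci, offspring_loci):
            if m is None or o is None:
                continue  # skip loci with missing genotypes
            typed += 1
            incompatible += is_incompatible(m, o)
    return incompatible / typed if typed else float("nan")

# Toy example: 3 loci, one incompatibility (locus 3: mother AA vs offspring BB)
dyads = [(
    [("A", "A"), ("A", "B"), ("A", "A")],
    [("A", "B"), ("B", "B"), ("B", "B")],
)]
print(round(error_rate(dyads), 3))  # 1 of 3 typed loci incompatible -> 0.333
```

In practice the incompatibility count underestimates the raw error rate (some errors remain Mendelian-consistent), which is why studies of this kind apply a correction factor; the sketch only shows the counting step.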
International Nuclear Information System (INIS)
Highlights: • We assess the requirements for a full-scale experimental validation of the THERP HRA method. • Two estimators are introduced to reduce the number of opportunities for error that must be observed. • We test these estimators with computer-generated data. • We conduct a pilot experiment in a full-scope, digital nuclear power plant simulator. • A powerful, partial-scope validation of the THERP method could be completed in 40 h of observing operators. - Abstract: Science-based Human Reliability Analysis (HRA) seeks to experimentally validate HRA methods in simulator studies. Emphasis is on validating the internal components of the HRA method, rather than the validity and consistency of the final results of the method. In this paper, we assess the requirements for a simulator study validation of the Technique for Human Error Rate Prediction (THERP), a foundational HRA method. The aspects requiring validation include the tables of Human Error Probabilities (HEPs), the treatment of stress, and the treatment of dependence between tasks. We estimate the sample size, n, required to obtain statistically significant error rates for validating HEP values, and the number of observations, m, that constitute one observed error rate for each HEP value. We develop two methods for estimating the mean error rate using few observations. The first method uses the median error rate, and the second method is a Bayesian estimator of the error rate based on the observed errors and the number of observations. Both methods are tested using computer-generated data. We also conduct a pilot experiment in The Ohio State University’s Nuclear Power Plant Simulator Facility. Student operators perform a maintenance task in a BWR simulator. Errors are recorded, and error rates are compared to the THERP-predicted error rates. While the observed error rates are generally consistent with the THERP HEPs, further study is needed to provide confidence in these results as the pilot
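The abstract describes the Bayesian estimator only at a high level. A conventional conjugate sketch (an assumption on my part, not necessarily the estimator used in the paper) treats the error probability as Beta-distributed, which gives a usable estimate from very few observations:

```python
# Hedged sketch: a Beta-Binomial estimate of a human error probability from
# few observations. A standard conjugate choice (an assumption here, not a
# reproduction of the paper's estimator) is a Beta(a, b) prior on the error
# rate p; after observing `errors` failures in n trials, the posterior mean
# is (a + errors) / (a + b + n), which remains well-behaved even for n = 0.

def posterior_mean(errors, n, a=0.5, b=0.5):
    """Posterior mean of error rate p under a Beta(a, b) prior (Jeffreys by default)."""
    return (a + errors) / (a + b + n)

# With 1 observed error in 20 task performances:
print(round(posterior_mean(1, 20), 4))  # 1.5 / 21 -> 0.0714
```

The appeal for HRA validation is that the estimate shrinks toward the prior when data are scarce, instead of returning 0 for an HEP merely because no error happened to be observed.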
Study of the Switching Errors in an RSFQ Switch by Using a Computerized Test Setup
Energy Technology Data Exchange (ETDEWEB)
Kim, Se Hoon; Baek, Seung Hun; Yang, Jung Kuk; Kim, Jun Ho; Kang, Joon Hee [Incheon Univesity, Incheon (Korea, Republic of)
2005-10-15
The problem of fluctuation-induced digital errors in a rapid single flux quantum (RSFQ) circuit has been a very important issue. In this work, we calculated the bit error rate of an RSFQ switch used in a superconductive arithmetic logic unit (ALU). An RSFQ switch should have a very low error rate at the optimal bias. Theoretical estimates of the RSFQ error rate are on the order of 10^-50 per bit operation. In this experiment, we prepared two identical circuits placed in parallel. Each circuit was composed of 10 Josephson transmission lines (JTLs) connected in series, with an RSFQ switch placed in the middle of the 10 JTLs. We used a splitter to feed the same input signal to both circuits. The outputs of the two circuits were compared with an RSFQ exclusive OR (XOR) to measure the bit error rate of the RSFQ switch. By using a computerized bit-error-rate test setup, we measured a bit error rate of 2.18 x 10^-12 when the bias to the RSFQ switch was 0.398 mA, which is quite far from the optimum bias of 0.6 mA.
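The measurement principle (two nominally identical circuits fed the same input, with an XOR flagging disagreements) can be mimicked with a toy Monte Carlo. The error probability below is arbitrary and far above the RSFQ regime, purely so disagreements are observable in a short run:

```python
# Toy Monte Carlo of the XOR-comparison measurement principle (not a circuit
# simulation): the same bit stream drives two nominally identical switches,
# each of which flips a bit with small probability p_err; an XOR of the two
# outputs flags every disagreement, so for independent rare errors the XOR
# rate is approximately 2 * p_err.

import random

def run_bert(n_bits, p_err, seed=1):
    rng = random.Random(seed)
    disagreements = 0
    for _ in range(n_bits):
        bit = rng.getrandbits(1)
        out_a = bit ^ (rng.random() < p_err)  # switch A, occasional flip
        out_b = bit ^ (rng.random() < p_err)  # switch B, occasional flip
        disagreements += out_a ^ out_b        # XOR comparator
    return disagreements / n_bits

rate = run_bert(200_000, p_err=1e-3)
print(f"{rate:.2e}")  # close to 2e-3 for independent errors
```

The XOR trick also shows why simultaneous errors in both circuits go uncounted: the comparator only sees disagreements, which is acceptable when errors are rare and independent.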
Ahmed, Qasim Zeeshan
2015-02-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum, so particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is within 2 dB of the ML detector. A significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much lower than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with the number of relays.
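A generic PSO sketch (illustrative parameters, not the paper's detector design) applied to a toy surface with several minima, standing in for the non-linear SER surface:

```python
# Minimal particle swarm optimization (PSO) sketch. The objective below is a
# toy multi-minimum surface standing in for the SER surface; all parameter
# values (swarm size, inertia, acceleration coefficients) are illustrative
# assumptions, not those of the paper.

import math
import random

def pso(objective, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # per-particle best
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Double-well surface: minima of value 0 at every corner (+-1, +-1).
f = lambda x: sum((xi * xi - 1.0) ** 2 for xi in x)
best, val = pso(f, dim=2)
print([round(b, 2) for b in best], val < 1e-2)
```

The same loop applies to the SER problem by swapping `f` for an SER evaluation over candidate detector coefficients; DE differs only in how candidate positions are perturbed.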
Curve fitting and error modeling for the digitization process near the Nyquist rate
Energy Technology Data Exchange (ETDEWEB)
Baumgart, C.W. (EG and G Energy Measurements, Inc., Los Alamos, NM (USA). Los Alamos Operations); Moses, J.D.; Dunham, M.E. (Los Alamos National Lab., NM (USA))
1990-01-01
The Shannon/Nyquist sampling theorems were derived for time-quantized signals which did not include simultaneous amplitude quantization. In addition, underlying assumptions on which these theorems were based are violated in typical use. Therefore, actual practice in data acquisition has been two to three times oversampling of signal bandwidth to conserve accuracy. We report a numerical investigation of digitization process accuracy versus sample rate, sample amplitude resolution, and record length. This investigation is based on the use of curve fitting and Monte Carlo techniques to reconstruct original analog test signals from their ideally digitized representations. Fit sensitivity with respect to each digitization variable is derived from the Monte Carlo analysis. We find that although no specific Nyquist limit exists for a known wave shape, parameter errors vary continuously with respect to the aforementioned variables, and critical sample densities of two to four sample periods per risetime are seen. Plots of curve-fitted parameter error versus fundamental digitization variables are useful in specifying experimental tasks and indicate new directions for reconstruction algorithm development. 15 refs., 18 figs.
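The reconstruction idea can be sketched numerically (a toy with illustrative values, not the authors' experiment): least-squares fitting of a known-frequency sinusoid to amplitude-quantized samples, so parameter error can be tracked against sample count and quantizer resolution.

```python
# Toy experiment in the spirit of the study: recover the parameters of a
# known wave shape (a sinusoid of known frequency) from amplitude-quantized
# samples by least squares, and watch the parameter error vary with sample
# density. All values are illustrative assumptions.

import math

def fit_sine(ts, ys, w):
    """Least squares for y = A sin(wt) + B cos(wt): solve the 2x2 normal equations."""
    s_ss = sum(math.sin(w * t) ** 2 for t in ts)
    s_cc = sum(math.cos(w * t) ** 2 for t in ts)
    s_sc = sum(math.sin(w * t) * math.cos(w * t) for t in ts)
    s_sy = sum(math.sin(w * t) * y for t, y in zip(ts, ys))
    s_cy = sum(math.cos(w * t) * y for t, y in zip(ts, ys))
    det = s_ss * s_cc - s_sc * s_sc
    return (s_sy * s_cc - s_cy * s_sc) / det, (s_cy * s_ss - s_sy * s_sc) / det

def quantize(y, bits, full_scale=2.0):
    """Ideal uniform amplitude quantizer over [-full_scale/2, full_scale/2]."""
    step = full_scale / (2 ** bits)
    return round(y / step) * step

w, A, B = 2 * math.pi, 0.8, 0.3
for n in (8, 32):                      # samples over one period
    ts = [i / n for i in range(n)]
    ys = [quantize(A * math.sin(w * t) + B * math.cos(w * t), bits=8) for t in ts]
    a, b = fit_sine(ts, ys, w)
    print(n, round(abs(a - A) + abs(b - B), 5))
```

As in the study, there is no hard failure at any particular sample count for a known shape; the fitted-parameter error simply varies continuously with sample density and amplitude resolution.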
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2010-10-01
In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that the performance simulation results coincide with our analytical results. ©2010 IEEE.
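As a point of reference (a textbook baseline, not the paper's closed-form expression), the non-cooperative BPSK-over-AWGN BER and a quick Monte Carlo check:

```python
# Textbook baseline, not the paper's end-to-end result: for BPSK over AWGN
# the bit error rate is Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0)). A short
# Monte Carlo with a sign detector agrees with the closed form.

import math
import random

def ber_theory(ebn0):
    """BPSK/AWGN BER; ebn0 is Eb/N0 as a linear ratio."""
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_sim(ebn0, n_bits=200_000, seed=3):
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))  # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        s = 1.0 if rng.getrandbits(1) else -1.0   # BPSK symbol
        r = s + rng.gauss(0.0, sigma)             # AWGN channel
        errors += (r > 0) != (s > 0)              # sign detector
    return errors / n_bits

ebn0 = 10 ** (4 / 10)                     # 4 dB
print(round(ber_theory(ebn0), 4), round(ber_sim(ebn0), 4))
```

The paper's contribution is the analogous closed form for the two-hop incremental scheme, including relay detection errors; the single-hop formula above is the building block such analyses start from.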
International Nuclear Information System (INIS)
The security plans of nuclear plants generally require that all personnel who are to have unescorted access to protected areas or vital islands be screened for emotional instability. Screening typically consists of first administering the MMPI and then conducting a clinical interview. Interview-by-exception protocols provide for interviewing only those employees who show some indication of psychopathology in their MMPI results. A problem arises when the indications are not readily apparent: false negatives are likely to occur, resulting in employees being erroneously granted unescorted access. The present paper describes the development of a predictive equation which permits accurate identification, via analysis of MMPI results, of those employees who are most in need of being interviewed. The predictive equation also permits knowing the probable maximum false-negative error rate when a given percentage of employees is interviewed.
Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates
International Nuclear Information System (INIS)
The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition.
Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying
Fareed, Muhammad Mehboob
2014-06-01
In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.
A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems
Directory of Open Access Journals (Sweden)
Shi-Wei Dong
2007-01-01
Full Text Available A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistage bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
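For contrast with the multistage scheme, the conventional greedy bit-loading baseline mentioned above can be sketched as follows (the SNR-gap value and subchannel gains are illustrative assumptions):

```python
# Baseline greedy (Hughes-Hartogs-style) bit loading, the conventional
# algorithm the proposed scheme is compared against: repeatedly add one bit
# to the subcarrier needing the least incremental energy until the target
# rate is met. The incremental-energy model is the usual SNR-gap
# approximation; gamma and the gains below are illustrative assumptions.

def greedy_bit_loading(gains, target_bits, max_bits=15, gamma=9.8):
    """gains: subchannel SNR gains; returns per-carrier bit counts."""
    bits = [0] * len(gains)

    def delta_e(k):  # extra energy needed to load one more bit on carrier k
        return gamma * (2 ** (bits[k] + 1) - 2 ** bits[k]) / gains[k]

    for _ in range(target_bits):
        k = min((i for i in range(len(gains)) if bits[i] < max_bits), key=delta_e)
        bits[k] += 1
    return bits

gains = [9.0, 4.0, 1.0, 0.25]            # illustrative subchannel gains
alloc = greedy_bit_loading(gains, target_bits=8)
print(alloc, sum(alloc))
```

The greedy loop is optimal per bit but costs one pass over all carriers per loaded bit; the paper's multistage and parallel loading avoids most of that per-bit scanning.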
Institute of Scientific and Technical Information of China (English)
LU, Zudi
2001-01-01
Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error
Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju
2009-01-01
Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…
Metal sealed cone bits reduce costs in abrasive drilling
International Nuclear Information System (INIS)
This paper reports on metal sealed rolling cone bits, which have cut drilling costs by increasing the footage drilled per bit and by increasing the penetration rate in several wells in South America. The metal seals double the bearing life compared to conventional elastomer sealed bits, thereby allowing the bit to stay on bottom longer. In Colombia, an operator required that only one bit be used to drill an entire section of hard, abrasive sandstone. In Venezuela, metal sealed bits were used to lower drilling costs in both relatively moderate and aggressive drilling conditions.
Changes realized from extended bit-depth and metal artifact reduction in CT
Energy Technology Data Exchange (ETDEWEB)
Glide-Hurst, C.; Chen, D.; Zhong, H.; Chetty, I. J. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan 48202 (United States)
2013-06-15
Purpose: High-Z material in computed tomography (CT) yields metal artifacts that degrade image quality and may cause substantial errors in dose calculation. This study couples a metal artifact reduction (MAR) algorithm with enhanced 16-bit depth (vs standard 12-bit) to quantify potential gains in image quality and dosimetry. Methods: Extended CT to electron density (CT-ED) curves were derived from a tissue characterization phantom with titanium and stainless steel inserts scanned at 90-140 kVp for 12- and 16-bit reconstructions. MAR was applied to sinogram data (Brilliance BigBore CT scanner, Philips Healthcare, v.3.5). Monte Carlo simulation (MC-SIM) was performed on a simulated double hip prostheses case (Cerrobend rods embedded in a pelvic phantom) using BEAMnrc/Dosxyz (4 000 000 000 histories, 6X, 10 × 10 cm² beam traversing the Cerrobend rod). A phantom study was also conducted using a stainless steel rod embedded in solid water, and dosimetric verification was performed with Gafchromic film analysis (absolute difference and gamma analysis, 2% dose and 2 mm distance to agreement) for plans calculated with the Anisotropic Analytic Algorithm (AAA, Eclipse v11.0) to elucidate changes between 12- and 16-bit data. Three patients (bony metastases to the femur and humerus, and a prostate cancer case) with metal implants were reconstructed using both bit depths, with dose calculated using AAA and the derived CT-ED curves. Planar dose distributions were assessed via matrix analyses and using gamma criteria of 2%/2 mm. Results: For 12-bit images, CT numbers for titanium and stainless steel saturated at 3071 Hounsfield units (HU), whereas for 16-bit depth, mean CT numbers were much larger (e.g., titanium and stainless steel yielded HU of 8066.5 ± 56.6 and 13 588.5 ± 198.8 for 16-bit uncorrected scans at 120 kVp, respectively). MC-SIM was well-matched between 12- and 16-bit images except downstream of the Cerrobend rod, where the 16-bit dose was ≈6
Directory of Open Access Journals (Sweden)
Fatemeh Vizeshfar
2015-06-01
Full Text Available Medication errors have serious consequences for patients, their families and caregivers. Reducing these errors by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and medical wards. This cross-sectional analytic study was conducted on 101 registered nurses with drug administration duties in medical pediatric and adult wards. Data were collected by a questionnaire covering demographic information, self-reported errors and the etiology of medication errors, together with researcher observations. The results showed that the nurses' error rate was 51.6% in pediatric wards and 47.4% in adult wards. The most common errors in adult wards were administering drugs later or sooner than scheduled (48.6%), while administering drugs without a prescription and administering the wrong drug were the most common medication errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain the reason for and type of drug they were administering to patients. Independent t-tests showed a significant difference in observed errors in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have reported medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.
Friesen, M; Eriksson, M A; Friesen, Mark; Joynt, Robert
2002-01-01
Quantum computers are analog devices; thus they are highly susceptible to accumulative errors arising from classical control electronics. Fast operation--as necessitated by decoherence--makes gating errors very likely. In most current designs for scalable quantum computers it is not possible to satisfy both the requirements of low decoherence errors and low gating errors. Here we introduce a hardware-based technique for pseudo-digital gate operation. We perform self-consistent simulations of semiconductor quantum dots, finding that pseudo-digital techniques reduce operational error rates by more than two orders of magnitude, thus facilitating fast operation.
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Directory of Open Access Journals (Sweden)
Mohammad Rakibul Islam
2011-10-01
Full Text Available Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also, the lower encoding rate for the LDPC code offers better error characteristics.
A hardware Gaussian noise generator using the Box-Muller method and its error analysis
Lee, D.; Villasenor, J D; Luk, W; Leong, P. H. W.
2006-01-01
We present a hardware Gaussian noise generator based on the Box-Muller method that provides highly accurate noise samples. The noise generator can be used as a key component in a hardware-based simulation system, such as for exploring channel code behavior at very low bit error rates, as low as 10^-12 to 10^-13. The main novelties of this work are an accurate analytical error analysis and bit-width optimization for the elementary functions involved in the Box-Muller method. Two 16-bit noise sa...
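A plain software reference model of the Box-Muller transform helps make the paper's subject concrete (double-precision only; the paper's fixed-point error analysis and bit-width optimization are not reproduced here):

```python
# Software reference model of the Box-Muller transform: two independent
# uniform samples on (0, 1] map to two independent standard Gaussian
# samples. The hardware design approximates log, sqrt, and the
# trigonometric functions in fixed point; this floating-point model is the
# golden reference such designs are checked against.

import math
import random

def box_muller(rng):
    u1 = 1.0 - rng.random()               # in (0, 1], avoids log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

rng = random.Random(42)
samples = [x for _ in range(50_000) for x in box_muller(rng)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(mean, 2), round(var, 2))      # near 0 and 1
```

For BER exploration at 10^-12, the critical property is accuracy in the distribution tails, which is exactly where coarse fixed-point approximations of `log` and `sqrt` fail first; hence the paper's emphasis on per-function error analysis.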
Four-Dimensional Coded Modulation with Bit-wise Decoders for Future Optical Communications
Alvarado, Alex
2014-01-01
Coded modulation (CM) is the combination of forward error correction (FEC) and multilevel constellations. Coherent optical communication systems result in a four-dimensional (4D) signal space, which naturally leads to 4D-CM transceivers. A practically attractive design paradigm is to use a bit-wise decoder, where the detection process is (suboptimally) separated into two steps: soft-decision demapping followed by binary decoding. In this paper, bit-wise decoders are studied from an information-theoretic viewpoint. 4D constellations with up to 4096 constellation points are considered. Metrics to predict the post-FEC bit-error rate (BER) of bit-wise decoders are analyzed. The mutual information is shown to fail at predicting the post-FEC BER of bit-wise decoders and the so-called generalized mutual information is shown to be a much more robust metric. It is also shown that constellations that transmit and receive information in each polarization and quadrature independently (e.g., PM-QPSK, PM-16QAM, and PM-64QA...
Burton A. Abrams; Siyan Wang
2006-01-01
In this paper, we investigate the relationship between government size and the unemployment rate using a structural error correction model that describes both the short-run dynamics and long-run determination of the unemployment rate. Using data from twenty OECD countries from 1970 to 1999, we find that government size, measured as total government outlays as a percentage of GDP, plays a significant role in affecting the steady-state unemployment rate. We disaggregate government outlays and f...
Westbrook, Johanna I.; Baysari, Melissa T.; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O.
2013-01-01
Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism ...
Bias and spread in extreme value theory measurements of probability of error
Smith, J. G.
1972-01-01
Extreme value theory is examined to explain the cause of the bias and spread in performance of communications systems characterized by low bit rates and high data reliability requirements, for cases in which underlying noise is Gaussian or perturbed Gaussian. Experimental verification is presented and procedures that minimize these effects are suggested. Even under these conditions, however, extreme value theory test results are not particularly more significant than bit error rate tests.
Bit and Power Loading Approach for Broadband Multi-Antenna OFDM System
DEFF Research Database (Denmark)
Rahman, Muhammad Imadur; Das, Suvra S.; Wang, Yuanye;
2007-01-01
cannot find the exact Signal to Noise Ratio (SNR) thresholds due to different reasons, such as reduced Link Adaptation (LA) rate, Channel State Information (CSI) error, feedback delay etc., it is better to fix the transmit power across all sub-channels to guarantee the target Frame Error Rate (FER......). Otherwise, it is possible to use adaptive power distribution to save power, which can be used for other purposes, or to increase the throughput of the system by transmitting higher number of bits. We also observed that in some scenarios and in some system conditions, some form of simultaneous bit and power...... allocations across OFDM sub-channels are required together for efficient exploitation of wireless channel....
A Coded Bit-Loading Linear Precoded Discrete Multitone Solution for Power Line Communication
Muhammad, Fahad Syed; Hélard, Jean-François; Crussière, Matthieu
2008-01-01
Linear precoded discrete multitone modulation (LP-DMT) systems have already been proved advantageous with adaptive resource allocation algorithms in a power line communication (PLC) context. In this paper, we investigate the bit and energy allocation algorithm of an adaptive LP-DMT system taking into account the channel coding scheme. A coded adaptive LP-DMT system is presented in the PLC context with a loading algorithm which accommodates the channel coding gains in the bit and energy calculations. The performance of a concatenated channel coding scheme, consisting of an inner Wei's 4-dimensional 16-state trellis code and an outer Reed-Solomon code, in combination with the proposed algorithm is analyzed. Simulation results are presented for a fixed target bit error rate in a multicarrier scenario under a power spectral density constraint. Using a multipath model of the PLC channel, it is shown that the proposed coded adaptive LP-DMT system performs better than classical coded discrete multitone.
High performance 14-bit pipelined redundant signed digit ADC
Narula, Swina; Pandey, Sujata
2016-03-01
A novel architecture of a pipelined redundant-signed-digit analog to digital converter (RSD-ADC) is presented, featuring a high signal to noise ratio (SNR), spurious free dynamic range (SFDR) and signal to noise plus distortion ratio (SNDR) with efficient background correction logic. The proposed ADC architecture shows high accuracy with a high speed circuit and efficient utilization of the hardware. This paper demonstrates the functionality of the digital correction logic of a 14-bit pipelined ADC with 1.5 bits per stage. This prototype ADC architecture accounts for capacitor mismatch, comparator offset and finite op-amp gain error in the MDAC (residue amplification) stages. With the proposed architecture, the SNDR obtained is 85.89 dB, the SNR is 85.9 dB and the SFDR is 102.8 dB at a sample rate of 100 MHz. The digital correction logic is transparent to the overall system, which is demonstrated on the 14-bit pipelined ADC. After a latency of 14 clocks, the digital output is available at every clock pulse. VHDL and MATLAB programs are used to describe the circuit behavior of the ADC. The proposed architecture is also capable of reducing the digital hardware, and hence the silicon area and complexity of the design.
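The 1.5-bit-per-stage redundancy and digital correction can be modeled behaviorally (an illustrative ideal model; the paper's RSD calibration for capacitor mismatch and finite op-amp gain is not reproduced): comparator offsets up to a quarter of the reference are absorbed, leaving the corrected output accurate to within an LSB.

```python
# Behavioral sketch of 1.5-bit/stage pipelining with digital error
# correction (illustrative ideal model, not the paper's circuit): each stage
# resolves a redundant digit in {-1, 0, 1} with deliberately loose
# comparator thresholds at +-1/4, doubles the residue, and the overlapping
# digits are recombined by shift-and-add. Comparator offsets with
# |offset| < 0.25 keep the residue inside [-1, 1], so they do not corrupt
# the final code.

def stage(v, offset=0.0):
    """One 1.5-bit stage: digit decision (with comparator offset) and residue."""
    if v > 0.25 + offset:
        d = 1
    elif v < -0.25 + offset:
        d = -1
    else:
        d = 0
    return d, 2.0 * v - d            # residue stays in [-1, 1] if |offset| < 0.25

def convert(v, n_stages=14, offset=0.05):
    """Convert v in [-1, 1] through n_stages, then apply digital correction."""
    digits = []
    for _ in range(n_stages):
        d, v = stage(v, offset)
        digits.append(d)
    code = 0
    for d in digits:                  # digital correction: overlap-and-add
        code = 2 * code + d
    return code / 2 ** n_stages       # back to the [-1, 1) input scale

for vin in (-0.6, 0.1, 0.73):
    print(vin, round(convert(vin), 4))
```

The same shift-and-add recombination is what the paper's correction logic performs in hardware; the redundancy is what makes it insensitive to threshold errors, while mismatch and gain errors additionally require the RSD calibration the paper describes.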
Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J.
2014-01-01
Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural ‘mutation’, can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such ‘time constraints’ affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D ‘target handaxe form’ using a standardized foam block and a plastic knife. Three distinct ‘time conditions’ were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific ‘threshold’ might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a ‘threshold’ effect, below which mutation rates increase more markedly. Our results also suggest that ‘time budgets’ available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, ‘time-budgeting’ factors need to be given greater consideration in evolutionary models of material culture change. PMID:24809848
Directory of Open Access Journals (Sweden)
VINOTH BABU K.
2016-04-01
Multi-input multi-output (MIMO) and orthogonal frequency division multiplexing (OFDM) are key techniques for future wireless communication systems. Previous research in these areas has concentrated mainly on spectral efficiency improvement, with very limited work on energy-efficient transmission. Alongside spectral efficiency, energy efficiency has become an important research topic because of the slow progress of battery technology. Since most user equipment (UE) relies on batteries, the energy required to transmit the target bits should be minimized to avoid quick battery drain. The frequency-selective fading nature of the wireless channel reduces the spectral and energy efficiency of OFDM-based systems. Dynamic bit loading (DBL) is one suitable solution to improve the spectral and energy efficiency of OFDM systems in frequency-selective fading environments. The simple dynamic bit loading (SDBL) algorithm is identified to offer better energy efficiency with less system complexity; it is well suited for fixed-data-rate voice/video applications. When the number of target bits is much larger than the number of available subcarriers, the conventional single-input single-output (SISO) SDBL scheme suffers a high bit error rate (BER) and needs large transmit energy. To improve bit error performance, we combine space-frequency block codes (SFBC) with SDBL, where the adaptations are done in both the frequency and spatial domains. To further improve quality of service (QoS), an optimal transmit antenna selection (OTAS) scheme is also combined with the SFBC-SDBL scheme. The simulation results show that the proposed schemes offer better QoS than the conventional SISO-SDBL scheme.
Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L.; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L.; Guerin, Bastien
2016-01-01
Purpose: A new framework for the design of parallel transmit (pTx) pulses is presented, introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. Methods: The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Results: Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled worst-case SAR in the presence of errors of this magnitude, at minor cost to the excitation profile quality. Conclusion: Our worst-case-SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. PMID:26147916
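The iterative re-design loop described in the Methods (design, evaluate worst-case SAR, tighten the constraint, repeat) can be sketched as follows. The callables `design_pulse` and `worst_case_sar` are placeholders standing in for the actual pulse-design and error-propagation computations, which the abstract does not specify.

```python
def design_with_worst_case_sar(design_pulse, worst_case_sar, sar_limit,
                               max_iter=20):
    """Iteratively tighten the SAR design constraint until the worst-case
    SAR (pulse SAR inflated by RF-chain errors) meets the safety limit.

    design_pulse(limit)   -> (pulse, nominal_sar)  [placeholder solver]
    worst_case_sar(pulse) -> SAR under worst-case transmit errors
    """
    limit = sar_limit
    for _ in range(max_iter):
        pulse, nominal = design_pulse(limit)
        worst = worst_case_sar(pulse)
        if worst <= sar_limit:
            return pulse, worst
        # scale the design constraint down by the overshoot factor
        limit *= sar_limit / worst
    return pulse, worst
```

With a toy model in which the worst-case SAR is a fixed 20% inflation of the nominal SAR, the loop converges in a couple of iterations to a design whose worst-case SAR sits at the safety limit.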
Pech, Ponia; Robert, Marie; DUVERDIER, Alban; Bousquet, Michel
2010-01-01
This chapter has presented the architecture of a multibeam, bent-pipe satellite system used to rapidly establish a low bit rate link for emergency communications in Ku/Ka and Q/V bands. The characteristics of the proposed system have been described. An enhanced DVB-S2-like air interface has been proposed, involving the DS-SS technique and other adaptive mechanisms such as power control and site diversity. Link budget analyses have shown that even though SS may be deactivated, very low transmit powe...
Bit Loading Algorithms for Cooperative OFDM Systems
Directory of Open Access Journals (Sweden)
Gui Bo
2008-01-01
We investigate the resource allocation problem for an OFDM cooperative network with a single source-destination pair and multiple relays. Assuming knowledge of the instantaneous channel gains for all links in the entire network, we propose several bit and power allocation schemes aiming at minimizing the total transmission power under a target rate constraint. First, an optimal and efficient bit loading algorithm is proposed when the relay node uses the same subchannel to relay the information transmitted by the source node. To further improve the performance gain, subchannel permutation, in which the subchannels are reallocated at relay nodes, is considered. An optimal subchannel permutation algorithm is first proposed and then an efficient suboptimal algorithm is considered to achieve a better complexity-performance tradeoff. A distributed bit loading algorithm is also proposed for ad hoc networks. Simulation results show that significant performance gains can be achieved by the proposed bit loading algorithms, especially when subchannel permutation is employed.
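A minimal greedy bit loading loop of the kind this literature builds on (Hughes-Hartogs style) can be sketched as follows. The power model `(2**b - 1) / gain` is a textbook QAM gap approximation and is an assumption for illustration, not the allocation rule of the paper.

```python
def greedy_bit_loading(gains, target_bits):
    """Hughes-Hartogs-style greedy allocation: at each step, add one bit
    to the subchannel whose incremental power cost is smallest.

    Power to carry b bits on a subchannel with gain g is modeled as
    (2**b - 1) / g, so the cost of the next bit on that subchannel
    is 2**b / g.
    """
    bits = [0] * len(gains)
    total_power = 0.0
    for _ in range(target_bits):
        # incremental cost of one more bit on each subchannel
        costs = [(2 ** bits[i]) / gains[i] for i in range(len(gains))]
        k = min(range(len(gains)), key=costs.__getitem__)
        total_power += costs[k]
        bits[k] += 1
    return bits, total_power
```

This greedy loop is optimal for the modeled convex power cost; the paper's algorithms additionally handle the relay links and subchannel permutation, which the sketch omits.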
Soury, Hamza
2014-06-01
This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed-form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. Further simplifications to well-known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with numerical examples obtained by computer-based simulations. © 2014 IEEE.
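The computer-based simulations used for validation can be reproduced in outline with a Monte Carlo sketch like the one below. Laplacian noise is applied per signal component and the detector is minimum-distance; the fading and the closed-form Fox H-function expression from the paper are not modeled, so this is only an illustrative AWLN (additive white Laplacian noise) baseline.

```python
import math
import random

def mpsk_ser_laplacian(M, scale, n_sym=20000, seed=1):
    """Monte Carlo symbol error rate of M-PSK with i.i.d. Laplacian
    noise on each signal component and a minimum-distance detector.
    `scale` is the Laplacian scale parameter b (variance 2*b**2/axis)."""
    rng = random.Random(seed)

    def lap():  # inverse-CDF sampling of a Laplace(0, b) variate
        u = rng.random() - 0.5
        return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

    errors = 0
    for _ in range(n_sym):
        k = rng.randrange(M)                      # transmitted symbol
        phi = 2 * math.pi * k / M
        x = math.cos(phi) + lap()
        y = math.sin(phi) + lap()
        # minimum distance for equal-energy PSK = nearest phase
        k_hat = round(math.atan2(y, x) / (2 * math.pi / M)) % M
        errors += (k_hat != k)
    return errors / n_sym
```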
Energy Technology Data Exchange (ETDEWEB)
Wojtas, H
2004-07-01
The main source of errors in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of the corrosion rate may lead to serious errors when test conditions change. When high corrosion activity of the rebar and/or local corrosion occurs, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.
Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models
DEFF Research Database (Denmark)
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl;
2015-01-01
We propose a new estimator, the thresholded scaled Lasso, in high dimensional threshold regressions. First, we establish an upper bound on the ℓ∞ estimation error of the scaled Lasso estimator of Lee et al. (2015). This is a non-trivial task as the literature on high-dimensional models has focuse...
Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models
DEFF Research Database (Denmark)
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl; Riquelme, Juan Andres
We propose a new estimator, the thresholded scaled Lasso, in high-dimensional threshold regressions. First, we establish an upper bound on the sup-norm estimation error of the scaled Lasso estimator of Lee et al. (2012). This is a non-trivial task as the literature on high-dimensional models has f...... private) and GDP growth....
Drill bits technology - introduction of the new kymera hybrid bit
Nguyen, Don Tuan
2012-01-01
The early concepts of hybrid bits date back to the 1930’s but have only been a viable drilling tool with recent polycrystalline diamond compact technology. Improvements in drilling performance around the world continue to focus on stability and efficiency in key applications. This thesis briefly describes a new generation of hybrid bits that are based on PDC bit design combined with roller cones. Bit related failure is a common problem in today’s drilling environment, leading to inefficien...
Williams, J M
1999-01-01
In the particle in the box problem, the particle is not in both boxes at the same time as some would have you believe. It is a set definition situation with the two boxes being part of a set that also contains a particle. Set and subset differences are explored. Atomic electron orbitals can be mimicked by roulette wheel probability; thus ELECTRONIC ROULETTE. 0 and 00 serve as boundary limits and are on opposite sides of the central core - a point that quantum physics ignores. Considering a stray marble on the floor as part of the roulette wheel menage is taking assumptions a bit too far. Likewise, the attraction between a positive and negative charge at distance does not make the negative charge part of the positive charge's orbital system. This, of course, is contrary to the stance of current quantum physics methodology that carries this orbital association a bit too far.
Demonstration of a Bit-Flip Correction for Enhanced Sensitivity Measurements
Cohen, L; Istrati, D; Retzker, A; Eisenberg, H S
2016-01-01
The sensitivity of classical and quantum sensing is impaired in a noisy environment. Thus, one of the main challenges facing sensing protocols is to reduce the noise while preserving the signal. State of the art quantum sensing protocols that rely on dynamical decoupling achieve this goal under the restriction of long noise correlation times. We implement a proof of principle experiment of a protocol to recover sensitivity by using an error correction for photonic systems that does not have this restriction. The protocol uses a protected entangled qubit to correct a bit-flip error. Our results show a recovery of about 87% of the sensitivity, independent of the noise rate.
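The classical skeleton of the bit-flip code used in such protocols is a three-copy repetition code with majority vote; the sketch below illustrates why any single flip is always corrected (the quantum version replaces the copies with an entangled state and the vote with syndrome measurements). The i.i.d. flip-noise model is an assumption for illustration.

```python
import random

def encode(bit):
    """Repetition encoding b -> (b, b, b): the classical skeleton of
    the 3-qubit bit-flip code."""
    return [bit] * 3

def apply_noise(codeword, p, rng):
    """Flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in codeword]

def correct(codeword):
    """Majority vote: equivalent to measuring the pairwise parities
    (the syndrome) and flipping the singled-out bit."""
    return int(sum(codeword) >= 2)

def logical_error_rate(p, trials=20000, seed=7):
    """Monte Carlo logical error rate; for small p this behaves like
    3*p**2 - 2*p**3, i.e. quadratically suppressed versus p."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        bit = rng.randrange(2)
        out = correct(apply_noise(encode(bit), p, rng))
        fails += (out != bit)
    return fails / trials
```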
DRILL BITS FOR HORIZONTAL WELLS
Paolo Macini
1996-01-01
This paper underlines the importance of correct drill bit application in horizontal wells. After an analysis of the peculiarities of horizontal wells and drainhole drilling techniques, the advantages and disadvantages of applying both roller cone and fixed cutter drill bits are discussed. A review of the potential specific features useful for correct drill bit selection in horizontal small-diameter holes is also highlighted. Drill bits for these special applicatio...
Adaptive Error Resilience for Video Streaming
Directory of Open Access Journals (Sweden)
Lakshmi R. Siruvuri
2009-01-01
Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprised of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
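The feedback-driven adaptation of Reed-Solomon parity can be sketched as a simple control rule. The thresholds, step sizes and function name below are illustrative assumptions, not the server/proxy/client protocol of the paper.

```python
def adapt_parity(current_parity, observed_loss, target_loss,
                 min_parity=2, max_parity=32):
    """Feedback-driven parity adaptation (sketch): the client reports
    its observed post-FEC loss rate and the server doubles or halves
    the Reed-Solomon parity-symbol count accordingly. Thresholds and
    step size are illustrative, not taken from the paper."""
    if observed_loss > target_loss:
        return min(current_parity * 2, max_parity)   # more protection
    if observed_loss < target_loss / 4:
        return max(current_parity // 2, min_parity)  # reclaim bandwidth
    return current_parity
```

The hysteresis band (adapt down only when loss falls well below target) avoids oscillating between parity levels when the channel hovers near the target loss rate.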
Directory of Open Access Journals (Sweden)
Brijesh Kumbhani
2014-01-01
Closed-form expressions for the approximate symbol error rate are obtained using the moment generating function for a two-branch cooperative communication system over generalised κ−μ and η−μ i.i.d. fading channels for BPSK and QAM modulation schemes. The selective decode-and-forward protocol is used at the relay transmitter, and maximal-ratio combining is used at the destination. Monte Carlo simulations are performed to verify the analytical results.
The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded
Hansen, Merete Kjær; Kulahci, Murat
2014-01-01
The Comet assay is a sensitive technique for the detection of DNA strand breaks. The experimental designs of in vivo Comet assay studies are often hierarchically structured, which should be reflected in the statistical analysis. However, the hierarchical structure sometimes seems to be disregarded, and this has considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the facto...
Nakajima Jouchi
2013-01-01
A Bayesian analysis of the stochastic volatility model with regime-switching skewness in heavy-tailed errors is proposed using a generalized hyperbolic (GH) skew Student’s t-distribution. The skewness parameter is allowed to shift according to a first-order Markov switching process. We summarize Bayesian methods for model fitting and discuss analyses of exchange rate return time series. Empirical results show that interpretable regime-switching skewness can improve model fit and Value-at-Risk...
Cameron, Kenneth L.; Peck, Karen Y.; Owens, Brett D.; Svoboda, Steven J.; DiStefano, Lindsay J.; Marshall, Stephen W.; de la Motte, Sarah; Beutler, Anthony I.; Padua, Darin A.
2014-01-01
Objectives: Lower-extremity stress fracture injuries are a major cause of morbidity in physically active populations. The ability to efficiently screen for modifiable risk factors associated with injury is critical in developing and implementing effective injury prevention programs. The purpose of this study was to determine if baseline Landing Error Scoring System (LESS) scores were associated with the incidence rate of lower-extremity stress fracture during four years of follow-up. Methods:...
Error forecasting schemes of error correction at receiver
International Nuclear Information System (INIS)
To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The packet combining (PC) scheme fails (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
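The basic packet combining idea, and the failure case that motivates PRPC/MPC, can be sketched as follows. The CRC-32 check and the bit-list representation are illustrative assumptions; the original scheme is defined over whatever frame check sequence the link uses.

```python
import itertools
import zlib

def packet_combine(copy1, copy2, crc):
    """Chakraborty-style packet combining (sketch): bit positions where
    two received copies disagree are the candidate error locations; try
    every bit assignment over those positions until the CRC matches.

    This fails when both copies err in the SAME position (the position
    never shows up as a disagreement), which is exactly the weakness
    that PRPC and MPC address."""
    diff = [i for i in range(len(copy1)) if copy1[i] != copy2[i]]
    for choice in itertools.product([0, 1], repeat=len(diff)):
        trial = list(copy1)
        for pos, bit in zip(diff, choice):
            trial[pos] = bit
        if zlib.crc32(bytes(trial)) == crc:
            return trial
    return None  # uncorrectable from these two copies
```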
Effects of error feedback on a nonlinear bistable system with stochastic resonance
International Nuclear Information System (INIS)
In this paper, we discuss the effects of error feedback on the output of a nonlinear bistable system with stochastic resonance. The bit error rate is employed to quantify the performance of the system. The theoretical analysis and the numerical simulation are presented. By investigating the performances of the nonlinear systems with different strengths of error feedback, we argue that the presented system may provide guidance for practical nonlinear signal processing
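The kind of numerical simulation described can be sketched with an Euler-Maruyama integration of a double-well system. The decision rule and all parameter values below are illustrative assumptions rather than the paper's model, which additionally includes the error feedback term.

```python
import math
import random

def bistable_ber(amp, noise_sigma, n_bits=200, a=1.0, b=1.0,
                 dt=0.01, steps_per_bit=300, seed=3):
    """Euler-Maruyama simulation of a bistable detector
        dx = (a*x - b*x**3 + amp*s) dt + sigma dW,
    with s = +/-1 the transmitted bit, held constant over each bit
    slot. The decision is the sign of x at the end of the slot; BER
    counts sign mismatches. Parameters are illustrative."""
    rng = random.Random(seed)
    x, errors = 0.0, 0
    for _ in range(n_bits):
        s = rng.choice((-1, 1))
        for _ in range(steps_per_bit):
            drift = a * x - b * x ** 3 + amp * s
            x += drift * dt + noise_sigma * math.sqrt(dt) * rng.gauss(0, 1)
        errors += ((x > 0) != (s > 0))
    return errors / n_bits
```

With a strong signal the forcing tilts the double well enough that the state deterministically follows the bit; with a subthreshold signal and strong noise, noise-induced hopping dominates and the BER rises, which is the regime where stochastic resonance effects are studied.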
Forward error correction in optical ethernet communications
Oliveras Boada, Jordi
2014-01-01
A way of increasing the amount of information sent through an optical fibre is ud-WDM (ultra-dense wavelength division multiplexing). The problem is that the sensitivity of the receiver requires a certain SNR (signal-to-noise ratio) that is only achieved over short distances, so to extend them a coding scheme called FEC (Forward Error Correction) can be used. This should reduce the BER (bit error rate) at the receiver, allowing the signal to be transmitted over longer distances. Another pro...
Tyson, Jon
2009-01-01
We compare several instances of pure-state Belavkin weighted square-root measurements from the standpoint of minimum-error discrimination of quantum states. The quadratically weighted measurement is proven superior to the so-called "pretty good measurement" (PGM) in a number of respects: (1) Holevo's quadratic weighting unconditionally outperforms the PGM in the case of two-state ensembles, with equality only in trivial cases. (2) A converse of a theorem of Holevo is proven, showing that a weighted measurement is asymptotically optimal only if it is quadratically weighted. Counterexamples for three states are constructed. The cube-weighted measurement of Ballester, Wehner, and Winter is also considered. Sufficient optimality conditions for various weights are compared.
The effect of administrative boundaries and geocoding error on cancer rates in California.
Goldberg, Daniel W; Cockburn, Myles G
2012-04-01
Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods. PMID:22469490
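The simplest of the areal-unit interpolation techniques compared in such studies, plain areal weighting, can be sketched as follows. The data layout (case counts per ZIP code and precomputed ZIP-to-county area fractions) is an illustrative assumption.

```python
def areal_weighting(zip_counts, overlap_fraction):
    """Areal-weighting interpolation (sketch): each ZIP code's case
    count is split among counties in proportion to the fraction of the
    ZIP's area falling inside each county.

    overlap_fraction[zip_code][county] holds those area fractions,
    which sum to 1 for each ZIP code, so total cases are conserved."""
    county_counts = {}
    for zip_code, count in zip_counts.items():
        for county, frac in overlap_fraction[zip_code].items():
            county_counts[county] = county_counts.get(county, 0.0) + count * frac
    return county_counts
```

The choice of how those fractions are derived (pure area, population weighting, dasymetric refinement, etc.) is exactly what distinguishes the four methods the paper compares and what drives the county-rate differences it reports.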
Evaluation of Bit Preservation Strategies
DEFF Research Database (Denmark)
Zierau, Eld; Kejser, Ulla Bøgvad; Kulovits, Hannes
2010-01-01
This article describes a methodology which supports evaluation of bit preservation strategies for different digital materials, including evaluation of alternative bit preservation solutions. The methodology presented uses the preservation planning tool Plato for evaluations, and a BR......-ReMS prototype to calculate measures for how well bit preservation requirements are met. Planning storage of data as part of preservation planning involves classification of data with regard to requirements on confidentiality, bit safety, availability and costs. Choice of storage with such parameters is quite...... complex since e.g. more copies of data mean better bit safety, but higher cost and a bigger risk of breaking confidentiality. Based on a case of a bit repository offering varied bit preservation solutions, the article will present results of using the methodology to make plans and choices of alternatives...
Duyck, Dieter; Capirone, Daniele; Moeneclaey, Marc
2012-01-01
Joint network-channel codes (JNCC) can improve the performance of communication in wireless networks, by combining, at the physical layer, the channel codes and the network code as an overall error-correcting code. JNCC is increasingly proposed as an alternative to a standard layered construction, such as the OSI-model. The main performance metrics for JNCCs are scalability to larger networks and error rate. The diversity order is one of the most important parameters determining the error rate. The literature on JNCC is growing, but a rigorous diversity analysis is lacking, mainly because of the many degrees of freedom in wireless networks, which makes it very hard to prove general statements on the diversity order. In this paper, we consider a network with slowly varying fading point-to-point links, where all sources also act as relay and additional non-source relays may be present. We propose a general structure for JNCCs to be applied in such network. In the relay phase, each relay transmits a linear trans...
Improving the Residual Error Rate of the CAN Protocol
Institute of Scientific and Technical Information of China (English)
杨福宇
2011-01-01
Little has been published on the undetected (residual) frame error rate of the CAN protocol, and earlier results were based on software fault injection. Although those simulations were laborious, they sampled only a tiny fraction of the possible error cases, so conclusions drawn from them carry limited weight. This paper gives a method for constructing undetectable error frames. Based on this method, the lower bound obtained for the residual error rate is several orders of magnitude higher than that claimed in the Bosch CAN specification 2.0, which has a direct impact on users. Because CAN is already so widely deployed, there is an urgent need to fix this problem. The paper provides a software patch that radically eliminates the disturbance of the stuffing rule on the CRC check.
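The stuffing rule whose interaction with the CRC is at issue inserts a complementary bit after five identical consecutive bits. A minimal encoder/decoder sketch (illustrative, not the paper's patch) makes the mechanism concrete:

```python
def can_stuff(bits):
    """CAN bit stuffing: after five consecutive identical bits, insert
    one bit of opposite polarity. A corrupted stuff bit can shift the
    frame and defeat the CRC, which is the effect the paper analyses."""
    out, run, prev = [], 0, None
    for b in bits:
        out.append(b)
        run = run + 1 if b == prev else 1
        prev = b
        if run == 5:
            out.append(1 - b)        # stuff bit of opposite polarity
            prev, run = 1 - b, 1
    return out

def can_destuff(bits):
    """Remove stuff bits; returns None on a stuffing violation
    (six equal consecutive bits), which receivers flag as an error."""
    out, run, prev, i = [], 0, None, 0
    while i < len(bits):
        b = bits[i]
        run = run + 1 if b == prev else 1
        prev = b
        out.append(b)
        if run == 5:
            i += 1                    # next bit must be the stuff bit
            if i < len(bits):
                if bits[i] == b:
                    return None       # stuff error detected
                prev, run = bits[i], 1
        i += 1
    return out
```

Because stuffing is applied after the CRC is computed, a bit error that adds or removes a stuff bit shifts all subsequent bits, and the shifted stream can still pass the CRC; this is the mechanism behind the higher-than-claimed residual error rate.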
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
Directory of Open Access Journals (Sweden)
Shilpa Jindal
2013-05-01
We present transmission for five users over a 5 WDM × 4 TDM × 5 CODE channel on a 3D OCDMA system based on Model B using GF(5), with varying receiver attenuation at 1 Gbps, 2 Gbps, 5 Gbps and 10 Gbps data rates in OPTSIM.
8-Bit superconducting A/D converter
International Nuclear Information System (INIS)
The design, fabrication and testing of a superconducting 8-bit converter are presented. Experimental results show essentially monotonic output code at conversion rates of a few megahertz. An algorithm for automatic adjustment and potential problems of higher speed operation are discussed
Directory of Open Access Journals (Sweden)
Bentsen R. G.
2006-12-01
Indirect methods are commonly employed to determine the fundamental flow properties needed to describe flow through porous media. Consequently, if one or more of the postulates underlying the mathematical description of such indirect methods is invalid, significant model error can be introduced into the measured value of the flow property. In particular, this study shows that effective mobility curves that include the effect of viscous coupling between fluid phases differ significantly from those that exclude such coupling. Moreover, it is shown that the conventional effective mobilities that pertain to steady-state, cocurrent flow, steady-state, countercurrent flow and pure countercurrent imbibition differ significantly. Thus, it appears that traditional effective mobilities are not true parameters; rather, they are infinitely nonunique. In addition, it is shown that, while neglect of hydrodynamic forces introduces a small amount of model error into the pressure difference curve for cocurrent flow in unconsolidated porous media, such neglect introduces a large amount of model error into the pressure difference curve for countercurrent flow in such porous media. Moreover, such neglect makes it difficult to explain why the pressure gradients that pertain to steady-state, countercurrent flow are opposite in sign. It is shown also that improper handling of the inlet boundary condition can introduce significant model error into the analysis. This is because, if a short core is used with one of the unsteady-state methods for determining effective mobility, it may take many pore volumes of injection before the inlet saturation rises to its maximal value, which is in contradiction with the usual assumption that the inlet saturation rises immediately to its maximal value. Finally, it is pointed out that, because of differences in flow regime and scale, the effective mobilities measured in the laboratory may not be appropriate for inclusion in the data
Joint adaptive modulation and diversity combining with feedback error compensation
Choi, Seyeong
2009-11-01
This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.
Glaser, Paul H.; Volin, John C.; Givnish, Thomas J.; Hansen, Barbara C. S.; Stricker, Craig A.
2012-09-01
Tropical and subtropical wetlands are considered to be globally important sources of greenhouse gases, but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida in order to assess these problems and determine the factors that could govern carbon accumulation in this large subtropical wetland. Accelerator mass spectroscopy dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.
Glaser, P. H.; Volin, J. C.; Givnish, T. J.; Hansen, B. C.; Stricker, C. A.
2012-12-01
Tropical and sub-tropical wetlands are considered to be globally important sources for greenhouse gases but their capacity to store carbon is presumably limited by warm soil temperatures and high rates of decomposition. Unfortunately, these assumptions can be difficult to test across long timescales because the chronology, cumulative mass, and completeness of a sedimentary profile are often difficult to establish. We therefore made a detailed analysis of a core from the principal drainage outlet of the Everglades of South Florida, to assess these problems and determine the factors that could govern carbon accumulation in this large sub-tropical wetland. AMS-14C dating provided direct evidence for both hard-water and open-system sources of dating errors, whereas cumulative mass varied depending upon the type of method used. Radiocarbon dates of gastropod shells, nevertheless, seemed to provide a reliable chronology for this core once the hard-water error was quantified and subtracted. Long-term accumulation rates were then calculated to be 12.1 g m-2 yr-1 for carbon, which is less than half the average rate reported for northern and tropical peatlands. Moreover, accumulation rates remained slow and relatively steady for both organic and inorganic strata, and the slow rate of sediment accretion (0.2 mm yr-1) tracked the correspondingly slow rise in sea level (0.35 mm yr-1) reported for South Florida over the past 4000 years. These results suggest that sea level and the local geologic setting may impose long-term constraints on rates of sediment and carbon accumulation in the Everglades and other wetlands.
Lau, KN
1999-01-01
We have evaluated the information theoretical performance of variable rate adaptive channel coding for Rayleigh fading channels. The channel states are detected at the receiver and fed back to the transmitter by means of a noiseless feedback link. Based on the channel state information, the transmitter can adjust the channel coding scheme accordingly. A coherent channel and arbitrary channel symbols with a fixed average transmitted power constraint are assumed. The channel capacity and the err...
The prevalence rates of refractive errors among children, adolescents, and adults in Germany
Jobke, Sandra
2008-01-01
Sandra Jobke (1), Erich Kasten (2), Christian Vorwerk (3). (1) Institute of Medical Psychology, (3) Department of Ophthalmology, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany; (2) Institute of Medical Psychology, University Hospital Schleswig-Holstein, Luebeck, Germany. Purpose: The prevalence rates of myopia vary from 5% in Australian Aborigines to 84% in Hong Kong and Taiwan, 30% in Norwegian adults, and 49.5% in Swedish schoolchildren. The aim of this study was to determine the prevalence of ...
On the Error Rate Analysis of Dual-Hop Amplify-and-Forward Relaying in Generalized-K Fading Channels
Efthymoglou, George P.; Bissias, Nikolaos; Aalo, Valentine A.
2010-01-01
We present novel and easy-to-evaluate expressions for the error rate performance of cooperative dual-hop relaying with maximal ratio combining operating over independent generalized-K fading channels. For this system, it is hard to obtain a closed-form expression for the moment generating function (MGF) of the end-to-end signal-to-noise ratio (SNR) at the destination, even for the case of a single dual-hop relay link. Therefore, we employ two different upper bound approximations for the outp...
Energy Technology Data Exchange (ETDEWEB)
Olama, Mohammed M [ORNL]; Matalgah, Mustafa M [ORNL]; Bobrek, Miljko [ORNL]
2015-01-01
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
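The error correction coding mentioned in the abstract above can be illustrated with the standard Hamming(7,4) code, one of the two codes the authors evaluate. This is a generic textbook sketch (the generator/parity-check pair is the classic one, not taken from the paper): a single flipped bit in a 7-bit codeword is located via the syndrome and corrected.

```python
import numpy as np

# Generator and parity-check matrices for the classic Hamming(7,4) code,
# in systematic form G = [I4 | P], H = [P^T | I3].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(nibble):
    # Map 4 data bits to a 7-bit codeword.
    return (np.array(nibble) @ G) % 2

def correct(word):
    # A nonzero syndrome equals the column of H at the flipped position.
    syndrome = (H @ word) % 2
    if syndrome.any():
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                word = word.copy()
                word[i] ^= 1
                break
    return word

data = [1, 0, 1, 1]
code = encode(data)
corrupted = code.copy()
corrupted[2] ^= 1            # a single-bit channel error
recovered = correct(corrupted)
assert np.array_equal(recovered, code)
```

Applying such a code only to the small encrypted portion, as the paper proposes, keeps the added redundancy (3 parity bits per 4 data bits here) confined to a fraction of the frame.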
Energy Technology Data Exchange (ETDEWEB)
Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari [Radiation Research Division, Risoe National Laboratory for Sustainable Energy, Technical University of Denmark, DK-4000 Roskilde (Denmark); Department of Medical Physics, Aarhus University Hospital, DK-8000 Aarhus C (Denmark); Department of Oncology, Aarhus University Hospital, DK-8000 Aarhus C (Denmark); Department of Medical Physics, Aarhus University Hospital, DK-8000 Aarhus C (Denmark)
2009-11-15
Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with 192Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from ±5 to ±15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when
On the feedback error compensation for adaptive modulation and coding scheme
Choi, Seyeong
2011-11-25
In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Zooplankton fed 32P-labeled yeast or 14C-labeled algae were preserved with Formalin, ethanol, or Lugol's iodine and the subsequent loss of labeled materials was followed by analysis of sample filtrates. The commonly used combination of 32P-labeled yeast and Formalin preservation produced maximal loss in both magnitude and duration, reaching a value of 73% loss after 3 days; ethanol preservation resulted in only 5% loss for the same food. Lugol's iodine yielded the best results for animals fed 14C-labeled algae, resulting in a 40% loss that stabilized within 3 h. Nonchemical preservation (heat-killing and drying) produced filtering rates comparable with those of the best chemical preservative
Giga-bit optical data transmission module for Beam Instrumentation
Roedne, L T; Cenkeramaddi, L R; Jiao, L
Particle accelerators require electronic instrumentation for diagnostics, assessment and monitoring during operation of the transferring and circulating beams. A sensor located near the beam provides an electrical signal related to the observable quantity of interest. The front-end electronics provides analog-to-digital conversion of the quantity being observed, and the generated data are to be transferred to the external digital back-end for data processing, display to the operators, and logging. This research project investigates the feasibility of radiation-tolerant giga-bit data transmission over optical fibre for beam instrumentation applications, starting from an assessment of the state-of-the-art technology, identification of challenges, and the proposal of a system-level solution, which should be validated with a PCB design in an experimental setup. The targets are a radiation tolerance of 10 kGy (Si) Total Ionizing Dose (TID) over 10 years of operation and a Bit Error Rate (BER) of 10^-6 or better. The findings and results of th...
Directory of Open Access Journals (Sweden)
Demirhan Erdal
2015-01-01
Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
International Nuclear Information System (INIS)
It is necessary to deduct the response to cosmic rays from the meter reading R when measuring the terrestrial gamma radiation dose rate Dr with an energy-compensated scintillation gamma dose rate meter. For this purpose, two deduction methods, denoted Dr,1 and Dr,2, are usually adopted. The results showed that at Dr = 6.0 x 10^-8 Gy·h^-1, the method errors of the two deduction methods were less than 3.5% and 2.5% for a single measuring point value, and 2.0% and 1.5% for the average value over measuring points distributed uniformly across a region at altitudes from 1000 to 1500 m. The characteristics and applicability of the two methods were also discussed.
Positional Information, in bits
Dubuis, Julien; Bialek, William; Wieschaus, Eric; Gregor, Thomas
2010-03-01
Pattern formation in early embryonic development provides an important testing ground for ideas about the structure and dynamics of genetic regulatory networks. Spatial variations in the concentration of particular transcription factors act as "morphogens," driving more complex patterns of gene expression that in turn define cell fates, which must be appropriate to the physical location of the cells in the embryo. Thus, in these networks, the regulation of gene expression serves to transmit and process "positional information." Here, using the early Drosophila embryo as a model system, we measure the amount of positional information carried by a group of four genes (the gap genes Hunchback, Krüppel, Giant and Knirps) that respond directly to the primary maternal morphogen gradients. We find that the information carried by individual gap genes is much larger than one bit, so that their spatial patterns provide much more than the location of an "expression boundary." Preliminary data indicate that, taken together, these genes provide enough information to specify the location of every row of cells along the embryo's anterior-posterior axis.
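"Positional information" in the sense used above is ordinary mutual information, measured in bits, between a cell's position x and its noisy gene-expression readout g. A toy sketch of the plugin estimate I(x; g) = H(g) - H(g|x) on synthetic data (the sigmoidal profile, noise level, and bin count below are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

# Synthetic "gap gene" readout: a sigmoidal expression boundary plus noise.
rng = np.random.default_rng(0)
n_cells, n_samples = 50, 2000
x = np.repeat(np.arange(n_cells), n_samples)      # uniform positions
g = 1 / (1 + np.exp(-(x - 25) / 3))               # expression boundary at x=25
g = g + rng.normal(0, 0.05, g.size)               # readout noise

# Discretize expression levels into bins for histogram entropy estimates.
bins = np.linspace(g.min(), g.max(), 30)
gi = np.digitize(g, bins)

def entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

H_g = entropy(np.bincount(gi))
H_g_given_x = np.mean([entropy(np.bincount(gi[x == c])) for c in range(n_cells)])
info_bits = H_g - H_g_given_x    # plugin estimate of I(x; g), in bits
print(round(info_bits, 2))
```

The estimate is bounded above by H(x) = log2(50) bits, the information needed to specify one of 50 cell rows exactly; the paper's point is that four gap genes together approach this bound.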
Directory of Open Access Journals (Sweden)
Donald W. Zimmerman
2004-01-01
Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
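Type I error inflation of this kind is easy to reproduce by Monte Carlo. The sketch below uses the textbook companion case of a pooled-variance t test where the smaller group has the larger variance (group sizes, SDs, and replication count are illustrative choices, not the paper's design): under a true null, the rejection rate lands far above the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, sd1, sd2 = 10, 40, 5.0, 1.0   # smaller group, larger variance
t_crit = 2.0106                        # two-sided critical value, df = 48, alpha = .05
reps, rejections = 4000, 0

for _ in range(reps):
    a = rng.normal(0, sd1, n1)         # null is true: both population means are 0
    b = rng.normal(0, sd2, n2)
    # Pooled-variance (Student) t statistic.
    sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
    rejections += abs(t) > t_crit

type1_rate = rejections / reps
print(round(type1_rate, 3))   # well above the nominal 0.05
```

The pooled variance is dominated by the larger, low-variance group, so the standard error is badly underestimated whenever the high-variance group is small; the paper's contribution is showing that skewness causes related failures even with equal n.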
International Nuclear Information System (INIS)
The Computerized Procedures Manual (COPMA-II) is an advanced procedure manual that can be used to select and execute procedures, to monitor the state of plant parameters, and to help operators track their progress through plant procedures. COPMA-II was evaluated in a study that compared the speed and accuracy of operators' performance when they performed with COPMA-II and traditional paper procedures. Sixteen licensed reactor operators worked in teams of two to operate the Scales Pressurized Water Reactor Facility at North Carolina State University. Each team performed one change of power with each type of procedure to simulate performance under normal operating conditions. Teams then performed one accident scenario with COPMA-II and one with paper procedures. Error rates, performance times, and subjective estimates of workload were collected, and were evaluated for each combination of procedure type and scenario type. For the change of power task, accuracy and response time were not different for COPMA-II and paper procedures. Operators did initiate responses to both accident scenarios fastest with paper procedures. However, procedure type did not moderate response completion time for either accident scenario. For accuracy, performance with paper procedures resulted in twice as many errors as did performance with COPMA-II. Subjective measures of mental workload for the accident scenarios were not affected by procedure type
Test results judgment method based on BIT faults
Institute of Scientific and Technical Information of China (English)
Wang Gang; Qiu Jing; Liu Guanjun; Lyu Kehong
2015-01-01
Built-in-test (BIT) is responsible for equipment fault detection, so the test data correctness directly influences diagnosis results. Equipment suffers all kinds of environment stresses, such as temperature, vibration, and electromagnetic stress. As an embedded testing facility, BIT also suffers from these stresses, and interferences/faults arise, influencing the test process and producing unreliable results. Therefore it is necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would contribute to BIT reliability, but the existing anti-jamming research mainly concerns safeguard design and signal processing. This paper focuses on test result monitoring and BIT equipment (BITE) failure judgment, and a series of improved approaches is proposed. Firstly the stress influences on components are illustrated and the effects on the diagnosis results are summarized. Secondly a composite BIT program is proposed with information integration, and a stress monitoring program is given. Thirdly, based on a detailed analysis of system faults and forms of BIT results, the test sequence control method is proposed. It assists BITE failure judgment and reduces error probability. Finally the validation cases prove that these approaches enhance credibility.
Analysis of bit-rock interaction during stick-slip vibrations using PDC cutting force model
Energy Technology Data Exchange (ETDEWEB)
Patil, P.A.; Teodoriu, C. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE
2013-08-01
Drillstring vibration is one of the limiting factors in maximizing drilling performance and also causes premature failure of drillstring components. The polycrystalline diamond compact (PDC) bit enhances overall drilling performance, giving the best rates of penetration with less cost per foot, but PDC bits are more susceptible to the stick-slip phenomenon, which results in high fluctuations of bit rotational speed. Based on a torsional drillstring model developed using Matlab/Simulink for analyzing the parametric influence on stick-slip vibrations due to drilling parameters and drillstring properties, the relations between weight on bit, torque on bit, bit speed, rate of penetration and friction coefficient have been analyzed. While drilling with PDC bits, the bit-rock interaction has been characterized by cutting forces and frictional forces. The torque on bit and the weight on bit each have a cutting component and a frictional component when resolved in the horizontal and vertical directions. The paper considers that the bit is undergoing stick-slip vibrations while analyzing the bit-rock interaction of the PDC bit. A Matlab/Simulink bit-rock interaction model has been developed which gives the average cutting torque, Tc, and friction torque, Tf, on the cutters, as well as the corresponding average weight transferred by the cutting face, Wc, and the wear flat face, Wf, of the cutters due to friction.
Reinforcement Learning in BitTorrent Systems
Izhak-Ratzin, Rafit; van der Schaar, Mihaela
2010-01-01
Recent research efforts have shown that the popular BitTorrent protocol does not provide fair resource reciprocation and may allow free-riding. In this paper, we propose a BitTorrent-like protocol that replaces the peer selection mechanisms in the regular BitTorrent protocol with a novel reinforcement learning (RL) based mechanism. Due to the inherent operation of P2P systems, which involves repeated interactions among peers over a long period of time, the peers can efficiently identify free-riders as well as desirable collaborators by learning the behavior of their associated peers. Thus, the mechanism can help peers improve their download rates and discourage free-riding, while improving fairness in the system. We model the peers' interactions in the BitTorrent-like network as a repeated interaction game, where we explicitly consider the strategic behavior of the peers. A peer, which applies the RL-based mechanism, uses a partial history of the observations on associated peers' statistical reciprocal behaviors to deter...
Bit Preservation: A Solved Problem?
Directory of Open Access Journals (Sweden)
David S. H. Rosenthal
2010-07-01
Full Text Available For years, discussions of digital preservation have routinely featured comments such as “bit preservation is a solved problem; the real issues are ...”. Indeed, current digital storage technologies are not just astoundingly cheap and capacious, they are astonishingly reliable. Unfortunately, these attributes drive a kind of “Parkinson’s Law” of storage, in which demands continually push beyond the capabilities of systems implementable at an affordable price. This paper is in four parts: Claims, reviewing a typical claim of storage system reliability and showing that it provides no useful information for bit preservation purposes; Theory, proposing “bit half-life” as an initial, if inadequate, measure of bit preservation performance, expressing bit preservation requirements in terms of it, and showing that the requirements being placed on bit preservation systems are so onerous that the experiments required to prove that a solution exists are not feasible; Practice, reviewing recent research into how well actual storage systems preserve bits, showing that they fail to meet the requirements by many orders of magnitude; and Policy, suggesting ways of dealing with this unfortunate situation.
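The "bit half-life" argument above is back-of-envelope arithmetic, and it is worth seeing why the requirements become so onerous. A hedged sketch (the petabyte size, century horizon, and 50% target are illustrative inputs in the spirit of the paper, not figures quoted from it):

```python
import math

# To keep n_bits intact for t_years with at most target_loss_prob chance of
# any bit decaying, how long must the per-bit half-life be?
n_bits = 8 * 10**15          # one petabyte
t_years = 100
target_loss_prob = 0.5       # probability that at least one bit decays

# Per-bit survival over time T with half-life H is 2**(-T/H); for H >> T the
# per-bit loss probability is ~ ln(2) * T / H, so the expected number of lost
# bits is n_bits * ln(2) * T / H.  Requiring that to stay below the target:
required_half_life = n_bits * math.log(2) * t_years / target_loss_prob
print(f"{required_half_life:.2e} years")   # ~1.1e18 years
```

A required half-life of order 10^18 years, far beyond the age of the universe, is why the paper argues that experiments demonstrating a sufficient solution are infeasible.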
Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams
Directory of Open Access Journals (Sweden)
Zeng Bing
2006-01-01
Full Text Available This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
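The packet-level parity idea above can be sketched in its simplest form: a single XOR parity packet over k data packets, i.e. a (k+1, k) erasure code that recovers any one lost packet. This is a minimal stand-in for the paper's scalable parity construction, not its exact scheme.

```python
from functools import reduce

def xor_packets(packets):
    # Byte-wise XOR of equal-length packets.
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets))

data_packets = [bytes([i] * 8) for i in range(1, 5)]   # four 8-byte data packets
parity = xor_packets(data_packets)                      # one parity packet

# Lose any single data packet: XOR of the survivors and the parity recovers it,
# with no decoding dependency between the surviving data packets.
lost = data_packets[2]
survivors = [p for p in data_packets if p is not lost]
recovered = xor_packets(survivors + [parity])
assert recovered == lost
```

Unequal protection then amounts to choosing how much parity to spend on each part of the bitstream, which is the rate-allocation problem the paper's algorithm solves.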
Quality Improvement Method using Double Error Correction in Burst Transmission Systems
Tsuchiya, Naosuke; Tomiyama, Shigenori; Tanaka, Kimio
Recently, there has been a tendency to reduce error correction and flow control in order to realize high-speed transmission in burst transmission systems such as ATM networks, IP (Internet Protocol) networks, frame relay and so on. As a result, degradations of network quality occur, namely information loss caused by buffer overflow and a worsening of the average bit error rate; especially for high-speed information such as high-definition television signals, it is necessary to remedy these degradations. This paper proposes a typical reconstruction method for lost information together with an improvement of the average bit error rate. In order to analyse the degradation phenomena, the Gilbert model is introduced for burst errors and the fluid-flow model for buffer overflow. The method is applied to an ATM network that mainly transmits video signals, and it is made clear that the proposed method is useful for high-speed transmission.
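The Gilbert model mentioned above is a two-state Markov chain: a Good state that is error-free and a Bad state that flips bits with some probability, producing the bursty error patterns typical of these channels. A minimal simulation sketch (the transition probabilities and flip probability below are illustrative values, not taken from the paper):

```python
import random

def gilbert_errors(n_bits, p_gb=0.001, p_bg=0.1, h=0.5, seed=1):
    """Gilbert burst-error channel: returns a 0/1 error indicator per bit.

    p_gb: P(Good -> Bad), p_bg: P(Bad -> Good), h: P(bit error | Bad state).
    """
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        state_bad = rng.random() < (1 - p_bg if state_bad else p_gb)
        errors.append(1 if state_bad and rng.random() < h else 0)
    return errors

errs = gilbert_errors(100_000)
ber = sum(errs) / len(errs)
# Stationary P(Bad) = p_gb / (p_gb + p_bg) ~ 0.0099, so the long-run BER is
# roughly h * P(Bad) ~ 0.005, but errors arrive clustered in bursts of mean
# length 1/p_bg = 10 bits rather than independently.
print(round(ber, 4))
```

The clustering is the whole point: an interleaver or burst-capable code sees very different error statistics than a memoryless channel with the same average BER.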
Testing of Error-Correcting Sparse Permutation Channel Codes
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
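The code-rate bookkeeping for such a sparse code follows directly from counting: with exactly K "on" bits in a block of N, there are C(N, K) distinct codewords, so each block can carry log2 C(N, K) information bits. A small sketch (the N and K values are illustrative, not the paper's parameters):

```python
from math import comb, log2

def sparse_code_rate(N, K):
    # C(N, K) weight-K words carry log2(C(N, K)) bits over N channel bits.
    return log2(comb(N, K)) / N

# Illustrative block: 64 channel bits, exactly 8 of them "on".
print(round(sparse_code_rate(64, 8), 3))
```

Sparseness trades rate for structure: small K keeps the "on"-bit density low (useful for holographic recording) while the achievable rate stays a sizeable fraction of a bit per channel bit.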
Unequal Error Protection for Compressed Video over Noisy Channels
Vosoughi, Arash
2015-01-01
The huge amount of data embodied in a video signal is by far the biggest burden on existing wireless communication systems. Adopting an efficient video transmission strategy is thus crucial in order to deliver video data at the lowest bit rate and the highest quality possible. Unequal error protection (UEP) is a powerful tool in this regard, whose ultimate goal is to wisely provide a stronger protection for the more important data, and a weaker protection for the less important data carried b...
Directory of Open Access Journals (Sweden)
Philip J Kellman
Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert
String bit models for superstring
International Nuclear Information System (INIS)
The authors extend the model of string as a polymer of string bits to the case of superstring. They mainly concentrate on type II-B superstring, with some discussion of the obstacles presented by non-II-B superstring, together with possible strategies for surmounting them. As with previous work on bosonic string, they work within the light-cone gauge. The bit model possesses a good deal less symmetry than the continuous string theory. For one thing, the bit model is formulated as a Galilei invariant theory in (D - 2) + 1 dimensional space-time. This means that Poincare invariance is reduced to the Galilei subgroup in D - 2 space dimensions. Naturally the supersymmetry present in the bit model is likewise dramatically reduced. Continuous string can arise in the bit models with the formation of infinitely long polymers of string bits. Under the right circumstances (at the critical dimension) these polymers can behave as string moving in D dimensional space-time enjoying the full N = 2 Poincare supersymmetric dynamics of type II-B superstring.
String bit models for superstring
Energy Technology Data Exchange (ETDEWEB)
Bergman, O.; Thorn, C.B.
1995-12-31
The authors extend the model of string as a polymer of string bits to the case of superstring. They mainly concentrate on type II-B superstring, with some discussion of the obstacles presented by non-II-B superstring, together with possible strategies for surmounting them. As with previous work on bosonic string, they work within the light-cone gauge. The bit model possesses a good deal less symmetry than the continuous string theory. For one thing, the bit model is formulated as a Galilei invariant theory in (D - 2) + 1 dimensional space-time. This means that Poincare invariance is reduced to the Galilei subgroup in D - 2 space dimensions. Naturally the supersymmetry present in the bit model is likewise dramatically reduced. Continuous string can arise in the bit models with the formation of infinitely long polymers of string bits. Under the right circumstances (at the critical dimension) these polymers can behave as string moving in D dimensional space-time enjoying the full N = 2 Poincare supersymmetric dynamics of type II-B superstring.
AN ERROR-RESILIENT H.263+ CODING SCHEME FOR VIDEO TRANSMISSION OVER WIRELESS NETWORKS
Institute of Scientific and Technical Information of China (English)
Li Jian; Bie Hongxia
2006-01-01
Video transmission over wireless networks has received much attention recently because of the restricted bandwidth and high bit-error rate of such networks. Based on H.263+, an error-resilient scheme that reverses part of the stream sequence of each Group Of Blocks (GOB) is presented to improve video robustness without additional bandwidth burden. Error patterns are employed to simulate Wideband Code Division Multiple Access (WCDMA) channels in order to evaluate error resilience performance. Simulation results show that both subjective and objective qualities of the reconstructed images are improved remarkably. The mean Peak Signal to Noise Ratio (PSNR) is increased by 0.5 dB, and the highest increment is 2 dB.
Hash Based Least Significant Bit Technique For Video Steganography
Directory of Open Access Journals (Sweden)
Prof. Dr. P. R. Deshmukh ,
2014-01-01
Full Text Available The hash-based least significant bit technique for video steganography deals with hiding a secret message or information within a video. Steganography is covered writing: it includes processes that conceal information within other data and also conceal the fact that a secret message is being sent. Steganography is the art of secret communication, or the science of invisible communication. In this paper a hash-based least significant bit (LSB) technique for video steganography is proposed, whose main goal is to embed secret information in a particular video file and then extract it using a stego key or password. LSB insertion is used for steganography so as to embed data in the cover video with a change only in the lower bit; this LSB insertion is not visible. Data hiding is the process of embedding information in a video without changing its perceptual quality. The proposed method involves two measures, the Peak Signal to Noise Ratio (PSNR) and the Mean Square Error (MSE), computed between the original video files and the steganographic video files over all video frames, where distortion is measured using PSNR. A hash function is used to select the particular positions for insertion of the bits of the secret message into the LSB bits.
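The core LSB mechanism and the PSNR/MSE quality check described above can be sketched in a few lines. This is a minimal stand-in, not the paper's method: it embeds sequentially into a flat byte array standing in for frame pixels, whereas the paper additionally hashes positions with a stego key.

```python
import numpy as np

def embed_lsb(cover, bits):
    # Clear each carrier byte's least significant bit, then write a message bit.
    stego = cover.copy()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego

def extract_lsb(stego, n):
    return stego[:n] & 1

def psnr(cover, stego):
    # PSNR in dB for 8-bit samples; higher means less visible distortion.
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255**2 / mse)

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, 10_000, dtype=np.uint8)   # stand-in for frame pixels
message = rng.integers(0, 2, 800, dtype=np.uint8)

stego = embed_lsb(cover, message)
assert np.array_equal(extract_lsb(stego, 800), message)
print(round(psnr(cover, stego), 1))   # high PSNR: LSB changes are tiny
```

Since each embedded bit changes a sample by at most 1, the MSE stays a small fraction of a gray level and the PSNR stays high, which is why LSB embedding is perceptually invisible.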
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PMID:24773354
Silicon chip based wavelength conversion of ultra-high repetition rate data signals
DEFF Research Database (Denmark)
Hu, Hao; Ji, Hua; Galili, Michael; Pu, Minhao; Mulvad, Hans Christian Hansen; Oxenløwe, Leif Katsuo; Yvind, Kresten; Hvam, Jørn Märcher; Jeppesen, Palle
2011-01-01
We report on all-optical wavelength conversion of 160, 320 and 640 Gbit/s line-rate data signals using four-wave mixing in a 3.6 mm long silicon waveguide. Bit error rate measurements validate the performance within FEC limits.
Directory of Open Access Journals (Sweden)
Ozlu Nagihan
2009-03-01
Full Text Available Abstract Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracy of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparison with validated test methods. Methods 112 selected clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan correctly identified all A. baumannii strains, while Vitek 2 failed to identify one strain, and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%), slightly higher (by 0.3%) than the acceptable limit, and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest clinical laboratories using the MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility
Mebratu, Derssie; Kegege, Obadiah; Shaw, Harry
2016-01-01
Digital signals are transmitted via a carrier wave, demodulated at a receiver, and mapped to ideal constellation positions. However, noise distortion, carrier leakage and phase noise divert the actual constellation position of a signal to a new position. To assess sources of noise and carrier leakage, the Bit Error Rate (BER) measurement technique is used to evaluate the number of erroneous bits per transmitted bit. In addition, we present Error Vector Magnitude (EVM), which measures the distance between the ideal and actual positions, assesses sources of signal distortion, and evaluates a wireless communication system's performance with a single metric. Applying the EVM technique, we also measure the performance of a User Services Subsystem Component Replacement (USSCR) modem. Furthermore, we propose the EVM measurement technique in the Tracking and Data Relay Satellite (TDRS) system to measure and evaluate channel impairment between a ground transmitter and the terminal receiver at White Sands Complex.
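As a rough illustration of the metric (not the authors' measurement setup), RMS EVM can be computed from received symbols and their nearest ideal constellation points. The QPSK constellation and the noise level below are assumptions made for the sketch.

```python
import math, random

QPSK = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
QPSK = [p / abs(p) for p in QPSK]          # unit-energy reference points

def evm_rms(received):
    """RMS EVM: error-vector power relative to reference power, with each
    received symbol judged against its nearest ideal constellation point."""
    err = ref = 0.0
    for r in received:
        ideal = min(QPSK, key=lambda p: abs(r - p))
        err += abs(r - ideal) ** 2
        ref += abs(ideal) ** 2
    return math.sqrt(err / ref)

rng = random.Random(7)
def noisy_symbol(sigma=0.05):
    """A transmitted QPSK symbol plus complex Gaussian noise."""
    s = rng.choice(QPSK)
    return s + complex(rng.gauss(0, sigma), rng.gauss(0, sigma))

evm = evm_rms([noisy_symbol() for _ in range(5000)])
print(f"EVM = {100 * evm:.1f}%")   # ~ sigma * sqrt(2) for small noise
```

For additive Gaussian noise alone, EVM and SNR carry the same information; the metric's value, as the abstract notes, is that it also captures impairments such as carrier leakage and phase noise in one number.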
The effect of FEC on packet error performance in a VSAT network
Taylor, D. P.; Grossman, M.
Very small aperture earth terminal (VSAT) satellite systems are multiterminal satellite communications systems that usually transmit data employing a packet transmission format. Because of the small antenna size and low transmission powers, forward error correction (FEC) is almost universally employed - often using convolutional codes. This paper derives an approximate relationship between the bit-error-rate (BER) and the packet-error-rate (PER) in a convolutionally encoded packet transmission system. Comparisons are made to measured results for one particular system and the approximate relationship is seen to provide a good estimate of actual performance.
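For independent bit errors the BER-to-PER relationship is simple, and the sketch below uses the memoryless form PER = 1 − (1 − BER)^n for an n-bit packet. Note this is the uncoded baseline: the paper's convolutionally encoded case requires the post-decoding BER, where errors arrive in bursts, which is what makes its approximation nontrivial.

```python
def packet_error_rate(ber: float, packet_bits: int) -> float:
    """PER under independent bit errors: a packet fails if any bit fails."""
    return 1.0 - (1.0 - ber) ** packet_bits

# For small BER, PER is approximately packet_bits * BER:
per = packet_error_rate(1e-6, 1024 * 8)
print(per)  # close to 8192e-6
```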
A bit serial sequential circuit
Hu, S.; Whitaker, S.
1990-01-01
Normally a sequential circuit with n state variables consists of n unique hardware realizations, one for each state variable. All variables are processed in parallel. This paper introduces a new sequential circuit architecture that allows the state variables to be realized in a serial manner using only one next state logic circuit. The action of processing the state variables in a serial manner has never been addressed before. This paper presents a general design procedure for circuit construction and initialization. Utilizing pass transistors to form the combinational next state forming logic in synchronous sequential machines, a bit serial state machine can be realized with a single NMOS pass transistor network connected to shift registers. The bit serial state machine occupies less area than other realizations which perform parallel operations. Moreover, the logical circuit of the bit serial state machine can be modified by simply changing the circuit input matrix to develop an adaptive state machine.
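The idea can be mimicked in software: one next-state logic block is reused for every state bit, with results shifting into a register as they would in hardware. The 3-bit counter below is an invented toy example, not a circuit from the paper.

```python
def serial_next_state(state_bits, inp, next_bit_logic):
    """Compute the next state one bit at a time with a single shared logic
    block, accumulating results in a shift register; the present state is
    held fixed for the whole serial pass, as in a synchronous machine."""
    shift_reg = []
    for j in range(len(state_bits)):
        shift_reg.append(next_bit_logic(j, state_bits, inp))
    return shift_reg

# Toy example: a 3-bit binary up-counter with an enable input.
def counter_bit(j, s, enable):
    if not enable:
        return s[j]
    carry = all(s[:j])          # carry into bit j (s[0] is the LSB)
    return s[j] ^ carry

state = [0, 0, 0]
for _ in range(5):
    state = serial_next_state(state, 1, counter_bit)
print(state)  # five increments of a 3-bit counter -> [1, 0, 1] (LSB first)
```

The hardware payoff claimed in the paper is that only one copy of the next-state logic is needed, traded against the extra sub-cycles of the serial pass.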
Chuanshi Brand Tri-cone Roller Bit
Institute of Scientific and Technical Information of China (English)
Chen Xilong; Shen Zhenzhong; Yuan Xiaoyi
1997-01-01
Compared with other types of bits, the tri-cone roller bit has the advantages of excellent overall performance, low price and a wide usage range, and it is free of formation limits. The tri-cone roller bit accounts for 90% of the total bits in use. The Chengdu Mechanical Works, a major manufacturer of petroleum machinery products and one of the four major tri-cone roller bit factories in China, has produced 120 types of bits in seven series and 19 sizes since 1967. The bits manufactured by the factory are not only sold to the domestic oilfields, but also exported to Japan, Thailand, Indonesia, the Philippines and the Middle East.
A 1.5 bit/s Pipelined Analog-to-Digital Converter Design with Independency of Capacitor Mismatch
Institute of Scientific and Technical Information of China (English)
LI Dan; RONG Men-tian; MAO Jun-fa
2007-01-01
A new technique named the charge temporary storage technique (CTST) is presented to improve the linearity of a 1.5 bit/s pipelined analog-to-digital converter (ADC). The residual voltage is obtained from the sampling capacitor, while the other capacitor serves only as temporary storage of charge. The nonlinearity produced by the mismatch of these capacitors is thus eliminated without adding extra capacitor error-averaging amplifiers. The simulation results confirm the high linearity and low dissipation of pipelined ADCs implemented in CTST, so CTST is a new method to implement high-resolution, small-size ADCs.
Burgess, Ralph; Yang, Ziheng
2008-09-01
Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species. PMID:18603620
An analysis of the impact of data errors on backorder rates in the F404 engine system
Burson, Patrick A. R.
2003-01-01
Approved for public release; distribution is unlimited. In the management of the U.S. Naval inventory, data quality is of critical importance. Errors in major inventory databases contribute to increased operational costs, reduced revenue, and loss of confidence in the reliability of the supply system. Maintaining error-free databases is not a realistic objective. Data-quality efforts must be prioritized to ensure that limited resources are allocated to achieve the maximum benefit. Thi...
Yang, S. -R. Eric; Schliemann, John; MacDonald, A. H.
2002-01-01
Bilayer quantum Hall systems can form collective states in which electrons exhibit spontaneous interlayer phase coherence. We discuss the possibility of using bilayer quantum dot many-electron states with this property to create two-level systems that have potential advantages as quantum bits.
Head and bit patterned media optimization at areal densities of 2.5 Tbit/in² and beyond
International Nuclear Information System (INIS)
Global optimization of a writing head is performed using micromagnetics and surrogate optimization. The shape of the pole tip is optimized for bit patterned, exchange spring recording media. The media characteristics define the effective write field and the threshold values for the head field that acts at islands in the adjacent track. Once the required head field characteristics are defined, the pole tip geometry is optimized in order to achieve a high gradient of the effective write field while keeping the write field at the adjacent track below a given value. We computed the write error rate and the adjacent track erasure for different maximum anisotropy in the multilayer, graded media. The results show a linear trade-off between the error rate and the number of passes before erasure. For optimal head-media combinations we found a bit error rate of 10⁻⁶ with 10⁸ pass lines before erasure at 2.5 Tbit/in². - Research Highlights: → Global optimization of a writing head is performed using micromagnetics and surrogate optimization. → A method is provided to optimize the pole tip shape while maintaining the head field that acts in the adjacent tracks. → Patterned media structures providing an areal density of 2.5 Tbit/in² are discussed as a case study. → Media reliability is studied, taking into account the magnetostatic field interactions from neighbouring islands and adjacent track erasure under the influence of the head field.
GOP-Level Bit Allocation Using Reverse Dynamic Programming
Institute of Scientific and Technical Information of China (English)
LU Yang; XIE Jun; LI Hang; CUI Huijuan
2009-01-01
An efficient adaptive group of pictures (GOP)-level bit allocation algorithm was developed based on reverse dynamic programming (RDP). The algorithm gives the initial delay and sequence distortion curve with just one iteration. A simple GOP-level rate and distortion model was then developed for two-level constant-quality rate control. The initial delay values and the corresponding optimal GOP-level bit allocation scheme can be obtained for video streaming, along with the proper initial delay for various distortion tolerance levels. Simulations show that the algorithm provides an efficient solution for delay- and buffer-constrained GOP-level rate control for video streaming.
Directory of Open Access Journals (Sweden)
Sharmila Vaz
Full Text Available The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the use of multiple informants (e.g. teacher and parent reports, not just student reports), as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
7-bit meta-transliterations for 8-bit romanizations
Lagally, Klaus
1997-01-01
We propose a general strategy for deriving 7-bit encodings for texts in languages which use an alphabetic non-Roman script, like Arabic, Persian, Sanskrit and many other Indic scripts, and for which there is some transliteration convention using Roman letters with additional diacritical marks. These schemes, which we will call 'meta-transliterations', are based on using single ASCII letters for representing Roman letters, and digraphs consisting of a suitable punctuation character and an ASCI...
Parity Bit Replenishment for JPEG 2000-Based Video Streaming
Directory of Open Access Journals (Sweden)
François-Olivier Devaux
2009-01-01
Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful to adapt on the fly pre-encoded content to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance to the JPEG 2000 wavelet representation, a particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also spatial correlation among wavelet subbands coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that compared to pure INTRA-based conditional replenishment solutions, the addition of the parity bits option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732
A holistic approach to bit preservation
DEFF Research Database (Denmark)
Zierau, Eld
2012-01-01
Purpose: The purpose of this paper is to point out the importance of taking a holistic approach to bit preservation when setting out to find an optimal bit preservation solution for specific digital materials. In the last decade there has been an increasing awareness that bit preservation, which is...... costs are taken into account. Design/methodology/approach: The paper describes the various findings from previous research which have led to the holistic approach to bit preservation. This paper also includes an introduction to digital preservation with a focus on the role of bit preservation, which...... to do bit preservation of its digital material....
Directory of Open Access Journals (Sweden)
Tao Sheng
2009-11-01
Full Text Available The multipath fading and shadowing of wireless networks usually lead to the loss or corruption of video packets, which results in significant video quality degradation. Existing approaches with forward error correction (FEC) or error concealment are unable to provide the desired robustness in video transmission. In this work, we develop a novel motion-based Wyner-Ziv coding (MWZC) scheme by leveraging distributed source coding (DSC) ideas for error robustness. The MWZC scheme is based on the fact that motion regions of a given video frame are particularly important to both objective and perceptual video quality and hence should be given preferential Wyner-Ziv coding based embedded protection. To achieve high coding efficiency, we determine the underlying motion regions based on a rate-distortion model. Within the framework of the H.264/AVC specification, motion region determination can be efficiently implemented using Flexible Macroblock Ordering (FMO) and Data Partitioning (DP). The bit stream consists of two parts: the systematic portion generated from the conventional H.264/AVC bit stream, and the supplementary bit stream generated by the proposed feedback-free rate allocation algorithm for Wyner-Ziv coding of motion regions. Experimental results demonstrate that the proposed scheme significantly outperforms both decoder-based error concealment (DBEC) and conventional FEC with DBEC approaches.
Li, Ping
2009-01-01
This paper establishes the theoretical framework of b-bit minwise hashing. The original minwise hashing method has become a standard technique for estimating set similarity (e.g., resemblance) with applications in information retrieval, data management, social networks and computational advertising. By only storing the lowest b bits of each (minwise) hashed value (e.g., b=1 or 2), one can gain substantial advantages in terms of computational efficiency and storage space. We prove the basic theoretical results and provide an unbiased estimator of the resemblance for any b. We demonstrate that, even in the least favorable scenario, using b=1 may reduce the storage space at least by a factor of 21.3 (or 10.7) compared to using b=64 (or b=32), if one is interested in resemblance > 0.5.
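A sketch of the idea, under two simplifying assumptions that the paper treats rigorously: universal hashing stands in for true random permutations, and the bias correction uses the large-hash-space limit where a chance collision of b-bit values has probability 2^-b. Only the lowest b bits of each minwise hash are stored, and resemblance is estimated from the collision rate of those truncated values.

```python
import random

PRIME = (1 << 61) - 1            # Mersenne prime for universal hashing

def min_hashes(items, hash_params):
    """Minwise hash of a set under each (a, b) hash acting as a permutation."""
    return [min((a * hash(x) + b) % PRIME for x in items) for a, b in hash_params]

def b_bit_resemblance(set1, set2, k=1024, b=1, seed=3):
    rng = random.Random(seed)
    params = [(rng.randrange(1, PRIME), rng.randrange(PRIME)) for _ in range(k)]
    mask = (1 << b) - 1
    m1 = [h & mask for h in min_hashes(set1, params)]   # store only b bits
    m2 = [h & mask for h in min_hashes(set2, params)]
    p_hat = sum(x == y for x, y in zip(m1, m2)) / k
    c = 1 / (1 << b)              # chance collision rate of b-bit values
    return (p_hat - c) / (1 - c)  # unbiased in the large-hash-space limit

A = set(range(0, 150))
B = set(range(50, 200))           # true resemblance = 100/200 = 0.5
r = b_bit_resemblance(A, B)
print(round(r, 2))                # close to the true resemblance of 0.5
```

The key relation is Pr(lowest b bits agree) ≈ R + (1 − R)·2^-b, so inverting it recovers R while each hash costs b bits of storage instead of 32 or 64.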
Multi-bit quantum random number generation by measuring positions of arrival photons
Energy Technology Data Exchange (ETDEWEB)
Yan, Qiurong, E-mail: yanqiurong@ncu.edu.cn [Department of Electronics Information Engineering, Nanchang University, Nanchang 330031 (China); State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China)]; Zhao, Baosheng [State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China)]; Liao, Qinghong; Zhou, Nanrun [Department of Electronics Information Engineering, Nanchang University, Nanchang 330031 (China)]
2014-10-15
We report upon the realization of a novel multi-bit optical quantum random number generator by continuously measuring the arrival positions of photon emitted from a LED using MCP-based WSA photon counting imaging detector. A spatial encoding method is proposed to extract multi-bits random number from the position coordinates of each detected photon. The randomness of bits sequence relies on the intrinsic randomness of the quantum physical processes of photonic emission and subsequent photoelectric conversion. A prototype has been built and the random bit generation rate could reach 8 Mbit/s, with random bit generation efficiency of 16 bits per detected photon. FPGA implementation of Huffman coding is proposed to reduce the bias of raw extracted random bits. The random numbers passed all tests for physical random number generator.
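The spatial encoding step can be sketched as follows: each detected photon's (x, y) arrival position is mapped to a fixed number of raw bits. The 256×256 grid, 16 bits per photon, and the simulated detector below are illustrative assumptions, not the instrument's actual parameters.

```python
import random

GRID = 256                       # assumed detector resolution per axis

def position_to_bits(x: int, y: int, bits_per_axis: int = 8):
    """Spatial encoding: concatenate the binary coordinates of the arrival
    position into one multi-bit random word (16 bits per photon here)."""
    assert 0 <= x < GRID and 0 <= y < GRID
    word = (x << bits_per_axis) | y
    return [(word >> i) & 1 for i in range(2 * bits_per_axis)]

# Simulated photon arrivals stand in for the MCP/WSA detector output.
rng = random.Random(42)
raw_bits = []
for _ in range(1000):
    x, y = rng.randrange(GRID), rng.randrange(GRID)
    raw_bits.extend(position_to_bits(x, y))

print(len(raw_bits), "bits, mean", sum(raw_bits) / len(raw_bits))  # 16000 bits, mean ~0.5
```

A real detector's position distribution is not perfectly uniform, which is why the paper applies Huffman coding in the FPGA to debias the raw extracted bits.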
Beamforming under Quantization Errors in Wireless Binaural Hearing Aids
Directory of Open Access Journals (Sweden)
Kees Janse
2008-09-01
Full Text Available Improving the intelligibility of speech in different environments is one of the main objectives of hearing aid signal processing algorithms. Hearing aids typically employ beamforming techniques using multiple microphones for this task. In this paper, we discuss a binaural beamforming scheme that uses signals from the hearing aids worn on both the left and right ears. Specifically, we analyze the effect of a low bit rate wireless communication link between the left and right hearing aids on the performance of the beamformer. The scheme consists of a generalized sidelobe canceller (GSC) that has two inputs, observations from one ear and quantized observations from the other ear, and whose output is an estimate of the desired signal. We analyze the performance of this scheme in the presence of a localized interferer as a function of the communication bit rate, using the resultant mean-squared error as the signal distortion measure.
Beamforming under Quantization Errors in Wireless Binaural Hearing Aids
Directory of Open Access Journals (Sweden)
Srinivasan Sriram
2008-01-01
Full Text Available Improving the intelligibility of speech in different environments is one of the main objectives of hearing aid signal processing algorithms. Hearing aids typically employ beamforming techniques using multiple microphones for this task. In this paper, we discuss a binaural beamforming scheme that uses signals from the hearing aids worn on both the left and right ears. Specifically, we analyze the effect of a low bit rate wireless communication link between the left and right hearing aids on the performance of the beamformer. The scheme consists of a generalized sidelobe canceller (GSC) that has two inputs, observations from one ear and quantized observations from the other ear, and whose output is an estimate of the desired signal. We analyze the performance of this scheme in the presence of a localized interferer as a function of the communication bit rate, using the resultant mean-squared error as the signal distortion measure.
International Nuclear Information System (INIS)
A new approach to realizing the Pound–Drever–Hall (PDH) error signal offset correction is presented. The proposed setup and correction procedure allow one to control not only the effect of amplitude modulation of the error signal, but also other sources of offsets that are present in the PDH feedback loop. This technique significantly improves laser frequency locking in high-repetition-rate cavity ring-down spectroscopy (CRDS) by allowing one to recover a tight PDH lock within 1 ms after switching off the probe laser beam. We apply the PDH error signal offset correction to CRDS measurements of the weak ¹⁶O₂ B-band R7 Q8 line. The resulting spectra taken at a pressure of 1.2 kPa had a signal-to-noise ratio of ∼8000:1
Performance Analysis of MC-CDMA in the Presence of Carriers Phase Errors
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
This paper presents the effect of carrier phase errors on MC-CDMA performance in downlink mobile communications. The Signal-to-Noise Ratio (SNR) and Bit-Error-Rate (BER) are analyzed taking into account the effect of carrier phase errors. It is shown that the MC-CDMA system is very sensitive to a carrier frequency offset: the system performance rapidly degrades and strongly depends on the number of carriers. For a maximal load, the degradation caused by carrier phase jitter is independent of the number of carriers.
Tao Lyu; Suying Yao; Kaiming Nie; Jiangtao Xu
2014-01-01
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into coarse phase and fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on the s...
Improved Energy Efficiency for Optical Transport Networks by Elastic Forward Error Correction
DEFF Research Database (Denmark)
Rasmussen, Anders; Yankov, Metodi Plamenov; Berger, Michael Stübert; Larsen, Knud J.; Ruepp, Sarah Renée
2014-01-01
In this paper we propose a scheme for reducing the energy consumption of optical links by means of adaptive forward error correction (FEC). The scheme works by performing on the fly adjustments to the code rate of the FEC, adding extra parity bits to the data stream whenever extra capacity is...... the balance between effective data rate and FEC coding gain without any disruption to the live traffic. As a consequence, these automatic adjustments can be performed very often based on the current traffic demand and bit error rate performance of the links through the network. The FEC scheme itself...... is designed to work as a transparent add-on to transceivers running the optical transport network (OTN) protocol, adding an extra layer of elastic soft-decision FEC to the built-in hard-decision FEC implemented in OTN, while retaining interoperability with existing OTN equipment. In order to...
Flexible Bit Preservation on a National Basis
DEFF Research Database (Denmark)
Jurik, Bolette; Nielsen, Anders Bo; Zierau, Eld
2012-01-01
In this paper we present the results from The Danish National Bit Repository project. The project aim was establishment of a system that can offer flexible and sustainable bit preservation solutions to Danish cultural heritage institutions. Here the bit preservation solutions must include support...... of bit safety as well as other requirements like e.g. confidentiality and availability. The Danish National Bit Repository is motivated by the need to investigate and handle bit preservation for digital cultural heritage. Digital preservation relies on the integrity of the bits which digital material...... consists of, and it is with this focus that the project was initiated. This paper summarizes the requirements for a general system to offer bit preservation to cultural heritage institutions. On this basis the paper describes the resulting flexible system which can support such requirements. The paper will...
Zhou, Qing F.; Mow, Wai Ho; Zhang, Shengli; Toumpakaris, Dimitris
2012-01-01
Motivated by applications such as battery-operated wireless sensor networks (WSN), we propose an easy-to-implement energy-efficient two-way relaying scheme. In particular, we address the challenge of improving the standard two-way selective decode-and-forward protocol (TW-SDF) in terms of block-error-rate (BLER) with minor additional complexity and energy consumption. By following the principle of soft relaying, our solution is the two-way one-bit soft forwarding (TW-1bSF) protocol in which t...
Perceptual importance analysis for H.264/AVC bit allocation
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
The existing H.264/AVC rate control schemes rarely include the perceptual considerations. As a result, the improvements in visual quality are hardly comparable to those in peak signal-to-noise ratio (PSNR). In this paper, we propose a perceptual importance analysis scheme to accurately abstract the spatial and temporal perceptual characteristics of video contents. Then we perform bit allocation at macroblock (MB) level by adopting a perceptual mode decision scheme, which adaptively updates the Lagrangian multiplier for mode decision according to the perceptual importance of each MB. Simulation results show that the proposed scheme can efficiently reduce bit rates without visual quality degradation.
Development of a jet-assisted polycrystalline diamond drill bit
Energy Technology Data Exchange (ETDEWEB)
Pixton, D.S.; Hall, D.R.; Summers, D.A.; Gertsch, R.E.
1997-12-31
A preliminary investigation has been conducted to evaluate the technical feasibility and potential economic benefits of a new type of drill bit. This bit transmits both rotary and percussive drilling forces to the rock face, and augments this cutting action with high-pressure mud jets. Both the percussive drilling forces and the mud jets are generated down-hole by a mud-actuated hammer. Initial laboratory studies show that rates of penetration on the order of a factor of two over unaugmented rotary and/or percussive drilling are possible with jet-assistance.
A perceptual optimization of H.264/AVC bit allocation at the frame and macroblock levels
Hrarti, M.; Saadane, H.; Larabi, M.-C.; Tamtaoui, A.; Aboutajdine, D.
2012-01-01
In the H.264/AVC rate control algorithm, the bit allocation process and the QP determination are not optimal. At the frame layer, there is an implicit assumption that the video sequence is more or less stationary and that neighbouring frames consequently have similar characteristics. The target bit rate for each frame is therefore estimated by a straightforward process that allocates an equal bit budget to each frame regardless of its temporal and spatial complexity. This uniform allocation is clearly not suitable for all types of video sequences. The target bit determination at the macroblock layer uses the MAD (Mean Absolute Difference) ratio as a complexity measure in order to promote interesting macroblocks, but this measure remains inefficient in handling macroblock characteristics. In previous work we proposed Rate-Quantization (R-Q) models for Intra and Inter frames to deal with the QP determination shortcoming. In this paper, we address the limitations of the bit allocation process at the frame and macroblock layers. At the frame level, we enhance bit allocation by exploiting frame complexity measures: the target bit determination for P-frames is adjusted by combining two temporal measures. The first is a motion ratio determined from the actual bits used to encode previous frames; the second exploits both the difference between two consecutive frames and the histogram of this difference. At the macroblock level, visual saliency is used in the bit allocation process. The basic idea is to promote salient macroblocks: a saliency map, based on a bottom-up approach, is generated and a macroblock classification is performed. This classification is then used to accurately adjust UBitsH264, which represents the usual bit budget estimated by the H.264/AVC bit allocation process. For salient macroblocks the adjustment leads to a bit budget that is always larger than UBitsH264. The extra bits added to
Dispersion Tolerance of 40 Gbaud Multilevel Modulation Formats with up to 3 bits per Symbol
DEFF Research Database (Denmark)
Jensen, Jesper Bevensee; Tokle, Torger; Geng, Yan; Jeppesen, Palle; Serbay, M.; Rosenkranz, W.
2006-01-01
We present numerical and experimental investigations of dispersion tolerance for multilevel phase- and amplitude modulation with up to 3 bits per symbol at a symbol rate of 40 Gbaud.
A Generalized Write Channel Model for Bit-Patterned Media Recording
Naseri, Sima; Yazdani, Somaie; Razeghi, Behrooz; Hodtani, Ghosheh Abed
2014-01-01
In this paper, we propose a generalized write channel model for bit-patterned media recording by considering all sources of errors that cause extra disturbances during the write process, in addition to data-dependent write synchronization errors. We investigate information-theoretic bounds for this new model under various input distributions and also compare it numerically to the previously proposed model.
Institute of Scientific and Technical Information of China (English)
王立夫; 孙凤娟
2012-01-01
A test platform combining HF communication with ionospheric oblique sounding is introduced. The platform performs ionospheric channel sounding and communication simultaneously on the same hardware, which avoids equipment mismatch and the lack of real-time channel parameters. Based on experimental data recorded by this platform, the communication bit error ratio (BER) and the channel characteristic parameters, including signal-to-noise ratio (SNR), fading depth, fading rate, multipath spread, per-mode signal amplitude, group distance, major-mode phase, Doppler shift and Doppler spread, are extracted. The impact of the channel characteristic parameters on the communication BER is statistically analyzed, and some meaningful conclusions are drawn at the end of this paper.
Design and realization of a high-speed 12-bit pipelined analog-digital converter IP block
Toprak, Zeynep
2001-01-01
This thesis presents the design, verification, system integration and the physical realization of a monolithic high-speed analog-digital converter (ADC) with 12-bit accuracy. The architecture of the ADC has been realized as a pipelined structure consisting of four pipeline stages, each of which is capable of processing the incoming analog signal with 4-bit accuracy. A bit-overlapping technique has been employed for digital error correction between the pipeline stages so that the influence of ...
Pightling, Arthur W.; Nicholas Petronella; Franco Pagotto
2014-01-01
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance...
Josephson 32-bit shift register
International Nuclear Information System (INIS)
This paper reports on a 32-bit shift register designed with edge-triggered gates, tested with ±25% bias margin and ±81% input margin for the full array. Simulations have shown ±55% bias margin at 3.3 GHz and operation up to a maximum frequency of 30 GHz with a junction current density of 2000 A/cm², although the shift register has only been tested up to 500 MHz, limited by instrumentation. The edge-triggered gate, consisting of a pair of conventional Josephson logic gates in series, has the advantages of wide margins, short reset time, and insensitivity to global parameter variations.
DEFF Research Database (Denmark)
Sabra, Jakob Borrits
We mourn our dead, publicly and privately, online and offline. Cemeteries, web memorials and social network sites make up parts of today's intricately woven and interrelated network of death, grief and memorialization practices [1]–[5]. Whether cut in stone or made of bits, graves, cemeteries… such as space, artifacts, situations or sensuous representations. In this paper we build upon present research on grief-work and propose a methodological contribution to the study of progressions of digital mourning and remembrance practices [6]–[8]. We present a generalized structure of online…
Bustamante, Dulce M.; Lord, Cynthia C.
2010-01-01
Infection rate is an estimate of the prevalence of arbovirus infection in a mosquito population. It is assumed that when infection rate increases, the risk of arbovirus transmission to humans and animals also increases. We examined some of the factors that can invalidate this assumption. First, we used a model to illustrate how the proportion of mosquitoes capable of virus transmission, or infectious, is not a constant fraction of the number of infected mosquitoes. Thus, infection rate is not...
Bit threads and holographic entanglement
Freedman, Michael
2016-01-01
The Ryu-Takayanagi (RT) formula relates the entanglement entropy of a region in a holographic theory to the area of a corresponding bulk minimal surface. Using the max flow-min cut principle, a theorem from network theory, we rewrite the RT formula in a way that does not make reference to the minimal surface. Instead, we invoke the notion of a "flow", defined as a divergenceless norm-bounded vector field, or equivalently a set of Planck-thickness "bit threads". The entanglement entropy of a boundary region is given by the maximum flux out of it of any flow, or equivalently the maximum number of bit threads that can emanate from it. The threads thus represent entanglement between points on the boundary, and naturally implement the holographic principle. As we explain, this new picture clarifies several conceptual puzzles surrounding the RT formula. We give flow-based proofs of strong subadditivity and related properties; unlike the ones based on minimal surfaces, these proofs correspond in a transparent manner...
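The max flow-min cut duality invoked above can be demonstrated numerically on a small network; a generic Edmonds-Karp sketch (a standard graph algorithm, with no holographic computation implied):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on an adjacency-matrix graph.  By the
    max flow-min cut theorem, the value returned also equals the
    minimum total capacity of any edge cut separating s from t."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:       # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                # no augmenting path left
            return total
        bottleneck, v = float('inf'), t
        while v != s:                      # residual bottleneck on the path
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                      # push flow along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Diamond network: the minimum cut {0->2, 1->3} has capacity 2 + 2 = 4.
cap = [[0, 3, 2, 0],
       [0, 0, 0, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
assert max_flow(cap, 0, 3) == 4
```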
Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments
Soury, Hamza
2013-07-01
This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.
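The Gaussian-noise, no-fading special case of the general result reduces to the classical M-PAM expression; a minimal sketch (the paper's Fox H-function forms are not reproduced here):

```python
import math

def q_func(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def sep_mpam_awgn(M, es_n0_db):
    """Classical symbol error probability of M-PAM over AWGN:
    2*(1 - 1/M) * Q(sqrt(6*(Es/N0) / (M^2 - 1))).  This is only the
    Gaussian-noise, no-fading special case of the paper's results."""
    g = 10.0 ** (es_n0_db / 10.0)
    return 2.0 * (1.0 - 1.0 / M) * q_func(math.sqrt(6.0 * g / (M ** 2 - 1)))

# M = 2 collapses to the BPSK formula Q(sqrt(2*Es/N0)):
assert abs(sep_mpam_awgn(2, 10) - q_func(math.sqrt(20.0))) < 1e-15
```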
A new diamond bit for extra-hard, compact and nonabrasive rock formation
Institute of Scientific and Technical Information of China (English)
王佳亮; 张绍和
2015-01-01
A new impregnated diamond bit was designed to solve the slipping problem that arises when impregnated diamond bits are used in extra-hard, compact and nonabrasive rock formations. SiC grits were added to the matrix; because they are held only weakly, they are easily exfoliated from the matrix surface, keeping it non-smooth. Three Φ36/24 mm laboratory bits were manufactured for a laboratory drilling test on zirconia-corundum refractory brick. The test indicates that the abrasive resistance of the bit working layer depends on the SiC concentration: the higher the concentration, the weaker the abrasive resistance of the matrix. The new impregnated diamond bit was applied in drilling construction at a mining area in Jiangxi province, China. Field application indicates that the ROP (rate of penetration) of the new bit is approximately two to three times that of common bits. Compared with common bits, the surface of the new bit shows typical abrasive wear characteristics, and the renewal rate of the diamonds is well matched to the wear rate of the matrix.
An Empirical Analysis of Requantization Errors for Recompressed JPEG Images
Directory of Open Access Journals (Sweden)
B.VINOTH KUMAR
2011-12-01
Images from sources such as digital cameras and the internet are commonly in the JPEG format. There is a strong need to recompress JPEG images in order to satisfy space constraints and to transmit images over limited bandwidth. Several techniques have been developed for recompressing JPEG images in order to achieve a low bit rate with good visual quality. In this paper, we concentrate on requantization as the means of recompression. We analyze empirically the occurrence of requantization errors for the normal rounding technique and, based on this analysis, propose an enhanced rounding technique for requantization of JPEG images. The resulting images are generally smaller in size and have improved perceptual image quality over the normal rounding technique. We compare recompression results for standard 256x256 gray-scale benchmark images using quality measures such as image size, compression ratio, bits per pixel and peak signal-to-noise ratio (PSNR).
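The normal-rounding requantization the paper analyzes can be stated in a few lines; a sketch of one DCT coefficient's round trip (the paper's enhanced rounding rule is not reproduced):

```python
def requantize(coeff, q1, q2):
    """One DCT coefficient through compress -> decode -> recompress,
    using plain ('normal') rounding at each quantization step."""
    level1 = round(coeff / q1)    # first-generation quantizer index
    dequant = level1 * q1         # value the first decoder reconstructs
    level2 = round(dequant / q2)  # requantize with the coarser step q2
    return level2 * q2            # value the second decoder reconstructs

# Requantization error relative to the first-generation reconstruction:
first_gen = round(57 / 4) * 4           # 56
second_gen = requantize(57, q1=4, q2=10)
err = abs(second_gen - first_gen)
```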
Generalized Punctured Convolutional Codes with Unequal Error Protection
Directory of Open Access Journals (Sweden)
Marcelo Eduardo Pellenz
2009-01-01
We conduct a code search restricted to the recently introduced class of generalized punctured convolutional codes (GPCCs) to find good unequal error protection (UEP) convolutional codes for a prescribed minimal trellis complexity. The trellis complexity is taken to be the number of symbols per information bit in the "minimal" trellis module for the code. The GPCC class has been shown to possess codes with good distance properties under this decoding complexity measure. New good UEP convolutional codes and their respective effective free distances are tabulated for a variety of code rates and "minimal" trellis complexities. These codes can be used in several applications that require different levels of protection for their bits, such as the hierarchical digital transmission of video or images.
The application of iterative equalisation to high data rate wireless personal area networks
Lillie, AG; Nix, AR; Fletcher, PN; McGeehan, JP
2002-01-01
There is increasing demand for broadband wireless personal area networking devices, mainly fuelled by mobile multimedia applications such as wireless home networks. This paper investigates the suitability of iterative equalisation as a means of achieving low bit and packet error rates in a future high data rate personal area network standard. Baseband simulation results demonstrate the powerful ISI mitigation and error correcting performance of such receivers when operating in representative ...
Fully photonics-based physical random bit generator.
Li, Pu; Sun, Yuanyuan; Liu, Xianglian; Yi, Xiaogang; Zhang, Jianguo; Guo, Xiaomin; Guo, Yanqiang; Wang, Yuncai
2016-07-15
We propose a fully photonics-based approach for ultrafast physical random bit generation. This approach exploits a compact nonlinear loop mirror (called a terahertz optical asymmetric demultiplexer, TOAD) to sample the chaotic optical waveform in the all-optical domain and then generate random bit streams through comparison with a threshold level. This method can efficiently overcome the electronic jitter bottleneck confronted by existing random bit generators (RBGs) in practice. A proof-of-concept experiment demonstrates that this method can continuously extract 5 Gb/s random bit streams from the chaotic output of a distributed feedback laser diode (DFB-LD) with optical feedback. The generation rate is limited by the bandwidth of the optical chaos used. PMID:27420532
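The sample-and-threshold step can be mimicked in software on any noisy waveform; a toy sketch with Gaussian pseudo-noise standing in for the chaotic laser output (the threshold and signal model are illustrative assumptions, not the paper's optics):

```python
import random

def sample_to_bits(waveform, threshold):
    """Emit one bit per sample by comparing the amplitude with a
    threshold -- the comparison the TOAD-sampled scheme performs
    optically (the optics themselves are not modeled here)."""
    return [1 if s > threshold else 0 for s in waveform]

random.seed(1)  # reproducible pseudo-noise standing in for laser chaos
wave = [random.gauss(0.0, 1.0) for _ in range(10000)]
bits = sample_to_bits(wave, threshold=0.0)
bias = abs(sum(bits) / len(bits) - 0.5)  # crude monobit balance check
```

In a real generator the threshold must be tuned so the 0/1 bias stays small, which is what the monobit check estimates.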
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIP
Directory of Open Access Journals (Sweden)
Mohammad Ayoub Khan
2011-09-01
It is widely accepted that Network-on-Chip (NoC) represents a promising solution for forthcoming complex embedded systems. Current SoC solutions are built from heterogeneous hardware and software components integrated around a complex communication infrastructure, and the crossbar is a vital component of any NoC router. In this work, we have designed a crossbar interconnect for serial-bit and 128-bit parallel data transfer, and we compare power and delay for serial and parallel transfer through the crossbar switch. The design is implemented in 0.18 micron TSMC technology. The bit rate achieved in serial transfer is low compared with parallel data transfer. The simulation results show that the critical path delay is smaller for parallel data transfer, but the power dissipation is higher.
Energy Technology Data Exchange (ETDEWEB)
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14
Studies of inclusive direct-{gamma} production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between {radical}s = 50 and 500 GeV. Also, rates were computed for direct-{gamma} + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
Energy Technology Data Exchange (ETDEWEB)
Wu, Kesheng
2007-08-02
An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
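The word-aligned run-length idea behind WAH can be sketched at a high level. The toy encoder below groups a bitmap into 31-bit chunks and collapses uniform chunks into counted fills; real WAH packs these tags into 32-bit machine words and performs logical operations directly on the compressed form:

```python
WORD = 31  # WAH stores 31 bitmap bits per 32-bit machine word

def wah_encode(bits):
    """Toy word-aligned encoder: split the bitmap into 31-bit chunks;
    all-0 or all-1 chunks become counted 'fill' runs, anything else is
    kept verbatim as a 'literal'.  Illustrative only, not the actual
    WAH bit layout."""
    words = []
    for i in range(0, len(bits), WORD):
        chunk = tuple(bits[i:i + WORD])
        if len(chunk) == WORD and len(set(chunk)) == 1:
            bit = chunk[0]
            if words and words[-1][0] == 'fill' and words[-1][1] == bit:
                words[-1] = ('fill', bit, words[-1][2] + 1)  # extend run
            else:
                words.append(('fill', bit, 1))
        else:
            words.append(('lit', chunk))
    return words

# 93 zero bits collapse into a single 3-word fill run:
assert wah_encode([0] * 93) == [('fill', 0, 3)]
```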
Reconfigurable random bit storage using polymer-dispersed liquid crystal
Horstmeyer, Roarke; Yang, Changhuei
2014-01-01
We present an optical method of storing random cryptographic keys, at high densities, within an electronically reconfigurable volume of polymer-dispersed liquid crystal (PDLC) film. We demonstrate how temporary application of a voltage above PDLC's saturation threshold can completely randomize (i.e., decorrelate) its optical scattering potential in less than a second. A unique optical setup is built around this resettable PDLC film to non-electronically save many random cryptographic bits, with minimal error, over a period of one day. These random bits, stored at an unprecedented density (10 Gb per cubic millimeter), can then be erased and transformed into a new random key space in less than one second. Cryptographic applications of such a volumetric memory device include use as a crypto-currency wallet and as a source of resettable "fingerprints" for time-sensitive authentication.
Algorithm of 32-bit Data Transmission Among Microcontrollers Through an 8-bit Port
Midriem Mirdanies; Hendri Maja Saputra; Estiko Rijanto
2015-01-01
This paper proposes an algorithm for 32-bit data transmission among microcontrollers through one 8-bit port. The method was motivated by the need to overcome microcontroller I/O limitations and to meet data transmission requirements of more than 10 bits. In this paper, the use of an 8-bit port is optimized for 32-bit data transmission using unsigned long integer, long integer, and float types. The 32-bit data is extracted into binary form, then sent through ...
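The byte-extraction step the abstract describes can be sketched with Python's struct module; the little-endian framing below is an assumption for illustration, not the paper's actual wire protocol:

```python
import struct

def pack32(value, kind):
    """Split one 32-bit value into four bytes for sequential writes to
    an 8-bit port.  kind: 'I' unsigned 32-bit, 'i' signed 32-bit,
    'f' 32-bit float.  Little-endian order is assumed here."""
    return struct.pack('<' + kind, value)

def unpack32(payload, kind):
    """Reassemble the four received bytes into the original value."""
    return struct.unpack('<' + kind, payload)[0]

frame = pack32(-123456, 'i')
assert len(frame) == 4               # four 8-bit port writes
assert unpack32(frame, 'i') == -123456
```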
CAMAC based 4-channel 12-bit digitizer
International Nuclear Information System (INIS)
With the development of fusion research, a large number of diagnostics are used to understand the complex behaviour of plasma. During a discharge, several diagnostics demand a high sampling rate and high bit resolution to capture rapid changes in plasma parameters. For such fast diagnostics, a 4-channel simultaneously sampling, high-speed, 12-bit CAMAC digitizer has been designed and developed with several features important for CAMAC-based nuclear instrumentation. The module has an independent ADC per channel for simultaneous sampling and digitization, and 512 ksamples of RAM per channel for on-board storage. The digitizer is designed for event-based acquisition, and the acquisition window provides post-trigger as well as pre-trigger (software-selectable) data that are useful for analysis. It is a transient digitizer and can be operated either in pre/post-trigger mode or in burst mode. The record mode and the active memory size are selected through software commands to suit the current application. The module can acquire data at a high sampling rate for short discharges (e.g. 512 ms at 1 MSPS) and at a low sampling rate for long discharges (e.g. 512 s at 1 kSPS). This paper describes the design of the digitizer module, the development of VHDL code for the hardware logic, the graphical user interface (GUI), and important features of the module from an application point of view. The digitizer has CPLD-based hardware logic, which provides flexibility in configuring the module for different sampling rates and different pre/post-trigger samples through the GUI. The digitizer can be operated with either an internal (testing/acquisition) or an external (synchronized acquisition) clock and trigger. It has differential inputs with a bipolar input range of ±5 V; it is being used at a sampling rate of 1 MSamples Per Second (MSPS) per channel but also supports sampling rates up to 3 MSPS per channel. A
Understanding BitTorrent Through Real Measurements
Mazurczyk, Wojciech; Kopiczko, Pawel
2011-01-01
In this paper the results of the BitTorrent measurement study are presented. Two sources of BitTorrent data were utilized: meta-data files that describe the content of resources shared by BitTorrent users and the logs of one of the currently most popular BitTorrent clients, µTorrent. µTorrent is founded upon a rather newly released UDP-based µTP protocol that is claimed to be more efficient than TCP-based clients. Experimental data have been collected for fifteen days from the po...
Insecurity Of Imperfect Quantum Bit Seal
Chau, H. F.
2005-01-01
Quantum bit seal is a way to encode a classical bit quantum mechanically so that everyone can obtain non-zero information on the value of the bit. Moreover, such an attempt should have a high chance of being detected by an authorized verifier. Surely, a reader looks for a way to get the maximum amount of information on the sealed bit and at the same time to minimize her chance of being caught. And a verifier picks a sealing scheme that maximizes his chance of detecting any measurement of the ...
LENUS (Irish Health Repository)
Chadwick, Liam
2012-03-12
Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care, but a number of deficiencies have been identified in the method. A new method called Systems and Error Analysis Bundle for Health Care (SEABH) was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.
Directory of Open Access Journals (Sweden)
Rashid A. Fayadh
2014-01-01
When receiving high data rates with ultra-wideband (UWB) technology, many users experience multiple-user interference and intersymbol interference under multipath reception. Rake receiver structures have been proposed to enhance receiver capability by reducing the bit error probability (Pe), thereby providing better performance for indoor and outdoor multipath receivers. As a result, several rake structures have been proposed in the past to reduce the number of resolvable paths that must be estimated and combined. To this end, we suggest two maximal ratio combiners based on the pulse sign separation technique, the pulse sign separation selective combiner (PSS-SC) and the pulse sign separation partial combiner (PSS-PC), which reduce complexity with fewer fingers and improve system performance. In the combiners, a comparator compares the quantities of positive and negative pulses to decide whether the transmitted bit was 1 or 0. The Pe was obtained by simulation for multipath environments with impulse radio time-hopping binary phase shift keying (TH-BPSK) modulation, and the results were compared with those of conventional selective combiners (C-SCs) and conventional partial combiners (C-PCs).
Pightling, Arthur W; Petronella, Nicholas; Pagotto, Franco
2014-01-01
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should
Directory of Open Access Journals (Sweden)
G. Vaikundam
2015-04-01
Beamforming is a signal processing technique that focuses the transmitted energy so that maximum energy is radiated toward the intended destination and the communication range is enhanced. Data rate improvement in transmit beamforming can be achieved with adaptive modulation. Though modulation adaptation is possible under zero-mean phase error, it is difficult under non-zero-mean Gaussian distributed phase error conditions. Phase errors occur due to channel estimation inaccuracies, estimation delay, sensor drift, quantized feedback, etc., resulting in increased outage probability and bit error rate. Preprocessing of beamforming weights adjusted by a Sample Mean Estimate (SME) enables adaptive modulation; however, under large phase error variation, the SME method fails. Hence, in this paper a Population Mean Estimate (PME) approach is proposed to resolve these drawbacks for a Rayleigh flat fading channel with white Gaussian noise. To correct any population mean error, a Least Mean Square correction algorithm is proposed; tested with up to 80% error in the PME, the corrected error falls within 10%. Simulation results for a distributed beamforming sensor array indicate that the proposed method performs better than the SME-based existing methods under worst-case phase error distributions.
International Nuclear Information System (INIS)
A measure of the reliability of a transmission protocol is the likelihood that undetected errors in the transmitted data will occur. The author considers the effect of single bit errors on the error-detection mechanisms in the HDLC as defined in ISO Standard 3309. It is shown that the HDLC block synchronisation method is relatively vulnerable to the generation of undetected errors. Simple but effective methods of improvement within standard HDLC are to use fixed-length data bytes (e.g. of 8 bits), to give block length as part of the data, and to use a separate flag at the beginning and end of every block. (G.F.F.)
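The vulnerability noted above lies in the flag-based block synchronisation rather than in the checksum itself: within a correctly delimited frame, the frame check sequence catches any single bit flip. A minimal MSB-first CRC sketch with the HDLC polynomial (real HDLC processes bits LSB-first, which this illustration does not reproduce):

```python
def crc16_ccitt(data, crc=0xFFFF):
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1,
    the polynomial of the HDLC frame check sequence (MSB-first sketch,
    not the exact HDLC bit ordering)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

frame = b"example HDLC payload"
fcs = crc16_ccitt(frame)
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip one bit
# A CRC with this polynomial detects every single-bit error:
assert crc16_ccitt(corrupted) != fcs
```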
Directory of Open Access Journals (Sweden)
Juan Mario Torres Nova
2010-05-01
Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes frequently used in radio communication systems; however, their benefits (spectral efficiency, low bit error rate, low inter-symbol interference, etc.) are interdependent: optimising one parameter creates problems for another. For example, the GMSK scheme succeeds in reducing bandwidth by introducing a Gaussian filter into an MSK (minimum shift keying) modulator, in exchange for increased inter-symbol interference in the system. The DBPSK scheme leads to lower error probability while occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver's bit delay when recovering a signal.
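The error-probability side of this trade-off is visible already in the textbook AWGN formulas; a sketch comparing DBPSK with coherent BPSK as a reference (GMSK itself has no comparably simple closed form):

```python
import math

def ber_dbpsk_awgn(ebno_db):
    """Textbook DBPSK bit error rate over AWGN: 0.5 * exp(-Eb/N0)."""
    g = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.exp(-g)

def ber_bpsk_awgn(ebno_db):
    """Coherent BPSK reference: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    g = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(g))

# Differential detection pays a BER penalty relative to coherent BPSK:
assert ber_bpsk_awgn(8.0) < ber_dbpsk_awgn(8.0)
```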
Noise, errors and information in quantum amplification
D'Ariano, G M; Maccone, L
1997-01-01
We analyze and compare the characterization of a quantum device in terms of noise, transmitted bit error rate (BER) and mutual information, showing how the noise description is meaningful only for Gaussian channels. After reviewing the description of a quantum communication channel, we study the insertion of an amplifier. We focus attention on the case of direct detection, where the linear amplifier has a 3 dB noise figure, which is usually considered an unsurpassable limit, referred to as the standard quantum limit (SQL). Both noise and BER could be reduced using an ideal amplifier, which is feasible in principle. However, a reduction of noise beyond the SQL does not generally correspond to an improvement of the BER or of the mutual information. This is the case for a laser amplifier, where saturation can greatly reduce the noise figure, although there is no corresponding improvement of the BER. This mechanism is illustrated on the basis of Monte Carlo simulations.
Performance analyses of subcarrier BPSK modulation over M turbulence channels with pointing errors
Ma, Shuang; Li, Ya-tian; Wu, Jia-bin; Geng, Tian-wen; Wu, Zhiyong
2016-05-01
An aggregated channel model is obtained by fitting a Weibull distribution, which includes the effects of atmospheric attenuation, M-distributed atmospheric turbulence and nonzero-boresight pointing errors. With this approximate channel model, the bit error rate (BER) and the ergodic capacity of free-space optical (FSO) communication systems utilizing subcarrier binary phase-shift keying (BPSK) modulation are analyzed. A closed-form expression for the BER is derived by using the generalized Gauss-Laguerre quadrature rule, and bounds on the ergodic capacity are discussed. Monte Carlo simulation is provided to confirm the validity of the BER expressions and the ergodic capacity bounds.
Rotary drill bit with rotary cutters
Energy Technology Data Exchange (ETDEWEB)
Brandenstein, M.; Ernst, H.M.; Kunkel, H.; Olschewski, A.; Walter, L.
1981-03-31
A rotary drill bit is described that has a drill bit body, at least one trunnion projecting from the drill bit body, and a rotary cutter supported on at least one pair of radial rolling bearings on the trunnion. The rolling elements of at least one bearing are guided, on at least one axial end facing the drill bit body, in an outer bearing race groove incorporated in the bore of the rotary cutter. The inner bearing race groove is formed on the trunnion for the rolling elements of the radial roller bearing. A filling opening is provided for assembly of the rolling elements, comprising a channel which extends through the drill bit body and trunnion and is essentially axially oriented, having one terminal end adjacent the inner bearing race groove, and at least one filler piece for sealing the opening. The filling opening is arranged to provide a common filling means for each radial bearing.
Rotary drill bit with rotary cutter
Energy Technology Data Exchange (ETDEWEB)
Brandenstein, M.; Kunkel, H.; Olschewski, A.; Walter, L.
1981-03-17
A rotary drill bit having a drill bit body and at least one trunnion projecting from the drill bit body and a rotary cutter supported on at least one radial roller bearing on the trunnion. The rolling elements of the bearing are guided on at least one axial end facing the drill bit body in an outer bearing race groove incorporated in the bore of the rotary cutter. The inner bearing race groove is formed on the trunnion for the rolling elements of the radial roller bearing. At least one filling opening is provided which extends through the drill bit body and trunnion and is essentially axially oriented having one terminal end adjacent the inner bearing race groove and at least one filler piece for sealing the opening.
International Nuclear Information System (INIS)
We construct discrete space-time coordinates separated by the Lorentz-invariant intervals h/mc in space and h/mc² in time using discrimination (XOR) between pairs of independently generated bit-strings; we prove that if this space is homogeneous and isotropic, it can have only 1, 2 or 3 spatial dimensions once we have related time to a global ordering operator. On this space we construct exact combinatorial expressions for free-particle wave functions, taking proper account of the interference between indistinguishable alternative paths created by the construction. Because the end-points of the paths are fixed, they specify completed processes; our wave functions are ''born collapsed''. A convenient way to represent this model is in terms of complex amplitudes whose squares give the probability for a particular set of observable processes to be completed. For distances much greater than h/mc and times much greater than h/mc² our wave functions can be approximated by solutions of the free-particle Dirac and Klein-Gordon equations. Using an eight-counter paradigm we relate this construction to scattering experiments involving four distinguishable particles, and indicate how this can be used to calculate electromagnetic and weak scattering processes. We derive a non-perturbative formula relating relativistic bound- and resonant-state energies to mass ratios and coupling constants, equivalent to our earlier derivation of the Bohr relativistic formula for hydrogen. Using the Fermi-Yang model of the pion as a relativistic bound state containing a nucleon-antinucleon pair, we find that (G²_πN)² = (2m_N/m_π)² − 1. 21 refs., 1 fig
COMPARISON OF APPLICATIONS USING THE CAMELLIA METHOD WITH 128-BIT AND 256-BIT KEYS
Directory of Open Access Journals (Sweden)
Lanny Sutanto
2014-01-01
Full Text Available The rapid development of the Internet today makes it easy to exchange data, which leads to a high risk of data piracy. One way to secure data is to use Camellia cryptography. Camellia is known as a method whose encryption and decryption times are fast. The Camellia method supports three key sizes: 128 bits, 192 bits, and 256 bits. This application was created using the C++ programming language with a Visual Studio 2010 GUI. This research compares the smallest and largest key sizes on files with the extensions .txt, .doc, .docx, .jpg, .mp4, .mkv and .flv. The application was made to compare the time and the level of security when using a 128-bit key and a 256-bit key. The comparison is done by comparing the avalanche-effect security values of the 128-bit key and the 256-bit key.
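The avalanche effect compared in the abstract above can be measured as the percentage of output bits that flip when the input changes slightly. The sketch below computes that percentage; SHA-256 stands in for Camellia so the example stays within the standard library (the paper itself used Camellia with 128/256-bit keys, available through third-party crypto libraries).

```python
import hashlib

def avalanche_percent(block_a: bytes, block_b: bytes) -> float:
    """Percentage of differing bits between two equal-length blocks."""
    diff = sum(bin(x ^ y).count("1") for x, y in zip(block_a, block_b))
    return 100.0 * diff / (8 * len(block_a))

# Stand-in primitive: SHA-256 instead of Camellia, so the sketch is
# self-contained. A good primitive flips roughly half the output bits
# for a one-character change in the input.
c1 = hashlib.sha256(b"plaintext-0").digest()
c2 = hashlib.sha256(b"plaintext-1").digest()  # one character changed
p = avalanche_percent(c1, c2)
```

For a well-designed cipher or hash, `p` clusters around 50%; values far from that would indicate a weak avalanche effect.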
Steganography forensics method for detecting least significant bit replacement attack
Wang, Xiaofeng; Wei, Chengcheng; Han, Xiao
2015-01-01
We present an image forensics method to detect the least significant bit replacement steganography attack. The proposed method provides fine-grained forensics features by using a hierarchical structure that combines pixel correlation and bit-plane correlation. This is achieved via bit-plane decomposition and difference matrices between the least significant bit-plane and each of the others. The generated forensics features capture a susceptibility (changeability) that is drastically altered when the cover image is embedded with data to form a stego image. We developed a statistical model based on the forensics features and used a least squares support vector machine as a classifier to distinguish stego images from cover images. Experimental results show that the proposed method provides the following advantages. (1) The detection rate is noticeably higher than that of some existing methods. (2) It has the expected stability. (3) It is robust to content-preserving manipulations, such as JPEG compression, adding noise, filtering, etc. (4) The proposed method provides satisfactory generalization capability.
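The bit-plane decomposition step the features above are built on can be sketched in a few lines. The toy "image" and the plain element-wise difference are illustrative assumptions, not the paper's full feature construction; they only show why LSB replacement disturbs plane 0 while leaving the other planes intact.

```python
def bit_plane(pixels, k):
    """Extract bit-plane k (0 = least significant) from a flat list
    of 8-bit pixel values."""
    return [(p >> k) & 1 for p in pixels]

def plane_difference(pixels, j, k):
    """Element-wise difference between two bit-planes; forensics
    features of the kind described above are derived from such
    difference matrices (this is only the decomposition step)."""
    a, b = bit_plane(pixels, j), bit_plane(pixels, k)
    return [x - y for x, y in zip(a, b)]

cover = [12, 200, 37, 129]      # toy 4-pixel "image"
stego = [p ^ 1 for p in cover]  # LSB replacement touches plane 0 only
```

Comparing planes of `cover` and `stego` shows the attack's footprint: plane 0 changes at every embedded pixel, while planes 1-7 are untouched.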
BitTorrent Request Message Models
Erman, David; Popescu, Adrian
2005-01-01
BitTorrent, a replicating Peer-to-Peer (P2P) file sharing system, has become extremely popular over the last years. According to Cachelogic, the BitTorrent traffic volume has increased from 26% to 52% of the total P2P traffic volume during the first half of 2004. This paper reports on new results obtained on modelling and analysis of BitTorrent traffic collected at Blekinge Institute of Technology (BTH) as well as a local Internet Service Provider (ISP). In particular, we report on new reques...
Reinforcement Learning in BitTorrent Systems
Izhak-Ratzin, Rafit; Park, Hyunggon; van der Schaar, Mihaela
2010-01-01
Recent research efforts have shown that the popular BitTorrent protocol does not provide fair resource reciprocation and may allow free-riding. In this paper, we propose a BitTorrent-like protocol that replaces the peer selection mechanisms in the regular BitTorrent protocol with a novel reinforcement learning (RL) based mechanism. Due to the inherent operation of P2P systems, which involves repeated interactions among peers over a long period of time, the peers can efficiently identify free-r...
Forecasting Full-Path Network Congestion Using One Bit Signalling
Woldeselasie, M.; Clegg, R. G.; Rio, M.
2013-01-01
In this paper, we propose a mechanism for packet marking called Probabilistic Congestion Notification (PCN). This scheme makes use of the 1-bit Explicit Congestion Notification (ECN) field in the Internet Protocol (IP) header. It allows the source to estimate the exact level of congestion at each intermediate queue. By knowing this, the source could take avoiding action either by adapting its sending rate or by using alternate routes. The estimation mechanism makes use of time series analysis...
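A simplified stand-in for the marking idea above: if each hop sets the 1-bit ECN field independently with probability equal to its queue occupancy, the source can estimate aggregate path congestion from the fraction of marked packets. The queue loads and the marking rule here are assumptions for illustration, not the paper's exact PCN estimator (which uses time series analysis).

```python
import random

def mark_packet(queue_loads, rng):
    """Each hop sets the 1-bit ECN field with probability equal to
    its queue occupancy (simplified probabilistic marking)."""
    return any(rng.random() < p for p in queue_loads)

def estimate_congestion(queue_loads, n_packets=100_000, seed=7):
    """Source-side estimate: fraction of packets arriving marked."""
    rng = random.Random(seed)
    marked = sum(mark_packet(queue_loads, rng) for _ in range(n_packets))
    return marked / n_packets

loads = [0.2, 0.5]                    # two intermediate queues
expected = 1 - (1 - 0.2) * (1 - 0.5)  # 0.6: prob. of at least one mark
est = estimate_congestion(loads)
```

With independent marking the marked fraction converges to 1 − Π(1 − p_i), so the source learns the combined congestion level of the path from a single header bit.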
Krone, Stefan; Fettweis, Gerhard
2013-01-01
1-bit analog-to-digital conversion is very attractive for low-complexity communications receivers. A major drawback is, however, the small spectral efficiency when sampling at symbol rate. This can be improved through oversampling by exploiting the signal distortion caused by the transmission channel. This paper analyzes the achievable data rate of band-limited communications channels that are subject to additive noise and inter-symbol-interference with 1-bit quantization and oversampling at ...
An efficient bit-loading algorithm with peak BER constraint for the band-extended PLC
Maiga, Ali; Baudais, Jean-Yves; Hélard, Jean-François
2009-01-01
ISBN: 978-1-4244-2936-3 International audience Powerline communications (PLC) have become a viable local area network (LAN) solution for in-home networks. In order to achieve high bit rate over powerline, the current technology bandwidth is increased up to 100 MHz within the European project OMEGA. In this paper, an efficient bit-loading algorithm with peak BER constraint is proposed. This algorithm tries to maximize the overall data rate based on linear precoded discrete multitone (LP-...
Adaptive Subcarrier and Bit Allocation for Downlink OFDMA System with Proportional Fairness
Directory of Open Access Journals (Sweden)
Sudhir B. Lande
2011-11-01
Full Text Available This paper investigates adaptive subcarrier and bit allocation algorithms for OFDMA systems. To minimize the overall transmitted power, we propose a novel adaptive subcarrier and bit allocation algorithm based on channel state information (CSI) and quality state information (QSI). A suboptimal approach that performs subcarrier allocation and bit loading separately is proposed. It is shown that the proposed algorithm obtains a near-optimal solution with low complexity compared to other conventional algorithms. We study the problem of finding an optimal subcarrier and power allocation strategy for downlink communication to multiple users in an OFDMA-based wireless system. Assuming knowledge of the instantaneous channel gains for all users, we propose a multiuser OFDMA subcarrier and bit allocation algorithm to minimize the total transmit power. This is done by assigning each user a set of subcarriers and by determining the number of bits and the transmit power level for each subcarrier. The objective is to minimize the total transmitted power over the entire network while satisfying the application-layer and physical-layer requirements. We formulate this as a constrained optimization problem and present centralized algorithms. The simulation results show that our approach yields an efficient assignment of subcarriers and transmitter power levels in terms of the energy required for transmitting each bit of information. To address this need, we also present a bit-loading algorithm for allocating subcarriers and bits in order to satisfy the rate requirements of the links.
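A common greedy baseline for this kind of bit loading (Hughes-Hartogs style) places one bit at a time on the subcarrier where it costs the least incremental power, using the standard approximation that b bits on a subcarrier with gain g cost roughly (2^b − 1)/g, so the (b+1)-th bit costs 2^b/g. The gains and bit target below are made up; the paper's algorithm additionally accounts for QSI and per-user fairness.

```python
import heapq

def greedy_bit_loading(gains, target_bits, max_bits=8):
    """Hughes-Hartogs-style greedy loading: repeatedly place the next
    bit on the subcarrier where it needs the least extra power.
    Illustrative sketch, not the paper's CSI/QSI algorithm."""
    bits = [0] * len(gains)
    # heap of (incremental power for the next bit, subcarrier index);
    # the first bit on subcarrier i costs (2^1 - 1)/g_i = 1/g_i
    heap = [(1.0 / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    total_power = 0.0
    for _ in range(target_bits):
        cost, i = heapq.heappop(heap)
        total_power += cost
        bits[i] += 1
        if bits[i] < max_bits:
            # going from b to b+1 bits costs 2^b / g
            heapq.heappush(heap, ((2 ** bits[i]) / gains[i], i))
    return bits, total_power

bits, power = greedy_bit_loading([4.0, 1.0, 0.25], target_bits=6)
```

Strong subcarriers absorb most bits: with gains 4.0, 1.0 and 0.25, the six bits land as [4, 2, 0], and the weakest subcarrier is left unused.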
FastBit: Interactively Searching Massive Data
Energy Technology Data Exchange (ETDEWEB)
Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat,; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming
2009-06-23
As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
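The binning and bitmap encoding mentioned above can be illustrated with an uncompressed toy index: one bitmap per value bin, with a range query answered by OR-ing bitmaps. The temperatures and bin edges are invented for the example; FastBit itself adds WAH compression and several encodings on top of this idea.

```python
def build_bitmap_index(values, bin_edges):
    """Equality-encoded bitmap index with range binning: bitmap b has
    a 1 for every row whose value falls in [edges[b], edges[b+1])."""
    bitmaps = [0] * (len(bin_edges) - 1)
    for row, v in enumerate(values):
        for b in range(len(bin_edges) - 1):
            if bin_edges[b] <= v < bin_edges[b + 1]:
                bitmaps[b] |= 1 << row
                break
    return bitmaps

def query_range(bitmaps, lo_bin, hi_bin):
    """Rows whose value lies in bins [lo_bin, hi_bin], as a bitmask,
    computed with bitwise ORs instead of scanning the data."""
    mask = 0
    for b in range(lo_bin, hi_bin + 1):
        mask |= bitmaps[b]
    return mask

temps = [15.2, 30.1, 22.7, 8.4, 27.9]
edges = [0, 10, 20, 30, 40]
idx = build_bitmap_index(temps, edges)
hits = query_range(idx, 2, 3)  # rows with 20 <= temp < 40
```

The query touches only the bitmaps, never the raw values, which is what lets bitmap indexes answer structured queries so much faster than row scans.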
A Simple Quantum Bit Commitment Protocol
Sheikholeslam, S Arash
2011-01-01
In this paper, we introduce a new quantum bit commitment method which is secure against entanglement attacks. Some cheating strategies are discussed and shown to be ineffective against the proposed method.
Curtis, Fred
2001-01-01
Existing planar map encodings neglect maps with loops. The presented scheme encodes any connected planar map in 4 bits/edge. Encoding and decoding time is O(edges). Implicit face/edge/vertex orderings and canonical encodings are discussed.
Factorization of a 768-bit RSA modulus
Kleinjung, T; Aoki, K.; Franke, J.; Lenstra, A.K.; Thomee, E; Bos, Joppe,; Gaudry, P.; Kruppa, Alexander; Montgomery, P. L.; Osvik, D.A.; Riele, te, H.; Timofeev, Andrey; Zimmermann, P; Rabin, T.
2010-01-01
The original publication is available at www.springerlink.com International audience This paper reports on the factorization of the 768-bit number RSA-768 by the number field sieve factoring method and discusses some implications for RSA.
Provably secure experimental bit string generation
International Nuclear Information System (INIS)
Full text: Coin tossing is a cryptographic primitive in which two parties who do not trust each other wish to flip a coin. This is impossible using only classical communication. Non-trivial coin tossing is possible using quantum communication, but it can be shown that when tossing a single coin the amount of randomness of the coin is strongly limited. We showed that, on the contrary, if the parties want to toss many coins, then using quantum communication they can achieve arbitrarily high levels of randomness. We call this bit string generation. Based on these results we realized an experimental implementation of bit string generation in which a string of bits is obtained that is provably more random than could be achieved using classical communication. This is thus the first demonstration of a fundamentally new concept: the possibility of generating random bits with an adversary limited only by the laws of physics. (author)
Error-Correcting Data Structures
de Wolf, Ronald
2008-01-01
We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.
BitTorrent's Mainline DHT Security Assessment
Timpanaro, Juan Pablo; Cholez, Thibault; Chrisment, Isabelle; Festor, Olivier
2011-01-01
BitTorrent is a widely deployed P2P file sharing protocol, extensively used to distribute digital content and software updates, among others. Recent actions against torrent and tracker repositories have fostered the move towards a fully distributed solution based on a distributed hash table to support both torrent search and tracker implementation. In this paper we present a security study of the main decentralized tracker in BitTorrent, commonly known as the Mainline DHT. We show that the lac...
Improved Design of Unequal Error Protection LDPC Codes
Directory of Open Access Journals (Sweden)
Sandberg Sara
2010-01-01
Full Text Available We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.
A single-ended 10-bit 200 kS/s 607 μW SAR ADC with an auto-zeroing offset cancellation technique
Weiru, Gu; Yimin, Wu; Fan, Ye; Junyan, Ren
2015-10-01
This paper presents a single-ended 8-channel 10-bit 200 kS/s 607 μW synchronous successive approximation register (SAR) analog-to-digital converter (ADC) using HLMC 55 nm low leakage (LL) CMOS technology with a 3.3 V/1.2 V supply voltage. In conventional binary-encoded SAR ADCs the total capacitance grows exponentially with resolution. In this paper a CR hybrid DAC is adopted to reduce both capacitance and core area. The capacitor array resolves 4 bits and the other 6 bits are resolved by the resistor array. The 10-bit data is acquired by thermometer encoding to reduce the probability of the DNL errors which are typically present in binary-weighted architectures. An auto-zeroing offset cancellation technique is used that reduces the offset to 0.286 mV. The prototype 10-bit SAR ADC was fabricated in HLMC 55 nm CMOS technology with a core area of 167 × 87 μm². It achieves a sampling rate of 200 kS/s and a low power dissipation of 607 μW, operating at a 3.3 V analog supply voltage and a 1.2 V digital supply voltage. At an input frequency of 10 kHz the signal-to-noise-and-distortion ratio (SNDR) is 60.1 dB and the spurious-free dynamic range (SFDR) is 68.1 dB. The measured DNL is +0.37/−0.06 LSB and the INL is +0.58/−0.22 LSB. Project supported by the National Science and Technology Support Program of China (No. 2012BAI13B07) and the National Science and Technology Major Project of China (No. 2012ZX03001020-003).
Optimal bounds for quantum bit commitment
Chailloux, André
2011-01-01
Bit commitment is a fundamental cryptographic primitive with numerous applications. Quantum information allows for bit commitment schemes in the information-theoretic setting where no dishonest party can perfectly cheat. The previously best-known quantum protocol, by Ambainis, achieved a cheating probability of at most 3/4 [Amb01]. On the other hand, Kitaev showed that no quantum protocol can have cheating probability less than 1/√2 [Kit03] (his lower bound on coin flipping can be easily extended to bit commitment). Closing this gap has since been an important open question. In this paper, we provide the optimal bound for quantum bit commitment. We first show a lower bound of approximately 0.739, improving Kitaev's lower bound. We then present an optimal quantum bit commitment protocol which has cheating probability arbitrarily close to 0.739. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2 + eps in order to achieve a quantum bit commitment protocol with ...
Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan
2015-01-01
Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. An MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter-symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems and overcoming the effects of MAI. In this paper a low-complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic maximum a posteriori (Log-MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low-complexity decoding, by mitigating the detrimental effects of MAI. PMID:25714917
Experimental Quantum Error Rejection for Quantum Communication
Chen, Yu-Ao; Zhang, An-Ning; Zhao, Zhi; Zhou, Xiao-Qi; Pan, Jian-Wei
2005-01-01
We report an experimental realization of bit-flip error rejection for error-free transfer of quantum information through a noisy quantum channel. In the experiment, an unknown state to be transmitted is encoded into a two-photon entangled state, which is then sent through an engineered noisy quantum channel. At the final stage, the unknown state is decoded by a quantum parity measurement, successfully rejecting the erroneous transmission over the noisy quantum channel.
Influence of pseudorandom bit format on the direct modulation performance of semiconductor lasers
Indian Academy of Sciences (India)
Moustafa Ahmed; Safwat W Z Mahmoud; Alaa A Mohmoud
2012-12-01
This paper investigates the direct gigabit modulation characteristics of semiconductor lasers using the return to zero (RZ) and non-return to zero (NRZ) formats. The modulation characteristics include the frequency chirp, eye diagram, and turn-on jitter (TOJ). The differences in the relative contributions of the intrinsic noise of the laser and the pseudorandom bit-pattern effect to the modulation characteristics are presented. We introduce an approximate estimation to the transient properties that control the digital modulation performance, namely, the modulation bit rate and the minimum (setting) bit rate required to yield a modulated laser signal free from the bit pattern effect. The results showed that the frequency chirp increases with the increase of the modulation current under both RZ and NRZ formats, and decreases remarkably with the increase of the bias current. The chirp is higher under the RZ modulation format than under the NRZ format. When the modulation bit rate is higher than the setting bit rate of the relaxation oscillation, the laser exhibits enhanced TOJ and the eye diagram is partially closed. TOJ decreases with the increase of the bias and/or modulation current for both formats of modulation.
Bits extraction for palmprint template protection with Gabor magnitude and multi-bit quantization
Mu, Meiru; Shao, Xiaoying; Ruan, QiuQi; Spreeuwers, Luuk; Veldhuis, Raymond
2013-01-01
In this paper, we propose a method of fixed-length binary string extraction (denoted by LogGM_DROBA) from low-resolution palmprint images for developing palmprint template protection technology. In order to extract reliable (stable and discriminative) bits, multi-bit equal-probability-interval quantization ...
Directory of Open Access Journals (Sweden)
Arief Hendra Saptadi
2013-07-01
Full Text Available The use of timers/counters in a microcontroller system offers the advantage of not burdening CPU resources, allowing the CPU to carry out other tasks. With both 8-bit and 16-bit timer/counter options available, the question that arises is which type of timer/counter to use. In the experiment conducted, an AVR ATmega8535 microcontroller minimum system counted a number precisely using two different timers/counters: Timer/Counter 0 (8 bit) and Timer/Counter 1 (16 bit). The overflow conditions occurring in the 8-bit and 16-bit counting cycles activate the OCR0 and OCR1AL registers, respectively. The output signals from port B.3 (OC0) and port D.5 (OC1A) were then fed into an oscilloscope and compared. Observation of the output signals proved that both types of timer/counter have the same speed. It can therefore be concluded that the choice of timer/counter should instead be based on counting-range flexibility, program size, and execution time. Keywords: timer, counter, 8 bit, 16 bit, microcontroller
Institute of Scientific and Technical Information of China (English)
WU Xiaojun; YIN Qinye; ZENG Ming; LI Xing; WANG Jilong
2004-01-01
In very high data-rate wireless application scenarios, Multicarrier code-division multiple access (MC-CDMA) systems including Serial-to-parallel (S/P) converting operation are more applicable. We name them as modified MC-CDMA systems. In this paper, we focus on the blind channel estimation problem of these modified MC-CDMA systems on uplink. Because we can regard each subcarrier in multicarrier communications as a channel, the modified MC-CDMA system accordingly can become a multichannel system. Upon this understanding, we model the multiuser modified MC-CDMA system as a Multiple-input multiple-output (MIMO) system. Successively, based on subspace decomposition technique, we derive a novel blind estimation scheme of uplink channels for multiuser modified MC-CDMA systems. Furthermore, based on perturbation techniques, we derive the analytical approximation of the Mean-squared error (MSE) of this blind channel estimation scheme. Extensive computer simulations illustrate the performance of the proposed algorithm, and simulation results also verify the tightness of the MSE approximation.
Das, Bikramaditya; 10.5121/jgraphhoc.2010.2104
2010-01-01
For high data rate ultra-wideband communication systems, a performance comparison of Rake, MMSE and Rake-MMSE receivers is attempted in this paper. Further, a detailed study on Rake-MMSE time domain equalizers is carried out, taking into account all the important parameters such as the effect of the number of Rake fingers and equalizer taps on the error rate performance. This receiver combats inter-symbol interference by taking advantage of both the Rake and equalizer structures. The bit error rate performances are investigated using MATLAB simulation on IEEE 802.15.3a defined UWB channel models. Simulation results show that the bit error rate probability of the Rake-MMSE receiver is much better than that of the Rake receiver and the MMSE equalizer. A study on non-line-of-sight indoor channel models illustrates that the bit error rate performance of Rake-MMSE (both LE and DFE) improves for the CM3 model with smaller spread compared to the CM4 channel model. It is indicated that for a MMSE equalizer operating at low to medium SNR values, the number o...
Introduction to bit slices and microprogramming
International Nuclear Information System (INIS)
Bit-slice logic blocks are fourth-generation LSI components which are natural extensions of traditional multiplexers, registers, decoders, counters, ALUs, etc. Their functionality is controlled by microprogramming, typically to implement CPUs and peripheral controllers where both speed and easy programmability are required for flexibility, ease of implementation, debugging, etc. Processors built from bit-slice logic give the designer an alternative that approaches the programmability of traditional fixed-instruction-set microprocessors with a speed closer to that of hardwired random logic. (orig.)
Factorization of a 512-bit RSA modulus
Cavallar, S.H.; Lioen, W.M.; Riele, te, H.; Dodson, B.; Lenstra, A.K.; Montgomery, P. L.; Murphy, B.
2000-01-01
On August 22, 1999, we completed the factorization of the 512-bit, 155-digit number RSA-155 with the help of the Number Field Sieve factoring method (NFS). This is a new record for factoring general numbers. Moreover, 512-bit RSA keys are frequently used for the protection of electronic commerce (at least outside the USA), so this factorization represents a breakthrough in research on RSA-based systems. The previous record, factoring the 140-digit number RSA-140, was established on Feb...
Fixed-Length Error Resilient Code and Its Application in Video Coding
Institute of Scientific and Technical Information of China (English)
FAN Chen; YANG Ming; CUI Huijuan; TANG Kun
2003-01-01
Since popular entropy coding techniques such as Variable-length code (VLC) tend to cause severe error propagation in noisy environments, an error resilient entropy coding technique named Fixed-length error resilient code (FLERC) is proposed to mitigate the problem. It is found that even for a non-stationary source, the probability of error propagation can be minimized by introducing intervals into the codeword space of the fixed-length codes. FLERC is particularly suitable for the entropy coding of video signals in error-prone environments, where a little distortion is tolerable but severe error propagation would lead to fatal consequences. An iterative construction algorithm for FLERC is presented in this paper. In addition, FLERC is adopted instead of VLC as the entropy coder of the DCT coefficients in H.263++ Data partitioning slice (DPS) mode, and tested on noisy channels. The simulation results show that this scheme outperforms the scheme of H.263++ combined with FEC when the channel noise is severe, since the error propagation is effectively suppressed by using FLERC. Moreover, it is observed that the reconstructed video quality degrades gracefully as the bit error rate increases.
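The error-propagation contrast between variable-length and fixed-length codes can be demonstrated with a small sketch (a hypothetical 4-symbol alphabet and a generic prefix code, not the paper's FLERC construction): a single flipped bit corrupts one symbol under a fixed-length code but desynchronizes a VLC decoder.

```python
# Toy comparison of error propagation: a variable-length prefix code (VLC)
# versus a fixed-length code (FLC). Alphabet and codes are illustrative only.

VLC = {"a": "0", "b": "10", "c": "110", "d": "111"}        # prefix code
FLC = {s: format(i, "02b") for i, s in enumerate("abcd")}  # 2 bits/symbol

def encode(msg, code):
    return "".join(code[s] for s in msg)

def decode(bits, code):
    rev = {v: k for k, v in code.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in rev:          # greedy prefix matching
            out.append(rev[buf])
            buf = ""
    return "".join(out)

def flip(bits, i):
    return bits[:i] + ("1" if bits[i] == "0" else "0") + bits[i + 1:]

msg = "abcdabcd"
# Flip one early bit in each encoded stream and compare decoder outputs.
vlc_err = decode(flip(encode(msg, VLC), 1), VLC)
flc_err = decode(flip(encode(msg, FLC), 1), FLC)
```

With the fixed-length code the damage stays confined to the symbol containing the flipped bit; with the VLC the decoder loses codeword alignment and several subsequent symbols are wrong.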
Encoding M classical bits in the arrival time of dense-coded photons
Hegazy, Salem F; Obayya, Salah S A
2016-01-01
We present a scheme to encode M extra classical bits onto a dense-coded pair of photons. By tuning the delay of an entangled pair of photons to one of 2^M time-bins and then applying one of the quantum dense coding protocols, a receiver equipped with a synchronized reference clock is able to decode M bits (via classical time-bin encoding) + 2 bits (via quantum dense coding). This protocol, though simple, still depends on several special features of the programmable delay apparatus to maintain the coherence of the two-photon state. While this type of time-domain encoding might seem to offer boundless photonic capacity (by increasing the number of available time-bins), errors due to environmental noise and imperfect devices and channels grow with the number of time-bins.
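The classical bookkeeping of the scheme can be sketched as follows (the quantum dense-coding step is represented only by its 2-bit payload; the apparatus itself is of course not modelled here):

```python
# Classical side of the time-bin scheme: M bits select one of 2^M delay bins;
# a receiver with a synchronized clock reads the bin index back as M bits.

def encode_timebin(extra_bits):
    """Map M classical bits to a time-bin index in [0, 2^M)."""
    return int(extra_bits, 2)

def decode_timebin(bin_index, M):
    """Synchronized receiver recovers the M bits from the observed bin."""
    return format(bin_index, f"0{M}b")

M = 4
payload = "1011"                     # the M extra classical bits
bin_index = encode_timebin(payload)  # photon pair delayed to this bin
recovered = decode_timebin(bin_index, M)
total_bits = M + 2                   # M time-bin bits + 2 dense-coded bits
```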
Robust Face Recognition using Voting by Bit-plane Images based on Sparse Representation
Directory of Open Access Journals (Sweden)
Dongmei Wei
2015-08-01
Full Text Available Plurality voting is widely employed as a combination strategy in pattern recognition. Sparse representation based classification, a recently proposed technique, codes the query image as a sparse linear combination of the entire set of training images and classifies the query sample by exploiting the class representation error. In this paper, an improved face recognition approach using sparse representation and plurality voting based on binary bit-plane images is proposed. After equalization, gray images are decomposed into eight bit-plane images, and sparse representation based classification is applied separately to the five bit-plane images that carry the most discriminative information. Finally, the identity of the query image is determined by plurality voting over the five identities obtained. Experimental results show that the proposed approach is preferable both in recognition accuracy and in recognition speed.
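The bit-plane decomposition step used before per-plane classification can be illustrated in a few lines (a pure-Python sketch on a tiny made-up "image"; the paper's classifier and voting stage are not reproduced):

```python
# Decompose a grayscale image into its eight binary bit-planes, the
# representation on which per-plane sparse classification is then run.

def bit_planes(image):
    """Return planes[0] (LSB) ... planes[7] (MSB) as binary images."""
    return [[[(pix >> k) & 1 for pix in row] for row in image]
            for k in range(8)]

def reconstruct(planes):
    """Recombine the eight planes; the decomposition is lossless."""
    return [[sum(planes[k][i][j] << k for k in range(8))
             for j in range(len(planes[0][0]))]
            for i in range(len(planes[0]))]

img = [[0, 255, 128], [200, 7, 64]]
planes = bit_planes(img)
assert reconstruct(planes) == img   # nothing is lost in the decomposition
msb = planes[7]                     # higher planes carry most structure
```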
Power of one bit of quantum information in quantum metrology
Cable, Hugo; Gu, Mile; Modi, Kavan
2016-04-01
We present a model of quantum metrology inspired by the computational model known as deterministic quantum computation with one quantum bit (DQC1). Using only one pure qubit together with l fully mixed qubits we obtain measurement precision (defined as root-mean-square error for the parameter being estimated) at the standard quantum limit, which is typically obtained using the same number of uncorrelated qubits in fully pure states. In principle, the standard quantum limit can be exceeded using an additional qubit which adds only a small amount of purity. We show that the discord in the final state vanishes only in the limit of attaining infinite precision for the parameter being estimated.
RELAY ASSISTED TRANSMISSSION WITH BIT-INTERLEAVED CODED MODULATION
Institute of Scientific and Technical Information of China (English)
Meng Qingmin; You Xiaohu; John Boyer
2006-01-01
We investigate an adaptive cooperative protocol in a Two-Hop-Relay (THR) wireless system that combines the following: (1) adaptive relaying based on repetition coding; (2) single or two transmit antennas and one receive antenna configurations for all nodes, each using a high order constellation; (3) Bit-Interleaved Coded Modulation (BICM). We focus on simple decoded relaying (i.e., no error correction at the relay node) and simple signal quality thresholds for relaying. The impact of the two simple thresholds on system performance is then studied. Our results suggest that, compared with the traditional scheme for direct transmission, the proposed scheme can increase average throughput in the high spectral efficiency region with low implementation cost at the relay.
Linear, Constant-rounds Bit-decomposition
DEFF Research Database (Denmark)
Reistad, Tord; Toft, Tomas
2010-01-01
When performing secure multiparty computation, tasks may often be simple or difficult depending on the representation chosen. Hence, being able to switch representation efficiently may allow more efficient protocols. We present a new protocol for bit-decomposition: converting a ring element x ∈ ℤ...
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
Regular expression parsing is the problem of producing a parse tree of a string for a given regular expression. We show that a compact bit representation of a parse tree can be produced efficiently, in time linear in the product of input string size and regular expression size, by simplifying the...
1 /N perturbations in superstring bit models
Thorn, Charles B.
2016-03-01
We develop the 1 /N expansion for stable string bit models, focusing on a model with bit creation operators carrying only transverse spinor indices a =1 ,…,s . At leading order (N =∞ ), this model produces a (discretized) light cone string with a "transverse space" of s Grassmann worldsheet fields. Higher orders in the 1 /N expansion are shown to be determined by the overlap of a single large closed chain (discretized string) with two smaller closed chains. In the models studied here, the overlap is not accompanied with operator insertions at the break/join point. Then, the requirement that the discretized overlap has a smooth continuum limit leads to the critical Grassmann "dimension" of s =24 . This "protostring," a Grassmann analog of the bosonic string, is unusual, because it has no large transverse dimensions. It is a string moving in one space dimension, and there are neither tachyons nor massless particles. The protostring, derived from our pure spinor string bit model, has 24 Grassmann dimensions, 16 of which could be bosonized to form 8 compactified bosonic dimensions, leaving 8 Grassmann dimensions—the worldsheet content of the superstring. If the transverse space of the protostring could be "decompactified," string bit models might provide an appealing and solid foundation for superstring theory.
Algorithm of 32-bit Data Transmission Among Microcontrollers Through an 8-bit Port
Directory of Open Access Journals (Sweden)
Midriem Mirdanies
2015-12-01
Full Text Available This paper proposes an algorithm for 32-bit data transmission among microcontrollers through one 8-bit port. This method was motivated by a need to overcome limitations of microcontroller I/O as well as to fulfill the requirement of transmitting data wider than 10 bits. In this paper, the use of an 8-bit port has been optimized for 32-bit data transmission using unsigned long integer, long integer, and float types. The 32-bit data is extracted into binary form, then sent through the 8-bit port as a sequence of bytes by the transmitter microcontroller. At the receiver microcontroller, the binary data received through the 8-bit port is reconverted into 32 bits with the same data type. The algorithm has been implemented and tested using C language on an ATMega32A microcontroller. Experiments have been done using two microcontrollers as well as four microcontrollers in parallel, tree, and series connections. Based on the experiments, it is known that the transmitted data can be accurately received without data loss. Maximum transmission times between two microcontrollers for unsigned long integer, long integer, and float are 630 μs, 1,880 μs, and 7,830 μs, respectively. Maximum transmission times using four microcontrollers in the parallel connection are the same as those using two microcontrollers, while in the series connection they are 1,930 μs for unsigned long integer, 5,640 μs for long integer, and 23,540 μs for float. The maximum transmission times of the tree connection are close to those of the parallel connection. These results prove that the algorithm works well.
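The core byte-splitting idea can be sketched on the host side (a Python illustration, not the paper's C/ATMega32A implementation; byte order and helper names are assumptions): a 32-bit value is sent as four bytes and reassembled, and floats travel the same path via their IEEE-754 bit pattern.

```python
# Split a 32-bit value into four bytes for transfer through one 8-bit port,
# and reassemble on the receiver side. Floats reuse the same byte path.
import struct

def to_bytes_u32(value):
    """Transmitter: extract four bytes, most significant first."""
    return [(value >> shift) & 0xFF for shift in (24, 16, 8, 0)]

def from_bytes_u32(chunks):
    """Receiver: reassemble the 32-bit value from the byte sequence."""
    v = 0
    for b in chunks:
        v = (v << 8) | b
    return v

def float_to_bytes(x):
    return list(struct.pack(">f", x))        # 32-bit float -> 4 bytes

def bytes_to_float(chunks):
    return struct.unpack(">f", bytes(chunks))[0]

assert from_bytes_u32(to_bytes_u32(0xDEADBEEF)) == 0xDEADBEEF
assert bytes_to_float(float_to_bytes(1.5)) == 1.5   # 1.5 is exact in float32
```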
Directory of Open Access Journals (Sweden)
Bikramaditya Das
2010-03-01
Full Text Available For high data rate ultra wideband communication system, performance comparison of Rake, MMSE and Rake-MMSE receivers is attempted in this paper. Further a detailed study on Rake-MMSE time domain equalizers is carried out taking into account all the important parameters such as the effect of the number of Rake fingers and equalizer taps on the error rate performance. This receiver combats inter-symbol interference by taking advantage of both the Rake and equalizer structure. The bit error rate performances are investigated using MATLAB simulation on IEEE 802.15.3a defined UWB channel models. Simulation results show that the bit error rate probability of the Rake-MMSE receiver is much better than that of the Rake receiver and the MMSE equalizer. Study on non-line of sight indoor channel models illustrates that the bit error rate performance of Rake-MMSE (both LE and DFE) improves for the CM3 model with smaller spread compared to the CM4 channel model. It is indicated that for a MMSE equalizer operating at low to medium SNR values, the number of Rake fingers is the dominant factor to improve system performance, while at high SNR values the number of equalizer taps plays a more significant role in reducing the error rate.
Akinci, E.; Oberle, M.; Maral, G.
A new and entirely digital technique for bit synchronization and detection of NRZ-L coded data, based on a recursive extended Kalman filtering algorithm obtained from a state variable formulation, has been developed and implemented with an MC 6809 microprocessor. This technique applies to low rate data transmission (about 500 bit/s) over distortionless communication channels. The received data are assumed to be corrupted by additive white Gaussian noise with a signal-to-noise ratio E/No as low as 1 dB. Computer simulations and practical measurements obtained from a laboratory model are presented. The acquisition time with 0.9 probability is less than 350 bit periods for random data sequences, and the detection performance exhibits a degradation of about 0.2 dB. Results are also given for a limited-bandwidth transmission channel inducing signal distortion and intersymbol interference.
International Nuclear Information System (INIS)
Objective: To put forward reasonable and feasible recommendations for enhancing the application safety of the afterloading unit, through study of human reliability in the emergency response to source blockage of the afterloading unit. Methods: Based on the human cognition reliability (HCR) model, ten operation errors during the emergency response to source blockage of the afterloading unit were analyzed and the permissible time window of each emergency response operation was determined. The human error probability was calculated from the execution times of the emergency response operations obtained through simulation, observation and recording. Results: The operation actions, relevant permissible time windows and execution times were obtained, with corresponding human error probabilities in the range 0.04-0.27. Conclusions: The human error model for emergency response to source blockage of the afterloading unit based on the HCR model is feasible, and provides an important reference basis for reducing the occurrence of potential exposure and mitigating its consequences. (authors)
High energy hadron-induced errors in memory chips
International Nuclear Information System (INIS)
We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurements of alpha particle yields from pions on aluminum, as a surrogate for silicon, indicate that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ_SEU = (induced errors) / (number of sample bits × fluence in particles/cm²). We compare σ_SEU values to find relations among results for these beams, and relations to reaction cross sections, in order to systematize effects. We have modelled cosmic ray effects upon the components we have studied. (Author)
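The cross-section definition above is a simple ratio, sketched here with made-up illustration numbers (none of the values below come from the measurements in the abstract):

```python
# sigma_SEU = induced errors / (sample bits x fluence), fluence in particles/cm^2.

def seu_cross_section(errors, sample_bits, fluence_per_cm2):
    """Return sigma_SEU in cm^2 per bit."""
    return errors / (sample_bits * fluence_per_cm2)

# Hypothetical beam run: 120 upsets in a 64 Mb part at 1e10 particles/cm^2.
sigma = seu_cross_section(errors=120, sample_bits=64e6, fluence_per_cm2=1e10)

# The same sigma predicts upset counts for other exposures linearly,
# e.g. a 16 Mb part seeing 5e9 particles/cm^2:
expected_upsets = sigma * 16e6 * 5e9
```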
Horowitz-Kraus, Tzipi; Breznitz, Zvia
2014-01-28
Dyslexia is characterized by slow, inaccurate reading and by deficits in executive functions. The deficit in reading is exemplified by impaired error monitoring, which can be specifically shown through neuroimaging, in changes in Error-/Correct-related negativities (ERN/CRN). The current study aimed to investigate whether a reading intervention program (Reading Acceleration Program, or RAP) could improve overall reading, as well as error monitoring and other cognitive abilities underlying reading, in adolescents with reading difficulties. Participants with reading difficulties and typical readers were trained with the RAP for 8 weeks. Their reading and error monitoring were characterized both behaviorally and electrophysiologically through a lexical decision task. Behaviorally, the reading training improved "contextual reading speed" and decreased reading errors in both groups. Improvements were also seen in speed of processing, memory and visual screening. Electrophysiologically, ERN increased in both groups following training, but the increase was significantly greater in the participants with reading difficulties. Furthermore, an association between the improvement in reading speed and the change in difference between ERN and CRN amplitudes following training was seen in participants with reading difficulties. These results indicate that improving deficits in error monitoring and speed of processing are possible underlying mechanisms of the RAP intervention. We suggest that ERN is a good candidate for use as a measurement in evaluating the effect of reading training in typical and disabled readers. PMID:24316242
On the BER and capacity analysis of MIMO MRC systems with channel estimation error
Yang, Liang
2011-10-01
In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of multiple-input multiple-output (MIMO) systems with transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. Numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.
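The qualitative effect studied here can be reproduced with a small Monte Carlo sketch (all parameters are illustrative assumptions, and this is a simulation, not the paper's analytical derivation): BPSK with L-branch MRC over Rayleigh fading, where the combining weights use a noisy channel estimate h_hat = h + e.

```python
# Monte Carlo BER of BPSK with L-branch MRC, with optional channel
# estimation error added to the combining weights.
import random, math

def mrc_ber(L=2, snr_db=10.0, est_noise=0.0, n_bits=20000, seed=1):
    rng = random.Random(seed)
    noise_std = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))  # per-dimension
    errors = 0
    for _ in range(n_bits):
        s = rng.choice((-1.0, 1.0))
        z = 0.0
        for _ in range(L):
            h = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
            r = h * s + complex(rng.gauss(0, noise_std), rng.gauss(0, noise_std))
            h_hat = h + complex(rng.gauss(0, est_noise), rng.gauss(0, est_noise))
            z += (h_hat.conjugate() * r).real   # MRC combining with noisy weight
        errors += (z < 0) != (s < 0)
    return errors / n_bits

ber_perfect = mrc_ber(est_noise=0.0)    # ideal channel knowledge
ber_imperfect = mrc_ber(est_noise=0.5)  # noisy channel estimate
```

With perfect estimates the diversity gain of MRC keeps the BER low; the mismatched weights degrade it noticeably, which is the sensitivity the paper quantifies analytically.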
Quantum error-correcting codes need not completely reveal the error syndrome
Shor, P W; Shor, Peter W; Smolin, John A
1996-01-01
Quantum error-correcting codes so far proposed have not been able to work in the presence of noise levels which introduce greater than one bit of entropy per qubit sent through the quantum channel. This has been because all such codes either find the complete error syndrome of the noise or trivially map onto such codes. We describe a code which does not find complete information on the noise and can be used for reliable transmission of quantum information through channels which introduce more than one bit of entropy per transmitted bit. In the case of the depolarizing "Werner" channel our code can be used in a channel of fidelity 0.8096, while the best existing code worked only down to 0.8107.
Variable bit rate video traffic modeling by multiplicative multifractal model
Institute of Scientific and Technical Information of China (English)
Huang Xiaodong; Zhou Yuanhua; Zhang Rongfu
2006-01-01
Multiplicative multifractal processes can model video traffic well. The multiplier distributions in the multiplicative multifractal model for video traffic are investigated, and it is found that the Gaussian distribution is not suitable for describing the multipliers on small time scales. A new statistical distribution, the symmetric Pareto distribution, is introduced and applied instead of the Gaussian for the multipliers on those scales. Based on this, the algorithm is updated so that the symmetric Pareto distribution and the Gaussian distribution are both used to model video traffic, but on different time scales. The simulation results demonstrate that the algorithm models video traffic more accurately.
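The structure of a multiplicative cascade can be sketched briefly (a minimal illustration with a simple uniform two-point multiplier; the abstract's symmetric-Pareto and Gaussian multiplier fits are not reproduced here): unit "traffic mass" is split binarily, level by level, with random conservative multipliers.

```python
# Minimal multiplicative multifractal cascade: each interval's mass is
# split between its two children by a random multiplier r and 1 - r.
import random

def cascade(levels, seed=0):
    rng = random.Random(seed)
    mass = [1.0]
    for _ in range(levels):
        nxt = []
        for m in mass:
            r = 0.5 + rng.uniform(-0.2, 0.2)   # multiplier in (0.3, 0.7)
            nxt.extend([m * r, m * (1 - r)])   # conservative split
        mass = nxt
    return mass

traffic = cascade(levels=8)   # 2^8 = 256 fine-scale traffic values
```

Iterating the random splits produces the bursty, scale-dependent variability that motivates fitting different multiplier distributions at different time scales.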
Fast optical signal processing in high bit rate OTDM systems
DEFF Research Database (Denmark)
Poulsen, Henrik Nørskov; Jepsen, Kim Stokholm; Clausen, Anders; Buxens Azcoaga, Alvaro Juan; Stubkjær, Kristian; Hess, R.; Dülk, M.; Melchior, H.
As all-optical signal processing is maturing, optical time division multiplexing (OTDM) has also gained interest for simple networking in high capacity backbone networks. As an example of a network scenario we show an OTDM bus interconnecting another OTDM bus, a single high capacity user represented by an optical termination (OT) and a WDM area...
... the eye keeps you from focusing well. The cause could be the length of the eyeball (longer or shorter), changes in the shape of the cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...
Directory of Open Access Journals (Sweden)
Fabien Hernandez
Full Text Available To assess the impact of the implementation of a Computerized Physician Order Entry (CPOE) system associated with pharmaceutical checking of medication orders on medication errors in the 3 stages of drug management (i.e., prescription, dispensing and administration) in an orthopaedic surgery unit. A before-after observational study was conducted in the 66-bed orthopaedic surgery unit of a teaching hospital (700 beds) in Paris, France. Direct disguised observation was used to detect errors in prescription, dispensing and administration of drugs, before and after the introduction of computerized prescriptions. Compliance between dispensing and administration on the one hand and the medical prescription on the other hand was studied. The frequencies and types of errors in prescribing, dispensing and administration were investigated. During the pre- and post-CPOE periods (two days for each period), 111 and 86 patients were observed, respectively, with 1,593 and 1,388 corresponding prescribed drugs. The use of electronic prescribing led to a significant 92% decrease in prescribing errors (479/1,593 prescribed drugs (30.1%) vs 33/1,388 (2.4%), p < 0.0001) and to a significant 17.5% decrease in administration errors (209/1,222 opportunities (17.1%) vs 200/1,413 (14.2%), p < 0.05). No significant difference was found in regard to dispensing errors (430/1,219 opportunities (35.3%) vs 449/1,407 (31.9%), p = 0.07). The use of CPOE with a pharmacist checking medication orders in an orthopaedic surgery unit reduced the incidence of medication errors in the prescribing and administration stages. The study results suggest that CPOE is a convenient system for improving the quality and safety of drug management.
The Economics of BitCoin Price Formation
Ciaian, Pavel; Rajcaniova, Miroslava; Kancs, d'Artis
2014-01-01
This paper analyses the relationship between BitCoin price and supply-demand fundamentals of BitCoin, global macro-financial indicators and BitCoin’s attractiveness for investors. Using daily data for the period 2009-2014 and applying time-series analytical mechanisms, we find that BitCoin market fundamentals and BitCoin’s attractiveness for investors have a significant impact on BitCoin price. Our estimates do not support previous findings that the macro-financial developments are driving Bi...
Error control for reliable digital data transmission and storage systems
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized in 32K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
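The idea of reading the error location and value directly from syndromes can be illustrated with a toy single-error-correcting code over the prime field GF(257) (a deliberately simplified analogue, not the paper's byte-oriented RS codes over GF(2^8)):

```python
# Toy direct syndrome decoding over GF(257): for a single-symbol error,
# the error value is the first syndrome difference and the location is
# the ratio of the two differences -- no locator-polynomial iteration.
P = 257  # prime modulus

def checks(data):
    s0 = sum(data) % P
    s1 = sum(i * b for i, b in enumerate(data, start=1)) % P
    return s0, s1

def correct(received, c0, c1):
    r0, r1 = checks(received)
    d0, d1 = (r0 - c0) % P, (r1 - c1) % P
    if d0 == 0:                        # no single-symbol error detected
        return list(received)
    pos = d1 * pow(d0, -1, P) % P      # error location, directly from syndromes
    fixed = list(received)
    fixed[pos - 1] = (fixed[pos - 1] - d0) % P   # subtract the error value
    return fixed

data = [10, 200, 33, 7, 99]
c0, c1 = checks(data)
corrupted = list(data); corrupted[3] += 5    # inject one symbol error
assert correct(corrupted, c0, c1) == data
```

The real codes replace this prime-field arithmetic with GF(2^8) operations so that each symbol is exactly one memory byte, but the "location and value straight from the syndromes" structure is the same.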
Progressive and Error-Resilient Transmission Strategies for VLC Encoded Signals over Noisy Channels
Jégou, Hervé; Guillemot, Christine
2006-12-01
This paper addresses the issue of robust and progressive transmission of signals (e.g., images, video) encoded with variable length codes (VLCs) over error-prone channels. This paper first describes bitstream construction methods offering good properties in terms of error resilience and progressivity. In contrast with related algorithms described in the literature, all proposed methods have a linear complexity as the sequence length increases. The applicability of soft-input soft-output (SISO) and turbo decoding principles to resulting bitstream structures is investigated. In addition to error resilience, the amenability of the bitstream construction methods to progressive decoding is considered. The problem of code design for achieving good performance in terms of error resilience and progressive decoding with these transmission strategies is then addressed. The VLC code has to be such that the symbol energy is mainly concentrated on the first bits of the symbol representation (i.e., on the first transitions of the corresponding codetree). Simulation results reveal high performance in terms of symbol error rate (SER) and mean-square reconstruction error (MSE). These error-resilience and progressivity properties are obtained without any penalty in compression efficiency. Codes with such properties are of strong interest for the binarization of[InlineEquation not available: see fulltext.]-ary sources in state-of-the-art image, and video coding systems making use of, for example, the EBCOT or CABAC algorithms. A prior statistical analysis of the signal allows the construction of the appropriate binarization code.
Progressive and Error-Resilient Transmission Strategies for VLC Encoded Signals over Noisy Channels
Directory of Open Access Journals (Sweden)
Guillemot Christine
2006-01-01
Full Text Available This paper addresses the issue of robust and progressive transmission of signals (e.g., images, video) encoded with variable length codes (VLCs) over error-prone channels. This paper first describes bitstream construction methods offering good properties in terms of error resilience and progressivity. In contrast with related algorithms described in the literature, all proposed methods have a linear complexity as the sequence length increases. The applicability of soft-input soft-output (SISO) and turbo decoding principles to the resulting bitstream structures is investigated. In addition to error resilience, the amenability of the bitstream construction methods to progressive decoding is considered. The problem of code design for achieving good performance in terms of error resilience and progressive decoding with these transmission strategies is then addressed. The VLC code has to be such that the symbol energy is mainly concentrated on the first bits of the symbol representation (i.e., on the first transitions of the corresponding codetree). Simulation results reveal high performance in terms of symbol error rate (SER) and mean-square reconstruction error (MSE). These error-resilience and progressivity properties are obtained without any penalty in compression efficiency. Codes with such properties are of strong interest for the binarization of -ary sources in state-of-the-art image and video coding systems making use of, for example, the EBCOT or CABAC algorithms. A prior statistical analysis of the signal allows the construction of the appropriate binarization code.
ENHANCED RABIN ALGORITHM BASED ERROR CONTROL MECHANISM FOR WIRELESS SENSOR NETWORKS
Directory of Open Access Journals (Sweden)
M.R.Ebenezar Jebarani
2012-12-01
Full Text Available In wireless sensor networks, the data transmitted from the sensor nodes are prone to corruption by errors induced by noisy channels and other relevant parameters. Hence it is always vital to provide an effective and efficient error control methodology to minimize the bit error rate (BER). Due to the scarce energy available in sensor networks, it is important to use an error control scheme with high throughput, low end-to-end delay and energy awareness. In this paper, the performance of three error control codes, namely Enhanced Rabin Algorithm Based HARQ (ERABHARQ), Enhanced Linear Feedback Shift Register based Mechanism (ELFSRM), and the Hadamard code, is analyzed based on the performance metrics of throughput, BER, end-to-end delay and energy utilization, varying the number of sensor nodes. The error control schemes with different situational parameters are simulated using ns-2. The Enhanced Rabin Algorithm Based HARQ code is an improved methodology compared to Automatic Repeat Request, because retransmission of packets does not take place automatically; rather, it takes place based on the success or failure of the Enhanced Rabin's Algorithm. In this paper, the three error control codes are compared and it is concluded that Enhanced Rabin Algorithm Based HARQ performs better and is well suited for wireless sensor networks.
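The success-or-failure check that gates retransmission in such a hybrid ARQ scheme can be sketched with a generic modular fingerprint (the abstract does not give the details of the Enhanced Rabin Algorithm, so the fingerprint below is an illustrative stand-in, not the paper's construction):

```python
# Fingerprint-gated retransmission: the receiver recomputes the packet
# fingerprint and requests a retransmit only when the check fails.

PRIME = 2**31 - 1   # illustrative fingerprint modulus

def fingerprint(payload: bytes) -> int:
    return int.from_bytes(payload, "big") % PRIME

def receiver_accepts(payload: bytes, tag: int) -> bool:
    """True -> deliver upward; False -> request retransmission."""
    return fingerprint(payload) == tag

packet = b"sensor reading 42"
tag = fingerprint(packet)            # sent alongside the packet
corrupted = b"sensor reading 43"     # one byte flipped in transit
```

Checking a short tag is far cheaper, in both energy and bandwidth, than retransmitting every packet, which is the motivation for gating ARQ on the check's outcome.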
Supersymmetric quantum mechanics for string-bits
International Nuclear Information System (INIS)
The authors develop possible versions of supersymmetric single particle quantum mechanics, with application to superstring-bit models in view. The authors focus principally on space dimensions d = 1, 2, 4, 8, the transverse dimensionalities of superstring in 3, 4, 6, 10 space-time dimensions. These are the cases for which classical superstring makes sense, and also the values of d for which Hooke's force law is compatible with the simplest superparticle dynamics. The basic question they address is: when is it possible to replace such harmonic force laws with more general ones, including forces which vanish at large distances? This is an important question because forces between string-bits that do not fall off with distance will almost certainly destroy cluster decomposition. They show that the answer is affirmative for d = 1, 2, negative for d = 8, and so far inconclusive for d = 4.
Global Networks of Trade and Bits
Riccaboni, Massimo; Schiavo, Stefano
2012-01-01
Considerable efforts have been made in recent years to produce detailed topologies of the Internet. Although Internet topology data have been brought to the attention of a wide and somewhat diverse audience of scholars, so far they have been overlooked by economists. In this paper, we suggest that such data could be effectively treated as a proxy to characterize the size of the "digital economy" at country level and outsourcing: thus, we analyse the topological structure of the network of trade in digital services (trade in bits) and compare it with that of the more traditional flow of manufactured goods across countries. To perform meaningful comparisons across networks with different characteristics, we define a stochastic benchmark for the number of connections among each country-pair, based on hypergeometric distribution. Original data are thus filtered by means of different thresholds, so that we only focus on the strongest links, i.e., statistically significant links. We find that trade in bits displays...
Not One Bit of de Sitter Information
Parikh, Maulik K.; van der Schaar, Jan Pieter
2008-01-01
We formulate the information paradox in de Sitter space in terms of the no-cloning principle of quantum mechanics. We show that energy conservation puts an upper bound on the maximum entropy available to any de Sitter observer. Combined with a general result on the average information in a quantum subsystem, this guarantees that an observer in de Sitter space cannot obtain even a single bit of information from the de Sitter horizon, thereby preventing any observable violations of the quantum ...
Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding
Directory of Open Access Journals (Sweden)
Dai Qionghai
2010-01-01
Full Text Available We propose a Stereoscopic Visual Attention (SVA) based regional bit allocation optimization for Multiview Video Coding (MVC), exploiting visual redundancies from human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention clues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented to allocate more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over % bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by dB at the cost of insensitive image quality degradation of the background image.
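The saliency-weighted allocation idea can be sketched in a few lines (weights, region names and the budget below are made-up illustration values, not the paper's rate-control formulas):

```python
# Saliency-weighted bit allocation: regions flagged by the attention map
# receive a share of the frame's bit budget proportional to their weight.

def allocate_bits(budget, saliency):
    total = sum(saliency.values())
    return {region: round(budget * w / total) for region, w in saliency.items()}

# Hypothetical frame: the SVA-based ROI is weighted 3x the background.
alloc = allocate_bits(budget=10000, saliency={"roi": 3.0, "background": 1.0})
```

The ROI thus gets most of the budget for quality, while the less-attended background is compressed harder.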
Error-resilient compression and transmission of scalable video
Cho, Sungdae; Pearlman, William A.
2000-12-01
Compressed video bitstreams require protection from channel errors in a wireless channel and protection from packet loss in a wired ATM channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and its real-time capability in compression of video. A forward-error-correcting (FEC) channel (RCPC) code combined with a single ARQ (automatic- repeat-request) proved to be an effective means for protecting the bitstream. There were two problems with this scheme: the noiseless reverse channel ARQ may not be feasible in practice; and, in the absence of channel coding and ARQ, the decoded sequence was hopelessly corrupted even for relatively clean channels. In this paper, we first show how to make the 3-D SPIHT bitstream more robust to channel errors by breaking the wavelet transform into a number of spatio-temporal tree blocks which can be encoded and decoded independently. This procedure brings the added benefit of parallelization of the compression and decompression algorithms. Then we demonstrate the packetization of the bit stream and the reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness. Then we encode each packet with a channel code. Not only does this protect the integrity of the packets in most cases, but it also allows detection of packet decoding failures, so that only the cleanly recovered packets are reconstructed. This procedure obviates ARQ, because the performance is only about 1 dB worse than normal 3-D SPIHT with FEC and ARQ. Furthermore, the parallelization makes possible real-time implementation in hardware and software.
The BitTorrent Anonymity Marketplace
Nielson, Seth James
2011-01-01
The very nature of operations in peer-to-peer systems such as BitTorrent exposes information about participants to their peers. Nodes desiring anonymity, therefore, often choose to route their peer-to-peer traffic through anonymity relays, such as Tor. Unfortunately, these relays have little incentive for contribution and struggle to scale with the high loads that P2P traffic foists upon them. We propose a novel modification for BitTorrent that we call the BitTorrent Anonymity Marketplace. Peers in our system trade in k swarms, obscuring the actual intent of the participants. But because peers can cross-trade torrents, the k-1 cover traffic can actually serve a useful purpose. This creates a system wherein a neighbor cannot determine if a node actually wants a given torrent, or if it is only using it as leverage to get the one it really wants. In this paper, we present our design, explore its operation in simulation, and analyze its effectiveness. We demonstrate that the upload and download characteristics of c...
1/N Perturbations in Superstring Bit Models
Thorn, Charles B
2015-01-01
We develop the 1/N expansion for stable string bit models, focusing on a model with bit creation operators carrying only transverse spinor indices a=1,...,s. At leading order (1/N=0), this model produces a (discretized) lightcone string with a "transverse space" of s Grassmann worldsheet fields. Higher orders in the 1/N expansion are shown to be determined by the overlap of a single large closed chain (discretized string) with two smaller closed chains. In the models studied here, the overlap is not accompanied by operator insertions at the break/join point. Then the requirement that the discretized overlap have a smooth continuum limit leads to the critical Grassmann "dimension" of s=24. This "protostring", a Grassmann analog of the bosonic string, is unusual because it has no large transverse dimensions. It is a string moving in one space dimension and there are neither tachyons nor massless particles. The protostring, derived from our pure spinor string bit model, has 24 Grassmann dimensions, 16 of wh...
Acquisition and Retaining Granular Samples via a Rotating Coring Bit
Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart
2013-01-01
This device takes advantage of the centrifugal forces that are generated when a coring bit is rotated: a granular sample introduced into the spinning bit adheres to the internal wall of the bit, where it compacts itself against the wall. The bit can be specially designed to increase the effectiveness of regolith capture while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during the rotation of the bit. The bit can be designed with an internal flute that directs the regolith upward inside the bit. The use of both the teeth and flute can be implemented in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth. For that implementation, the bit can be designed such that when turning one way, the teeth guide the regolith outward of the bit, and when turning in the opposite direction, the teeth guide the regolith inward into the bit's internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining a granular sample, and the acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into soil contained in a bucket. While spinning the bit (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit. To prove this point, the drill was turned horizontally, and the acquired soil was still inside the bit. The basic theory behind the process of retaining unconsolidated mass that can be acquired by the centrifugal forces of the bit is determined by noting that in order to stay inside the interior of the bit, the
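A rough order-of-magnitude check of the retention mechanism: at the quoted 600 to 700 RPM, the centripetal acceleration at the bit wall already exceeds gravity many times over, which is what presses the granular material against the interior wall. The 2.5 cm interior radius below is an assumed value for illustration, not a dimension from the source:

```python
import math

def centripetal_accel(rpm: float, radius_m: float) -> float:
    """Centripetal acceleration a = omega^2 * r at the bit wall."""
    omega = 2.0 * math.pi * rpm / 60.0   # angular speed in rad/s
    return omega ** 2 * radius_m

g = 9.81
for rpm in (600, 700):
    a = centripetal_accel(rpm, 0.025)    # assumed 2.5 cm interior radius
    print(f"{rpm} RPM -> {a:.0f} m/s^2 ({a / g:.1f} g)")
```

At 600 RPM and 2.5 cm this is roughly 99 m/s^2, about ten times gravity, consistent with the soil staying in place even when the bit is held horizontally.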
The Impact of Forward Error Correction on Wireless Sensor Network Performance
Busse, Marcel; Haenselmann, Thomas; Effelsberg, Wolfgang
2006-01-01
In networks there are basically two methods to tackle the problem of erroneous packets: Automatic Repeat Requests (ARQ) and Forward Error Correction (FEC). While ARQ means packet retransmissions, FEC uses additional bits to detect and correct distorted data. However, extensive field test of our sensor nodes have shown that FEC can take effect only as long as both sender and receiver are bit-wise synchronized. Otherwise, all following bits are misinterpreted which results in an uncorrectable n...
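The synchronization caveat can be reproduced with any block FEC. The sketch below uses a textbook Hamming(7,4) code (a stand-in, not the sensor nodes' actual code): a flipped bit inside a codeword is corrected, but a single-bit slip misaligns every subsequent codeword boundary, so the decoder "corrects" the wrong positions:

```python
def ham74_encode(d):
    # Hamming(7,4) with parity bits at positions 1, 2, 4 (1-indexed).
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def ham74_decode(c):
    c = c[:]
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6])
         + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6]))
    if s:                        # nonzero syndrome: flip the indicated bit
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

a, b = [1, 0, 1, 1], [0, 1, 0, 0]
stream = ham74_encode(a) + ham74_encode(b)

corrupted = stream[:]
corrupted[3] ^= 1                # single bit error inside the first codeword
assert ham74_decode(corrupted[:7]) == a   # FEC corrects it

slipped = stream[1:]             # one-bit sync slip: boundaries misaligned
print(ham74_decode(slipped[:7]) == a)     # False: the decoder miscorrects
```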
HIGH-POWER TURBODRILL AND DRILL BIT FOR DRILLING WITH COILED TUBING
Energy Technology Data Exchange (ETDEWEB)
Robert Radtke; David Glowka; Man Mohan Rai; David Conroy; Tim Beaton; Rocky Seale; Joseph Hanna; Smith Neyrfor; Homer Robertson
2008-03-31
Commercial introduction of Microhole Technology to the gas and oil drilling industry requires an effective downhole drive mechanism which operates efficiently at relatively high RPM and low bit weight for delivering efficient power to the special high RPM drill bit, ensuring both high penetration rate and long bit life. This project entails developing and testing a more efficient 2-7/8 in. diameter Turbodrill and a novel 4-1/8 in. diameter drill bit for drilling with coiled tubing. The high-power Turbodrill was developed to deliver efficient power, and the more durable drill bit employs high-temperature cutters that can more effectively drill hard and abrasive rock. This project teams Schlumberger Smith Neyrfor and Smith Bits, and NASA AMES Research Center with Technology International, Inc. (TII), to deliver a downhole, hydraulically-driven power unit, matched with a custom drill bit designed to drill 4-1/8 in. boreholes with a purpose-built coiled tubing rig. The U.S. Department of Energy National Energy Technology Laboratory has funded Technology International, Inc., Houston, Texas, to develop a higher power Turbodrill and drill bit for use in drilling with a coiled tubing unit. This project entails developing and testing an effective downhole drive mechanism and a novel drill bit for drilling 'microholes' with coiled tubing. The new higher power Turbodrill is shorter, delivers power more efficiently, operates at relatively high revolutions per minute, and requires low weight on bit. The more durable thermally stable diamond drill bit employs high-temperature TSP (thermally stable) diamond cutters that can more effectively drill hard and abrasive rock. Expectations are that widespread adoption of microhole technology could spawn a wave of 'infill development' drilling of wells spaced between existing wells, which could tap potentially billions of barrels of bypassed oil at shallow depths in mature producing areas. At the same time, microhole
De-anonymizing BitTorrent Users on Tor
Le Blond, Stevens; Manils, Pere; Chaabane, Abdelberi; Kaafar, Mohamed Ali; Legout, Arnaud; Castellucia, Claude; Dabbous, Walid
2010-01-01
Some BitTorrent users are running BitTorrent on top of Tor to preserve their privacy. In this extended abstract, we discuss three different attacks to reveal the IP address of BitTorrent users on top of Tor. In addition, we exploit the multiplexing of streams from different applications into the same circuit to link non-BitTorrent applications to revealed IP addresses.
Method to manufacture bit patterned magnetic recording media
Raeymaekers, Bart; Sinha, Dipen N
2014-05-13
A method to increase the storage density on magnetic recording media by physically separating the individual bits from each other with a non-magnetic medium (so-called bit patterned media). This allows the bits to be closely packed together without creating magnetic "cross-talk" between adjacent bits. In one embodiment, ferromagnetic particles are submerged in a resin solution, contained in a reservoir. The bottom of the reservoir is made of piezoelectric material.
Quantum bit commitment with cheat sensitive binding and approximate sealing
Li, Yan-Bing; Xu, Sheng-Wei; Huang, Wei; Wan, Zhong-Jie
2014-01-01
This paper proposes a cheat sensitive quantum bit commitment (CSQBC) scheme based on single photons, in which Alice commits a bit to Bob. Here, Bob can learn the committed bit by cheating only with probability approaching 0 as the number of single photons used increases. And if Alice alters her committed bit after the commitment phase, she will be detected with probability approaching 1 as the number of single photons used increases. The scheme is easy to realize with present-day technology.
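The "probability close to 1 with increasing photon count" behavior is just the complement of all n independent checks failing. With an assumed per-photon detection probability p (the actual value depends on the protocol's parameters, not given here), the overall detection probability is 1 - (1 - p)^n:

```python
def detection_prob(p_single: float, n_photons: int) -> float:
    # P(at least one of n independent per-photon checks catches the cheat).
    return 1.0 - (1.0 - p_single) ** n_photons

# p = 0.05 per photon is an arbitrary illustrative value.
for n in (10, 100, 1000):
    print(n, round(detection_prob(0.05, n), 6))
```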
Digital dual-rate burst-mode receiver for 10G and 1G coexistence in optical access networks.
Mendinueta, José Manuel Delgado; Mitchell, John E; Bayvel, Polina; Thomsen, Benn C
2011-07-18
A digital dual-rate burst-mode receiver, intended to support 10 and 1 Gb/s coexistence in optical access networks, is proposed and experimentally characterized. The receiver employs a standard DC-coupled photoreceiver followed by a 20 GS/s digitizer, and the detection of the packet presence and line-rate is implemented in the digital domain. A polyphase, 2 samples-per-bit digital signal processing algorithm is then used for efficient clock and data recovery of the 10/1.25 Gb/s packets. The receiver performance is characterized in terms of sensitivity and dynamic range under burst-mode operation for 10/1.25 Gb/s intensity modulated data in terms of both the packet error rate (PER) and the payload bit error rate (pBER). The impact of packet preamble lengths of 16, 32, 48, and 64 bits, at 10 Gb/s, on the receiver performance is investigated. We show that there is a trade-off between pBER and PER that is limited by electrical noise and digitizer clipping at low and high received powers, respectively, and that a 16/2-bit preamble at 10/1.25 Gb/s is sufficient to reliably detect packets at both line-rates over a burst-to-burst dynamic range of 14.5 dB with a sensitivity of -18.5 dBm at 10 Gb/s. PMID:21934767
Development of experimental apparatus about reverse circulation bit
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
A set of experimental apparatus for reverse circulation bits has been developed in order to investigate the working mechanism of the new type of reverse circulation bit and how the bit structure influences its ability to take core and carry powder. Both the major structure of the equipment and the experimental procedure are described.
High Reproduction Rate versus Sexual Fidelity
Sousa, A.O.; de Oliveira, S. Moss
2000-01-01
We introduce fidelity into the bit-string Penna model for biological ageing and study the advantage of this fidelity when it produces a higher survival probability of the offspring due to paternal care. We attribute a lower reproduction rate to the faithful males but a higher death probability to the offspring of non-faithful males that abandon the pups to mate with other females. The fidelity is considered as a genetic trait which is transmitted to the male offspring (with or without error). We s...
Thermodynamics of Error Correction
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Choi, Woo Young; Han, Jae Hwan; Cha, Tae Min
2016-05-01
Multi-bit nano-electromechanical (NEM) nonvolatile memory cells such as T cells were proposed for higher memory density. However, they suffered from bit-to-bit interference (BI). In order to suppress BI without sacrificing cell size, this paper proposes zigzag T cell structures. The BI suppression of the proposed zigzag T cell is verified by finite-element modeling (FEM). Based on the FEM results, the design of zigzag T cells is optimized. PMID:27483893
Preventing twisting by regulating the bit load by hydromonitor effect
Energy Technology Data Exchange (ETDEWEB)
Nazirov, S.A.; Agishev, A.S.
1984-01-01
The problem is examined of reducing the twisting rate of a well during turbine drilling under complex geological conditions without decreasing the mechanical drilling rate. An attempt is made to prevent bending of the bottom of the drilling tool when caverns are formed during drilling, when the elements of the rigid KNBK act as large-sized couplings. By using the hydromonitor effect, the rigid guide strings are forced in the necessary direction without allowing bending, by deepening them into the jet-drilled well shaft. Since on many fields well twisting occurs mainly in drilling intervals formed by clay rocks, use of the hydromonitor effect at the bit is one of the optimal measures for preventing twisting.
Visible light communication using mobile-phone camera with data rate higher than frame rate.
Chow, Chi-Wai; Chen, Chung-Yen; Chen, Shih-Hao
2015-10-01
Complementary Metal-Oxide-Semiconductor (CMOS) image sensors are widely used in mobile phones and cameras. Hence, it is attractive if these image sensors can be used as visible light communication (VLC) receivers (Rxs). However, using these CMOS image sensors is challenging. In this work, we propose and demonstrate a VLC link using a mobile-phone camera with a data rate higher than the frame rate of the CMOS image sensor. We first discuss and analyze the features of using a CMOS image sensor as the VLC Rx, including the rolling shutter effect, overlapping of the exposure time of each row of pixels, the frame-to-frame processing time gap, and also the image sensor "blooming" effect. Then, we describe the procedure of synchronization and demodulation. This includes file format conversion, grayscale conversion, column matrix selection avoiding blooming, and polynomial fitting for threshold location. Finally, the evaluation of bit-error-rate (BER) is performed, satisfying the forward error correction (FEC) limit. PMID:26480122
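The demodulation chain described (grayscale column, polynomial fit for a row-dependent threshold, then slicing) can be sketched on synthetic data. Everything numeric below — stripe amplitude, baseline shape, two rows per bit — is invented for illustration; only the fit-then-threshold idea follows the abstract:

```python
import numpy as np

# Synthetic rolling-shutter column: bright/dark stripes from an OOK LED
# riding on a slowly varying baseline (lens shading, ambient light).
rows = np.arange(120)
bits_tx = np.tile([1, 0], 30)                 # assumed 2 image rows per bit
stripe = np.repeat(bits_tx, 2) * 60.0
baseline = 100.0 + 0.3 * rows
column = stripe + baseline

# Polynomial fit over the column gives a row-dependent slicing threshold,
# as in the abstract's "polynomial fitting for threshold location".
coeff = np.polyfit(rows, column, deg=2)
threshold = np.polyval(coeff, rows)
demod = (column > threshold).astype(int)

bits_rx = demod[::2]                          # one decision per bit period
print((bits_rx == bits_tx).mean())            # fraction of bits recovered
```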
Not one bit of de Sitter information
International Nuclear Information System (INIS)
We formulate the information paradox in de Sitter space in terms of the no-cloning principle of quantum mechanics. We show that energy conservation puts an upper bound on the maximum entropy available to any de Sitter observer. Combined with a general result on the average information in a quantum subsystem, this guarantees that an observer in de Sitter space cannot obtain even a single bit of information from the de Sitter horizon, thereby preventing any observable violations of the quantum no-cloning principle. The result supports the notion of observer complementarity.
Entangled solitons and stochastic Q-bits
International Nuclear Information System (INIS)
Stochastic realization of the wave function in quantum mechanics with the inclusion of soliton representation of extended particles is discussed. Two-soliton configurations are used for constructing entangled states in generalized quantum mechanics dealing with extended particles endowed with nontrivial spin S. With the entangled-soliton construction introduced in the nonlinear spinor field model, the Einstein-Podolsky-Rosen (EPR) correlation is calculated and shown to coincide with the quantum mechanical one for spin-1/2 particles. The concept of stochastic q-bits is used for quantum computing modelling.
ERROR DETECTION USING BINARY BCH (255, 215, 5) CODES
Sahana C*, V Anandi
2015-01-01
Error-correction codes are codes used to correct errors that occur during the transmission of data over unreliable communication mediums. Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver. The idea behind these codes is to add redundancy bits to the data being transmitted so that even if some errors occur due to noise in the channel, the data can be ...
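The add-redundancy-and-check principle is the same whether the code is the paper's BCH(255, 215, 5) or a simpler checksum. As a self-contained stand-in (not a BCH implementation), here is a bitwise CRC-8: the sender appends the check value, and any single-bit error makes the receiver's recomputed check disagree:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    # Bitwise CRC-8 over generator x^8 + x^2 + x + 1; the sender appends
    # this value as the redundancy, the receiver recomputes and compares.
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

msg = b"hello"
check = crc8(msg)                 # redundancy transmitted with the message
assert crc8(msg) == check         # clean channel: checks agree
corrupted = b"hallo"              # one corrupted byte in transit
print(crc8(corrupted) == check)   # False: the error is detected
```

Any CRC whose generator has at least two terms detects all single-bit errors, which is why the mismatch above is guaranteed, not probabilistic.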
Random errors in egocentric networks.
Almquist, Zack W
2012-10-01
The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative errors in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground-truth egocentric network sample based on Facebook friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
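The false-positive/false-negative mechanism is easy to reproduce: perturb a ground-truth edge set and compare a simple network statistic before and after. The network model and error rates below are arbitrary (an Erdős-Rényi stand-in, not the paper's Facebook-based sample):

```python
import random

random.seed(1)
n, p_edge = 200, 0.05
true_edges = {(i, j) for i in range(n) for j in range(i + 1, n)
              if random.random() < p_edge}

def perturb(edges, fp=0.01, fn=0.10):
    # Drop each true edge with prob. fn (false negative), and add each
    # absent edge with prob. fp (false positive) -- the paper's error model.
    kept = {e for e in edges if random.random() > fn}
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) not in edges and random.random() < fp:
                kept.add((i, j))
    return kept

def mean_degree(edges):
    return 2 * len(edges) / n

observed = perturb(true_edges)
print(mean_degree(true_edges), mean_degree(observed))   # the two diverge
```

Even modest error rates shift the mean degree noticeably, which is the kind of property misestimation the abstract reports.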
Optimality of Rate Balancing in Wireless Sensor Networks
Tarighati, Alla; Jalden, Joakim
2016-07-01
We consider the problem of distributed binary hypothesis testing in a parallel network topology where sensors independently observe some phenomenon and send a finite rate summary of their observations to a fusion center for the final decision. We explicitly consider a scenario under which (integer) rate messages are sent over an error free multiple access channel, modeled by a sum rate constraint at the fusion center. This problem was previously studied by Chamberland and Veeravalli, who provided sufficient conditions for the optimality of one bit sensor messages. Their result is however crucially dependent on the feasibility of having as many one bit sensors as the (integer) sum rate constraint of the multiple access channel, an assumption that can often not be satisfied in practice. This prompts us to consider the case of an a priori limited number of sensors, and we provide sufficient conditions under which having no two sensors with a rate difference of more than one bit, so-called rate balancing, is an optimal strategy with respect to the Bhattacharyya distance between the hypotheses at the input to the fusion center. We further discuss explicit observation models under which these sufficient conditions are satisfied.
Daly, Scott J.; Feng, Xiaofan
2003-01-01
Continuous tone, or "contone", imagery usually has 24 bits/pixel as a minimum, with eight bits each for the three primaries in typical displays. However, lower-cost displays constrain this number because of various system limitations. Conversely, high quality displays seek to achieve 9-10 bits/pixel/color, though there may be system bottlenecks limited at 8. The two main artifacts from reduced bit-depth are contouring and loss of amplitude detail; these can be prevented by dithering the image prior to these bit-depth losses. Early work in this area includes Roberts' noise modulation technique, Mitsa's blue noise mask, Tyler's technique of bit-stealing, and Mulligan's use of the visual system's spatiotemporal properties for spatiotemporal dithering. However, most halftoning/dithering work was primarily directed to displays at the lower end of bits/pixel (e.g., 1 bit as in halftoning) and higher ppi. Like Tyler, we approach the problem from the higher end of bits/pixel/color, say 6-8, and use available high frequency color content to generate even higher luminance amplitude resolution. Bit-depth extension with a high starting bit-depth (and often lower spatial resolution) changes the game substantially from halftoning experience. For example, complex algorithms like error diffusion and annealing are not needed, just the simple addition of noise. Instead of a spatial dither, it is better to use an amplitude dither, termed microdither by Pappas. We have looked at methods of generating the highest invisible opponent color spatiotemporal noise and other patterns, and have used Ahumada's concept of equivalent input noise to guide our work. This paper will report on techniques and observations made in achieving contone quality on ~100 ppi 6 bits/pixel/color LCD displays with no visible dither patterns, noise, contours, or loss of amplitude detail at viewing distances as close as the near focus limit (~120 mm). These include the interaction of display nonlinearities and
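The core claim — that adding noise before quantization trades contouring for fine grain while preserving average amplitude — can be checked in a few lines. This is generic non-subtractive dithering at an exaggerated 3-bit depth, not the authors' opponent-color spatiotemporal-noise design:

```python
import numpy as np

rng = np.random.default_rng(42)
ramp = np.linspace(0.0, 1.0, 4096)        # smooth contone gradient

def quantize(signal, bits, dither=False):
    levels = 2 ** bits - 1
    noise = rng.uniform(-0.5, 0.5, signal.shape) / levels if dither else 0.0
    return np.round((signal + noise) * levels) / levels

q_plain = quantize(ramp, 3)               # hard quantization: flat bands
q_dith = quantize(ramp, 3, dither=True)   # noise added before rounding

def band_error(q):
    # Mean reconstruction error per 64-sample window: a proxy for the
    # visible banding (local brightness bias) of contouring.
    return np.abs((q - ramp).reshape(64, 64).mean(axis=1)).max()

print(band_error(q_plain), band_error(q_dith))  # dither shrinks local bias
```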
Energy Technology Data Exchange (ETDEWEB)
Modeste Nguimdo, Romain, E-mail: Romain.Nguimdo@vub.ac.be [Applied Physics Research Group, APHY, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussel (Belgium); Tchitnga, Robert [Laboratory of Electronics, Automation and Signal Processing, Department of Physics, University of Dschang, P.O. Box 67, Dschang (Cameroon); Woafo, Paul [Laboratory of Modelling and Simulation in Engineering and Biological Physics, Faculty of Science, University of Yaoundé I, P.O. Box 812, Yaoundé (Cameroon)
2013-12-15
We numerically investigate the possibility of using a coupling to increase the complexity in the simplest chaotic two-component electronic circuits operating at high frequency. We subsequently show that complex behaviors generated in such coupled systems, together with the post-processing, are suitable for generating bit-streams which pass all the NIST tests for randomness. The electronic circuit is built up by unidirectionally coupling three two-component (one active and one passive) oscillators in a ring configuration through resistances. It turns out that, with such a coupling, highly chaotic signals can be obtained. By extracting points at a fixed interval of 10 ns (corresponding to a bit rate of 100 Mb/s) from such chaotic signals, each point being simultaneously converted to 16 bits (or 8 bits), we find that the binary sequence constructed by including the 10 (or 2) least significant bits passes statistical tests of randomness, meaning that bit-streams with random properties can be achieved with an overall bit rate up to 10 × 100 Mb/s = 1 Gbit/s (or 2 × 100 Mb/s = 200 Mbit/s). Moreover, by varying the bias voltages, we also investigate the parameter range for which more complex signals can be obtained. Besides being simple to implement, the two-component electronic circuit setup is very cheap as compared to optical and electro-optical systems.
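The extraction step (quantize each chaotic sample, keep only the least significant bits) is easy to sketch. A logistic map stands in for the chaotic circuit here; the parameters are illustrative, and passing the full NIST suite is of course not claimed for this toy:

```python
# Stand-in for the chaotic circuit: a logistic map sampled at fixed intervals.
x = 0.123456
bits = []
for _ in range(2000):
    x = 3.99 * x * (1.0 - x)            # chaotic iterate
    sample = int(x * 65535) & 0xFFFF    # 16-bit quantization of the sample
    for k in range(10):                 # keep the 10 least significant bits
        bits.append((sample >> k) & 1)

# 10 bits per sample at 100 MS/s would give the paper's 1 Gbit/s aggregate.
ones = sum(bits) / len(bits)
print(len(bits), round(ones, 3))        # ones fraction near 0.5 for balance
```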
A 9–12 GHz 5-bit active LO phase shifter with a new vector sum method
International Nuclear Information System (INIS)
This paper presents a 5-bit active LO phase shifter with a new vector sum method for 9–12 GHz applications. The 5-bit phase shifter is composed of four 3-bit sub phase shifters by adopting the new vector sum method, which reduces the requirements on the resolution of the variable gain amplifier (VGA). The variable gain function is realized by switching on/off parallel input transistor pairs rather than by changing the bias current of the VGA, which avoids the linearity variation and drain-source voltage variation existing in the quadrature vector sum active phase shifter. The 5-bit active LO phase shifter is fabricated in TSMC 0.13 μm CMOS technology. The measured results show that the phase shifter achieves 5-bit phase shift accuracy. The average conversion gain for the 32 phase states is −0.5 to 7 dB from 9 to 12 GHz. The RMS gain error and the RMS phase error are smaller than 0.8 dB and 4°, respectively. The current consumption is 27.7 mA from a 1.2 V supply voltage. (semiconductor integrated circuits)
Institute of Scientific and Technical Information of China (English)
张临宏; 陈海涛; 唐丽蓉
2015-01-01
Objective: To carry out quality control circle (QCC) activities to reduce the outpatient pharmacy dispensing error rate and improve the quality of pharmacy service. Methods: The QCC was applied to outpatient pharmacy quality control; the composition and causes of dispensing errors were analyzed and corresponding preventive measures were formulated. Results: The dispensing error rate dropped by 29.73% after carrying out the quality control circle activity. Conclusion: QCC activities can not only reduce the outpatient pharmacy dispensing error rate but also strengthen team cooperation, improve mutual communication and coordination among staff, and raise working enthusiasm.
Deep Diving into BitTorrent Locality
Cuevas, Ruben; Yang, Xiaoyuan; Siganos, Georgos; Rodriguez, Pablo
2009-01-01
Localizing BitTorrent traffic within an ISP in order to avoid excessive and oftentimes unnecessary transit costs has recently received a lot of attention. Most existing work has focused on exploring the design space between bilateral cooperation schemes that require ISPs and P2P applications to talk to each other, and unilateral (client- or ISP-only) solutions that do not require cooperation. The above proposals have been evaluated in a handful of ISPs with encouraging initial results. In this work we delve into the details of locality and attempt to answer yet unanswered questions like "what are the boundaries of win-win outcomes for both ISPs and users from locality?", "what does the tradeoff between ISPs and users look like?", and "are some ISPs more in need of locality biasing than others?". To answer the above questions we have conducted a large scale measurement study of BitTorrent demand demographics spanning 100K torrents with more than 3.5M clients at 9K ASes. We have also dev...
Object tracking based on bit-planes
Li, Na; Zhao, Xiangmo; Liu, Ying; Li, Daxiang; Wu, Shiqian; Zhao, Feng
2016-01-01
Visual object tracking is one of the most important components in computer vision. The main challenge for robust tracking is to handle illumination change, appearance modification, occlusion, motion blur, and pose variation. But in surveillance videos, factors such as low resolution, high levels of noise, and uneven illumination further increase the difficulty of tracking. To tackle this problem, an object tracking algorithm based on bit-planes is proposed. First, intensity and local binary pattern features represented by bit-planes are used to build two appearance models, respectively. Second, in the neighborhood of the estimated object location, a region that is most similar to the models is detected as the tracked object in the current frame. In the last step, the appearance models are updated with new tracking results in order to deal with environmental and object changes. Experimental results on several challenging video sequences demonstrate the superior performance of our tracker compared with six state-of-the-art tracking algorithms. Additionally, our tracker is more robust to low resolution, uneven illumination, and noisy video sequences.
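Bit-plane decomposition itself is a one-liner per plane; a minimal illustration of the representation the tracker builds on (generic decomposition, not the paper's LBP-based appearance model):

```python
import numpy as np

img = np.array([[200, 77],
                [18, 255]], dtype=np.uint8)

# Decompose an 8-bit image into its 8 bit-planes; plane 7 is the MSB.
planes = [(img >> k) & 1 for k in range(8)]

# Reassembling all planes recovers the image exactly (lossless decomposition).
recon = sum(p.astype(np.uint16) << k for k, p in enumerate(planes))
assert (recon == img).all()

print(planes[7])   # MSB plane: 1 wherever the pixel value is >= 128
# [[1 0]
#  [0 1]]
```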
Where the "it from bit" come from?
Foschini, Luigi
2013-01-01
In his 1989 essay, John Archibald Wheeler has tried to answer the eternal question of existence. He did it by searching for links between information, physics, and quanta. The main concept emerging from his essay is that "every physical quantity, every it, derives its ultimate significance from bits, binary yes-or-no indications". This concept has been summarized in the catchphrase "it from bit". In the Wheeler's essay, it is possible to read several times the echoes of the philosophy of Niels Bohr. The Danish physicist has pointed out how the quantum and relativistic physics - forcing us to abandon the anchor of the visual reference of common sense - have imposed a greater attention to the language. Bohr did not deny the physical reality, but recognizes that there is always need of a language no matter what a person wants to do. To put it as Carlo Sini, language is the first toolbox that man has at hands to analyze the experience. It is not a thought translated into words, because to think is to operate with...
A single channel, 6-bit 410-MS/s 3 bits/stage asynchronous SAR ADC based on resistive DAC
Xue, Han; Qi, Wei; Huazhong, Yang; Hui, Wang
2015-05-01
This paper presents a single channel, low power 6-bit 410-MS/s asynchronous successive approximation register analog-to-digital converter (SAR ADC) for ultrawide bandwidth (UWB) communication, prototyped in a SMIC 65-nm process. Based on the 3 bits/stage structure, resistive DAC, and the modified asynchronous successive approximation register control logic, the proposed ADC attains a peak spurious-free dynamic range (SFDR) of 41.95 dB, and a signal-to-noise and distortion ratio (SNDR) of 28.52 dB for 370 MS/s. At the sampling rate of 410 MS/s, this design still performs well with a 40.71-dB SFDR and 30.02-dB SNDR. A four-input dynamic comparator is designed so as to decrease the power consumption. The measurement results indicate that this SAR ADC consumes 2.03 mW, corresponding to a figure of merit of 189.17 fJ/step at 410 MS/s. Project supported by the National Science Foundation for Young Scientists of China (No. 61306029) and the National High Technology Research and Development Program of China (No. 2013AA014103).
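The quoted figure of merit can be checked from the reported numbers with the standard Walden formula FoM = P / (2^ENOB · fs); the small gap to the quoted 189.17 fJ/step comes from rounding in the published SNDR:

```python
import math

sndr_db = 30.02      # reported SNDR at 410 MS/s
power_w = 2.03e-3    # reported power consumption
fs = 410e6           # sampling rate

enob = (sndr_db - 1.76) / 6.02       # effective number of bits from SNDR
fom = power_w / (2 ** enob * fs)     # Walden figure of merit, J/conversion-step
print(f"ENOB = {enob:.2f} bits, FoM = {fom * 1e15:.1f} fJ/step")
```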
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management and communication system. Data compression and error-control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: lossless techniques, which guarantee full reconstruction of the data, and lossy techniques, which generally give a higher data-compaction ratio but incur some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression became the primary focus of the technology development. When transmitting data obtained by lossless compression, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, it is necessary to compute the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; and (3) some efficient improved geometric Goppa codes for disk memory
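The Rice algorithm mentioned above splits each nonnegative sample into a unary-coded quotient and a k-bit binary remainder. A minimal sketch follows; the parameter k, the string-of-bits representation, and the absence of any preprocessing stage are simplifying assumptions, not the extended algorithm developed in this work:

```python
def rice_encode(value, k):
    # Rice code with parameter k >= 1: unary quotient, '0' terminator,
    # then the k-bit binary remainder (a string of '0'/'1' for clarity).
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    # Inverse of rice_encode; returns (value, number_of_bits_consumed).
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r, q + 1 + k
```

Small values yield short codewords, which is why practical coders first map prediction residuals to small nonnegative integers.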
Error analysis in laparoscopic surgery
Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.
1998-06-01
Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are often brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.
Designing an efficient LT-code with unequal error protection for image transmission
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from Earth observation satellites spans different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by onboard imaging devices and must be transmitted to the Earth using a communication system. Even though a high-resolution image can produce a better quality of service, it requires transmitters with a high bit rate, which demand a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bits. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced, and it was shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
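An LT-code generates each coded packet by XOR-ing a randomly chosen set of source blocks, and a peeling decoder recovers the blocks from any sufficiently large packet set. A hedged sketch under assumed simplifications (integer payloads, a toy degree distribution standing in for the robust soliton distribution, and no unequal error protection):

```python
import random

def lt_encode(blocks, n_packets, seed=0):
    # Each packet XORs a random subset of source blocks; the degree d
    # is drawn from a toy distribution (an assumption for illustration).
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        d = min(rng.choice([1, 1, 2, 2, 2, 3, 4]), k)
        idx = frozenset(rng.sample(range(k), d))
        payload = 0
        for i in idx:
            payload ^= blocks[i]
        packets.append((idx, payload))
    return packets

def lt_decode(packets, k):
    # Peeling decoder: strip already-decoded blocks from each packet,
    # then absorb any packet whose neighbour set shrinks to one block.
    work = [(set(idx), payload) for idx, payload in packets]
    decoded = {}
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for j, (idx, payload) in enumerate(work):
            for i in list(idx):
                if i in decoded:
                    payload ^= decoded[i]
                    idx.discard(i)
            work[j] = (idx, payload)
            if len(idx) == 1:
                (i,) = idx
                if i not in decoded:
                    decoded[i] = payload
                    progress = True
    return [decoded.get(i) for i in range(k)]
```

Unequal error protection, as in the paper, would bias packet membership toward the high-priority prefix of the compressed stream rather than sampling uniformly.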
Compact FPGA-based beamformer using oversampled 1-bit A/D converters
DEFF Research Database (Denmark)
Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt
2005-01-01
quadrature components. That information is sufficient for presenting a B-mode image and creating a color flow map. The high sampling rate provides the necessary delay resolution for the focusing. The low channel data width (1-bit) makes it possible to construct a compact beamformer logic. The signal...
Development and characterisation of FPGA modems using forward error correction for FSOC
Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried
2016-05-01
In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in an FPGA, with a data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, 7/8-rate low-density parity-check (LDPC) forward error correction is implemented along with custom bit and frame synchronisation and a variable-length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems, using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acousto-optic modulator. The scintillation index, transmitted optical power and scintillation bandwidth can all be independently varied, allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5 km.
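The log-normal fading imposed by such a test-bed can also be reproduced numerically. A hedged Monte-Carlo sketch, where on-off keying, a fixed 0.5 decision threshold, and unit-mean irradiance are illustrative assumptions (the actual modem uses LDPC-coded signalling and interleaving):

```python
import math
import random

def fso_ook_ber(snr_db, sigma_ln=0.3, n_bits=20000, seed=1):
    # Monte-Carlo BER of on-off keying through log-normal scintillation.
    # Assumptions: irradiance I = exp(2X) with X ~ N(-sigma^2, sigma^2)
    # so that E[I] = 1, AWGN with variance 1/(2*SNR), threshold 0.5.
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    noise_sigma = 1 / math.sqrt(2 * snr)
    errors = 0
    for _ in range(n_bits):
        bit = rng.getrandbits(1)
        irradiance = math.exp(2 * rng.gauss(-sigma_ln ** 2, sigma_ln))
        received = irradiance * bit + rng.gauss(0, noise_sigma)
        errors += (received > 0.5) != bool(bit)
    return errors / n_bits
```

At high SNR the residual errors come almost entirely from deep fades, which is exactly the regime where interleaving plus FEC pays off.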
A Fast Dynamic 64-bit Comparator with Small Transistor Count
Directory of Open Access Journals (Sweden)
Chua-Chin Wang
2002-01-01
Full Text Available In this paper, we propose a 64-bit fast dynamic CMOS comparator with a small transistor count. Major features of the proposed comparator are the rearrangement and re-ordering of transistors in the evaluation block of a dynamic cell, and the insertion of a weak n feedback inverter, which helps the pull-down operation to ground. The simulation results given by pre-layout tools, e.g. HSPICE, and post-layout tools, e.g. TimeMill, reveal that the delay is around 2.5 ns while the operating clock rate reaches 100 MHz. A physical chip was fabricated to verify the correctness of our design using UMC (United Microelectronics Company) 0.5 μm 2P2M technology.
Errors in thermochromic liquid crystal thermometry
Wiberg, Roland; Lior, Noam
2004-09-01
This article experimentally investigates and assesses the errors that may be incurred in the hue-based thermochromic liquid crystal (TLC) method, and their causes. The errors include response time, hysteresis, aging, surrounding illumination disturbance, direct illumination and viewing angle, amount of light into the camera, TLC thickness, digital resolution of the image conversion system, and measurement noise. Some of the main conclusions are that: (1) the 3×8-bit digital representation of the red, green and blue TLC color values produces a temperature measurement error of typically 1% of the TLC effective temperature range; (2) an eight-fold variation of the light intensity into the camera produced variations which were not discernible from the digital resolution error; (3) the measured temperature depends on the TLC film thickness; and (4) thicker films are less susceptible to aging and thickness nonuniformities.
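Hue-based TLC thermometry maps each pixel's RGB triple to a hue value, which is then calibrated against temperature. A minimal sketch of the hue computation using the standard hexagonal hue formula (the temperature calibration itself is omitted):

```python
def rgb_to_hue(r, g, b):
    # Hue in degrees [0, 360) from 8-bit RGB values, via the standard
    # hexagonal hue formula; achromatic pixels (max == min) return 0.
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    c = mx - mn
    if c == 0:
        return 0.0
    if mx == r:
        h = ((g - b) / c) % 6
    elif mx == g:
        h = (b - r) / c + 2
    else:
        h = (r - g) / c + 4
    return 60.0 * h
```

With 3×8-bit quantization, neighbouring RGB codes map to hue steps of finite size, which is one way the digital-resolution error discussed above arises.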
Space telemetry degradation due to Manchester data asymmetry induced carrier tracking phase error
Nguyen, Tien M.
1991-01-01
The deleterious effects that Manchester (Bi-phase) data asymmetry has on the performance of phase-modulated residual carrier communication systems are analyzed. Expressions for the power spectral density of an asymmetric Manchester data stream, the interference-to-carrier signal power ratio (I/C), and the error probability performance are derived. Since data asymmetry can cause undesired spectral components at the carrier frequency, the I/C ratio is given as a function of both the data asymmetry and the telemetry modulation index. Also presented are the sensitivities of the asymmetry-induced carrier tracking phase error and the system bit-error rate to various parameters of the models.
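Data asymmetry in a Manchester stream shifts the mid-bit transition, leaving a net DC component that leaks power toward the residual carrier. A hedged sketch (rectangular ±1 pulses and a simple fractional-shift asymmetry model are assumptions, not the paper's exact model):

```python
def manchester_waveform(bits, samples_per_bit=8, asymmetry=0.0):
    # Bi-phase-L: '1' -> +1 then -1, '0' -> -1 then +1 within each bit.
    # `asymmetry` shifts the mid-bit transition by that fraction of the
    # bit period, a simple stand-in for the asymmetry analyzed above.
    wave = []
    for b in bits:
        edge = int(samples_per_bit * (0.5 + asymmetry))
        first, second = (1, -1) if b else (-1, 1)
        wave += [first] * edge + [second] * (samples_per_bit - edge)
    return wave
```

A symmetric waveform integrates to zero over each bit, while an asymmetric one does not; that residual mean is what shows up as interference at the carrier frequency.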
On the Performance of Multihop Heterodyne FSO Systems With Pointing Errors
Zedini, Emna
2015-03-30
This paper reports the end-to-end performance analysis of a multihop free-space optical system with amplify-and-forward (AF) channel-state-information (CSI)-assisted or fixed-gain relays using heterodyne detection over Gamma–Gamma turbulence fading with pointing error impairments. In particular, we derive new closed-form results for the average bit error rate (BER) of a variety of binary modulation schemes and the ergodic capacity in terms of the Meijer G-function. We then offer new accurate asymptotic results for the average BER and the ergodic capacity at high SNR values in terms of simple elementary functions. For the capacity, novel asymptotic results at low and high average SNR regimes are also obtained via an alternative moments-based approach. All analytical results are verified via computer-based Monte-Carlo simulations.
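The Gamma–Gamma turbulence model treats irradiance as the product of two independent unit-mean Gamma variates. A hedged Monte-Carlo cross-check of an average BER under that fading model; BPSK with instantaneous BER Q(sqrt(2·SNR·I)) and the α, β values are illustrative assumptions, not the paper's heterodyne multihop setup:

```python
import math
import random

def gg_ber_bpsk(avg_snr_db, alpha=4.0, beta=2.0, n=20000, seed=7):
    # Monte-Carlo average BER over Gamma-Gamma turbulence: irradiance
    # I = X * Y with unit-mean Gamma factors of shapes alpha and beta.
    # Coherent BPSK with instantaneous BER Q(sqrt(2*SNR*I)) is an
    # illustrative assumption standing in for the paper's analysis.
    rng = random.Random(seed)
    snr = 10 ** (avg_snr_db / 10)
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
    total = 0.0
    for _ in range(n):
        i = rng.gammavariate(alpha, 1 / alpha) * rng.gammavariate(beta, 1 / beta)
        total += q(math.sqrt(2 * snr * i))
    return total / n
```

Simulations like this are the standard sanity check for closed-form Meijer G-function BER expressions.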
The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks
Afify, Laila H.
2015-08-18
Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts away many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach was proposed to extend the analysis to capture bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework that is also able to capture fine wireless communication details, similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular network scenarios.
Word line program disturbance based data retention error recovery strategy for MLC NAND Flash
Ma, Haozhi; Pan, Liyang; Song, Changlai; Gao, Zhongyi; Wu, Dong; Xu, Jun
2015-07-01
NAND Flash has been widely used in storage solutions for portable systems due to its improvements in data throughput, power consumption and mechanical reliability. However, NAND Flash suffers an inevitable decline in reliability due to scaling down and multi-level cell (MLC) technology. The high data retention error rate in highly stressed blocks leads to stronger ECC being deployed in systems, with higher hardware overhead and spare-bit cost. In this paper, a word line program disturbance (WPD) based data retention error recovery strategy, which induces extra electron injection to compensate for floating-gate electron leakage during long retention times, is proposed to reduce the data retention error rate and improve the retention reliability of highly scaled MLC NAND Flash memories. The proposed strategy is applied to 2×-nm MLC NAND Flash, and the device's one-year retention error rate after 3 K, 4 K, 5 K and 6 K P/E cycles decreases by 75.7%, 79.3%, 82.3% and 83.3%, respectively.
International Nuclear Information System (INIS)
The Westinghouse SA3823 64K E2PROM radiation-hardened SONOS non-volatile memory exhibited a single-event-upset (SEU) threshold in the read mode of 60 MeV·cm²/mg, and 40 MeV·cm²/mg for data latch errors. The minimum threshold for address latch errors was 35 MeV·cm²/mg. Hard errors were observed with Kr at Vp = 8.5 V and with Xe at programming voltages (Vp) as low as 7.5 V. No hard errors were observed with Cu at any angle up to Vp = 11 V. The system specification of no hard errors for Ar ions or lighter was exceeded. No single-event latchup (SEL) was observed in these devices for the conditions examined. The Analog Devices AD7876 12-bit analog-to-digital converter (ADC) had an upset threshold of 2 MeV·cm²/mg for all values of input voltage (Vin), while the worst-case saturation cross section of ∼2 × 10⁻³ cm² was measured with Vin = 4.49 V. No latchup was observed. The Intel 82C527 serial communications controller exhibited a minimum threshold for upset of 2 MeV·cm²/mg and a saturation cross section of about 5 × 10⁻⁴ cm². For latchup, the minimum threshold was measured at 17 MeV·cm²/mg, and the cross section saturated at about 3 × 10⁻⁴ cm². Error rates for the expected applications are presented
Hill Cipher and Least Significant Bit for Image Messaging Security
Directory of Open Access Journals (Sweden)
Muhammad Husnul Arif
2016-02-01
Full Text Available Exchange of information through cyberspace has many benefits, for example speed and freedom from physical distance and space limits. But these activities can also pose a security risk for confidential information, so safeguards are needed to protect data transmitted through the Internet. Cryptography and steganography provide such safeguards: an encryption algorithm turns the message to be sent (plaintext) into a randomized message (ciphertext), and steganography hides it. The cryptographic technique applied here is the Hill cipher, combined with the Least Significant Bit steganography technique. The combined techniques maintain the confidentiality of messages, because people who do not know the secret key will find it difficult to recover the message contained in the stego-image, and the image into which the message has been inserted cannot be used as a cover image. Messages were successfully inserted and extracted for all samples in the *.bmp, *.png and *.jpg image formats at resolutions of 512 × 512 and 256 × 256 pixels. The MSE and PSNR results are not influenced by file format or file size, but by the image dimensions: the larger the image dimensions, the smaller the MSE, meaning the image error gets smaller.
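The LSB embedding step described above can be sketched as follows; flat pixel lists and a plain bit list are simplifications, and the Hill-cipher encryption of the message, which the paper applies first, is omitted here:

```python
def embed_lsb(pixels, message_bits):
    # Replace the least significant bit of each leading pixel with one
    # message bit; `pixels` is a flat list of 0-255 intensity values.
    assert len(message_bits) <= len(pixels), "message too long for cover"
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bits):
    # Read the message back from the low bit of each pixel.
    return [p & 1 for p in pixels[:n_bits]]

def mse(a, b):
    # Mean squared error between cover and stego pixels.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

Since each pixel changes by at most 1, the per-pixel squared error is at most 1, which is why MSE depends on image dimensions rather than file format.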
Bit-Optimal Lempel-Ziv compression
Ferragina, Paolo; Venturini, Rossano
2008-01-01
One of the most famous and most investigated lossless data-compression schemes is the one introduced by Lempel and Ziv about 40 years ago. This scheme is known as "dictionary-based compression" and consists of squeezing an input string by replacing some of its substrings with (shorter) codewords which are actually pointers to a dictionary of phrases built as the string is processed. Surprisingly enough, although many fundamental results are nowadays known about upper bounds on the speed and effectiveness of this compression process (and references therein), ``we are not aware of any parsing scheme that achieves optimality when the LZ77-dictionary is in use under any constraint on the codewords other than being of equal length'' [N. Rajpoot and C. Sahinalp. Handbook of Lossless Data Compression, chapter Dictionary-based data compression. Academic Press, 2002, p. 159]. Here optimality means achieving the minimum number of bits in compressing each individual input string, without any assumption on its ge...
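A greedy longest-match LZ77 parser, the baseline whose bit-optimality the paper questions, can be sketched as follows (ASCII strings and the (offset, length, next_char) triple representation are assumptions for illustration):

```python
def lz77_parse(s, window=4096):
    # Greedy parse: emit (offset, length, next_char) triples, always
    # taking the longest match found in the sliding window.
    i, phrases = 0, []
    while i < len(s):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(s) - 1 and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        phrases.append((best_off, best_len, s[i + best_len]))
        i += best_len + 1
    return phrases

def lz77_unparse(phrases):
    # Rebuild the string; out[-off] handles overlapping (self-referential)
    # matches because the output grows one character at a time.
    out = []
    for off, length, ch in phrases:
        for _ in range(length):
            out.append(out[-off])
        out.append(ch)
    return "".join(out)
```

Greedy longest-match minimizes the number of phrases, but once codewords have variable bit lengths, fewer phrases no longer implies fewer bits, which is exactly the gap bit-optimal parsing addresses.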
Efficient Algorithms for Optimal 4-Bit Reversible Logic System Synthesis
Directory of Open Access Journals (Sweden)
Zhiqiang Li
2013-01-01
Full Text Available Owing to the exponential nature of the memory and run-time complexity, many methods can only synthesize 3-bit reversible circuits and cannot synthesize 4-bit reversible circuits well. We absorb the ideas of our 3-bit synthesis algorithms based on a hash table and present efficient algorithms which can construct almost all optimal 4-bit reversible logic circuits with many types of gates at minimal length cost, based on constructing the shortest coding and a specific topological compression; thus, the lossless compression ratio of the space of n-bit circuits reaches nearly 2×n!. This paper presents the first work to create all 3120218828 optimal 4-bit reversible circuits with up to 8 gates for the CNT (Controlled-NOT gate, NOT gate, and Toffoli gate) library, and it can quickly reach 16 gates by cascading the created circuits.
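Reversible circuits over a CNT-style library compute bijections on n-bit states. A minimal sketch (integer-encoded 4-bit states and the gate-list convention are assumptions) that verifies reversibility by checking a circuit permutes all 16 states:

```python
def cnot(state, c, t):
    # Flip target bit t iff control bit c is 1 (state is a 4-bit integer).
    if (state >> c) & 1:
        state ^= 1 << t
    return state

def toffoli(state, c1, c2, t):
    # Flip target bit t iff both control bits are 1.
    if (state >> c1) & 1 and (state >> c2) & 1:
        state ^= 1 << t
    return state

def apply_circuit(state, gates):
    # A circuit is a list of (gate_function, *wire_indices) tuples.
    for gate, *args in gates:
        state = gate(state, *args)
    return state
```

Because CNOT and Toffoli are self-inverse, reversing the gate list inverts the whole circuit, a property the synthesis search space relies on.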
Temperature-compensated 8-bit column driver for AMLCD
Dingwall, Andrew G. F.; Lin, Mark L.
1995-06-01
An all-digital, 5 V input, 50 MHz bandwidth, 10-bit resolution, 128-column AMLCD column driver IC has been designed and tested. The 10-bit design can enhance display definition over 6-bit and 8-bit column drivers. Precision is realized with on-chip switched-capacitor DACs plus transparently auto-offset-calibrated opamp outputs. The increased resolution permits multiple 10-bit digital gamma remappings in EPROMs over temperature. Driver IC features include an externally programmable number of output columns, bi-directional digital data shifting, user-defined row/column/pixel/frame inversion, power management, timing control for daisy-chained column drivers, and digital bit inversion. The architecture uses fewer reference power supplies.
Institute of Scientific and Technical Information of China (English)
Zhu Xiaoshi; Chen Chixiao; Xu Jialiang; Ye Fan; Ren Junyan
2013-01-01
A sampling switch with an embedded digital-to-skew converter (DSC) is presented. The proposed switch eliminates time-interleaved ADCs' skews by adjusting the boosted voltage. A similar bridged capacitors' charge sharing structure is used to minimize the area. The circuit is fabricated in a 0.18 μm CMOS process and achieves sub-1 ps resolution and a 200 ps timing range at a rate of 100 MS/s. The power consumption is 430 μW at maximum. The measurement results also include a 2-channel 14-bit 100 MS/s time-interleaved ADC (TI-ADC) demonstration with the proposed DSC switch. This scheme is widely applicable for the clock skew and aperture error calibration demanded in TI-ADCs and SHA-less ADCs.
Interpreting Cross-correlations of One-bit Filtered Seismic Noise
Hanasoge, Shravan
2013-01-01
Seismic noise, generated by oceanic microseisms and other sources, illuminates the crust in a manner different from tectonic sources, and therefore provides independent information. The primary measurable is the two-point cross-correlation, evaluated using traces recorded at a pair of seismometers over a finite-time interval. However, raw seismic traces contain intermittent large-amplitude perturbations arising from tectonic activity and instrumental errors, which may corrupt the estimated cross-correlations of microseismic fluctuations. In order to diminish the impact of these perturbations, the recorded traces are filtered using the nonlinear one-bit digitizer, which replaces each measurement by its sign. Previous theory shows that for stationary Gaussian-distributed seismic noise fluctuations, one-bit and raw correlation functions are related by a simple invertible transformation. Here we extend this to show that the simple correspondence between these two correlation techniques remains valid for non-st...
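The one-bit digitizer simply replaces each sample by its sign, yet the lag of the cross-correlation peak survives this nonlinearity. A small sketch (pure-Python lists; real processing would also window, whiten and stack the traces):

```python
def one_bit(trace):
    # One-bit digitizer: keep only the sign of each sample (-1, 0, +1).
    return [(x > 0) - (x < 0) for x in trace]

def cross_correlation(a, b, max_lag):
    # Two-point cross-correlation of equal-length traces for lags
    # -max_lag .. +max_lag (zero-padded at the edges).
    n = len(a)
    return [sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
            for lag in range(-max_lag, max_lag + 1)]
```

Because a single large-amplitude glitch contributes at most ±1 per sample after one-bit filtering, it can no longer dominate the correlation sum.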
Numerical optimization of writer and media for bit patterned magnetic recording
Kovacs, A; Schabes, M E; Schrefl, T
2016-01-01
In this work we present a micromagnetic study of the performance potential of bit-patterned (BP) magnetic recording media via joint optimization of the design of the media and of the magnetic write heads. Because the design space is large and complex, we developed a novel computational framework suitable for parallel implementation on compute clusters. Our technique combines advanced global optimization algorithms and finite-element micromagnetic solvers. Targeting data bit densities of 4 Tb/in², we optimize designs for centered, staggered, and shingled BP writing. The magnetization dynamics of the switching of the exchange-coupled composite BP islands of the media is treated micromagnetically. Our simulation framework takes into account not only the dynamics of on-track errors but also of thermally induced adjacent-track erasure. With co-optimized write heads, the results show superior performance of shingled BP magnetic recording, where we identify two particular designs achieving wri...
CTracker : a Distributed BitTorrent Tracker Based on Chimera
Jimenez, Raúl; Knutsson, Björn
2008-01-01
There are three major open issues in the BitTorrent peer discovery system, which are not solved by any of the currently deployed solutions. These issues seriously threaten BitTorrent's scalability, especially when considering that mainstream content distributors could start using BitTorrent for distributing content to millions of users simultaneously in the near future. In this paper these issues are addressed by proposing a topology-aware distributed tracking system as a replacement for both...
Information Hiding Using Least Significant Bit Steganography and Cryptography
Shailender Gupta; Ankur Goyal; Bharat Bhushan
2012-01-01
Steganalysis is the art of detecting a message's existence and blocking covert communication. Various steganography techniques have been proposed in the literature. Least Significant Bit (LSB) steganography is one such technique, in which the least significant bit of the image is replaced with a data bit. As this method is vulnerable to steganalysis, we encrypt the raw data before embedding it in the image to make it more secure. Though the encryption process increases the time complexi...
The Extensive Bit-level Encryption System (EBES)
Satyaki Roy
2013-01-01
In the present work, the Extensive Bit-level Encryption System (EBES), a bit-level encryption mechanism, is introduced. It is a symmetric key cryptographic technique that combines advanced randomization of bits and serial bitwise feedback generation modules. After repeated testing with a variety of test inputs and frequency analysis, it is safe to conclude that the algorithm is free from standard cryptographic attacks. It can effectively encrypt short messages and passwords.
Development and testing of a Mudjet-augmented PDC bit.
Energy Technology Data Exchange (ETDEWEB)
Black, Alan (TerraTek, Inc.); Chahine, Georges (DynaFlow, Inc.); Raymond, David Wayne; Matthews, Oliver (Security DBS); Grossman, James W.; Bertagnolli, Ken (US Synthetic); Vail, Michael (US Synthetic)
2006-01-01
This report describes a project to develop technology to integrate passively pulsating, cavitating nozzles within Polycrystalline Diamond Compact (PDC) bits for use with conventional rig pressures to improve the rock-cutting process in geothermal formations. The hydraulic horsepower on a conventional drill rig is significantly greater than that delivered to the rock through bit rotation. This project seeks to leverage this hydraulic resource to extend PDC bits to geothermal drilling.
BitTorrent Swarm Analysis through Automation and Enhanced Logging
Răzvan Deaconescu; Marius Sandu-Popa; Adriana Drăghici; Nicolae Țăpuș
2011-01-01
Peer-to-Peer protocols currently form the most heavily used protocol class in the Internet, with BitTorrent, the most popular protocol for content distribution, as its flagship. A high number of studies and investigations have been undertaken to measure, analyse and improve the inner workings of the BitTorrent protocol. Approaches such as tracker message analysis, network probing and packet sniffing have been deployed to understand and enhance BitTorrent's internal behaviour. In this paper we...
Device for lubricating sealed support of a cutter bit
Energy Technology Data Exchange (ETDEWEB)
Grushkin, B.N.; Balabashin, B.P.; Popov, L.N.; Spivak, A.I.; Yudin, A.S.; Zhulayev, V.P.
1982-01-01
A device is proposed for lubricating the sealed support of a cutter bit. It contains a vessel arranged in the bit clamp for supplying the lubricant material, a pump with a piston, and a closed system of lubricant-supplying channels. In order to improve the efficiency of lubrication when drilling with above-bit shock absorbers, by accelerating the circulation of the lubricating material, the pump piston is installed so that it can interact with the shock absorber.
Gravitational Entropy and String Bits on the Stretched Horizon
Halyo, E
2003-01-01
We show that the entropy of Schwarzschild black holes in any dimension can be described by a gas of free string bits at the stretched horizon. The number of string bits is equal to the black hole entropy and is energy dependent. For an asymptotic observer the bit gas is at the Hawking temperature. We show that the same description is also valid for de Sitter space-times in any dimension.
Reduction of Error Rate in Reserve Calculation of Gravel Quarry
Institute of Scientific and Technical Information of China (English)
向能武; 肖东佑; 司马世华
2014-01-01
The accuracy of the aggregate quarry reserves survey for the Pakistan Karot Hydropower Station Project directly affects the investment decisions of the construction project. The project department established a QC team to improve the original investigation techniques, reduce calculation error, and provide accurate data for the project design. The team applies standardized management to its results: the 'CATIA V5-based Three-dimensional Geological Quarry Modeling Technique' was revised and approved by experts, thereby ensuring the survey and calculation accuracy for similar project quarries.
Energy Technology Data Exchange (ETDEWEB)
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14
Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
BitPredator: A Discovery Algorithm for BitTorrent Initial Seeders and Peers
Energy Technology Data Exchange (ETDEWEB)
Borges, Raymond [West Virginia University; Patton, Robert M [ORNL; Kettani, Houssain [Polytechnic University of Puerto Rico (PUPR); Masalmah, Yahya [Universidad del Turabo
2011-01-01
There is a large amount of illegal content being replicated through peer-to-peer (P2P) networks where BitTorrent is dominant; therefore, a framework to profile and police it is needed. The goal of this work is to explore the behavior of initial seeds and highly active peers to develop techniques to correctly identify them. We intend to establish a new methodology and software framework for profiling BitTorrent peers. This involves three steps: crawling torrent indexers for keywords in recently added torrents using Really Simple Syndication protocol (RSS), querying torrent trackers for peer list data and verifying Internet Protocol (IP) addresses from peer lists. We verify IPs using active monitoring methods. Peer behavior is evaluated and modeled using bitfield message responses. We also design a tool to profile worldwide file distribution by mapping IP-to-geolocation and linking to WHOIS server information in Google Earth.
Medication errors: prescribing faults and prescription errors
Velo, Giampaolo P; Minuz, Pietro
2009-01-01
Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. Inadequate knowledge or competence and ...
The Deliverability of the BIT Programme at Lahti UAS in Training BIT Experts
Nghiem, Duc Long
2014-01-01
Information Technology has become a vital and indispensable part of business in every industry. In fact, IT is the primary factor that differentiates many businesses from their competitors. Organizations usually rely on IT for several strategic business solutions, such as communication, information management, customer relationship management, and marketing. In the near future, the business labor force will see a rising demand for BIT experts who possess both business expertise and IT sk...