An error criterion for determining sampling rates in closed-loop control systems
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
Payment Error Rate Measurement (PERM)
U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...
Comprehensive Error Rate Testing (CERT)
U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...
Wojtas, H
2004-07-01
The main source of errors in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. Effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of polarisation resistance of a single rebar has been examined. The results indicate that modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using the constant diameter of confinement for the calculation of corrosion rate may lead to serious errors when test conditions change. When high corrosion activity of rebar and/or local corrosion occur, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.
Monitoring Error Rates In Illumina Sequencing
Manley, Leigh J.; Ma, Duanduan; Levine, Stuart S.
2016-01-01
Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted. PMID:27672352
Gardes, B.; Chabaud, P.-Y.; Guterman, P.
2012-09-01
In the CoRoT exoplanet field of view, photometric measurements are obtained by aperture integration using a generic collection of masks. The total flux held within the photometric mask may be split into two parts: the target flux itself and the flux due to the nearest neighbours, considered as contaminants. So far ExoDat (http://cesam.oamp.fr/exodat) gives a rough estimate of the contamination rate for all potential exoplanet targets (level-0) based on generic PSF shapes built before CoRoT launch. Here, we present the updated estimate of the contamination rate (level-1) with its associated error. This estimate is done for each target observed by CoRoT in the exoplanet channel using a new catalog of PSFs built from the first available flight images and taking into account the line of sight of the satellite (i.e. the satellite orientation).
Error-associated behaviors and error rates for robotic geology
Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin
2004-01-01
This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.
Structure determines medication errors in nursing units: a mechanistic approach.
Hung, Chang-Chiao; Lee, Bih-O; Tsai, Shu-Ling; Tseng, Yun Shan; Chang, Chia-Hao
2015-03-01
Medication errors have long been considered critical in global health care systems. However, few studies have explored the effects of nursing unit structure on medication errors. The purpose of this study, therefore, was to determine the effects of structural factors on medication errors in nursing units. A total of 977 staff nurses and 62 head nurses participated in this cross-sectional study. The findings show that professional autonomy (β = .53, t = 6.03) and nursing experts (β = .52, t = 5.99) were significantly related to medication error rates. This study shows that structural factors influence medication administration and that the mechanistic approach in particular is associated with low medication error rates. The authors suggest that head nurses consider strategies that require adjustments to unit control mechanisms.
Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure
2013-09-01
High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest using bias correction methods to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
Case study: error rates and paperwork design.
Drury, C G
1998-01-01
A job instruction document, or workcard, for civil aircraft maintenance produced a number of paperwork errors when used operationally. The design of the workcard was compared to the guidelines of Patel et al [1994, Applied Ergonomics, 25 (5), 286-293]. All of the errors occurred in work instructions which did not meet these guidelines, demonstrating that the design of documentation does affect operational performance.
Logical error rate in the Pauli twirling approximation.
Katabarwa, Amara; Geller, Michael R
2015-09-30
Estimates of the performance of error correction protocols are necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes.
Gravity field determination and error assessment techniques
Yuan, D. N.; Shum, C. K.; Tapley, B. D.
1989-01-01
Linear estimation theory, along with a new technique to compute relative data weights, was applied to the determination of the Earth's geopotential field and other geophysical model parameters using a combination of satellite ground-based tracking data, satellite altimetry data, and the surface gravimetry data. The relative data weights for the inhomogeneous data sets are estimated simultaneously with the gravity field and other geophysical and orbit parameters in a least squares approach to produce the University of Texas gravity field models. New techniques to perform calibration of the formal covariance matrix for the geopotential solution were developed to obtain a reliable gravity field error estimate. Different techniques, which include orbit residual analysis, surface gravity anomaly residual analysis, subset gravity solution comparisons and consider covariance analysis, were applied to investigate the reliability of the calibration.
Error rate information in attention allocation pilot models
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
Interbank overnight rate determinants
Kožul Nataša
2015-01-01
Reserve requirements are a regulation imposed by most of the world's central banks, whereby commercial banks must hold a certain fraction of customer deposits in reserve, either deposited at the central bank or in the bank's vaults. While these reserves are calculated periodically, banks usually manage their books daily, which may result in a reserve shortfall or surplus. This phenomenon has led to the emergence of the interbank market, where banks transact with one another, trading interest rate instruments of various maturities. This paper focuses on the overnight interest rate, as it is assumed to be an indicator of the central bank's policy. Moreover, as the overnight rate is included in yield curve construction, it implicitly influences the rates for all longer maturities. Finally, as the equilibrium of reserve supply and demand, the overnight interest rate reflects the dynamics of the interbank market. Here, the main interbank indices are described, before discussing some important features of the overnight rate and the factors underlying its movements.
Total Dose Effects on Error Rates in Linear Bipolar Systems
Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent
2007-01-01
The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.
A Six Sigma approach to the rate and clinical effect of registration errors in a laboratory.
Vanker, Naadira; van Wyk, Johan; Zemlin, Annalise E; Erasmus, Rajiv T
2010-05-01
Laboratory errors made during the pre-analytical phase can have an impact on clinical care. Quality management tools such as Six Sigma may help reduce error rates. To use elements of a Six Sigma model to establish the error rate of test registration onto the laboratory information system (LIS), and to deduce the potential clinical impact of these errors. In this retrospective study, test request forms were compared with the tests registered onto the LIS, and all errors were noted before being rectified. The error rate was calculated. The corresponding patient records were then examined to determine the actual outcome, and to deduce the potential clinical impact of the registration errors. Of the 47 543 tests requested, 72 errors were noted, resulting in an error rate of 0.151%, equating to a sigma score of 4.46. The patient records reviewed indicated that these errors could, in various ways, have impacted on clinical care. This study highlights the clinical effect of errors made during the pre-analytical phase of the laboratory testing process. Reduction of errors may be achieved through implementation of a Six Sigma programme.
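The sigma score quoted in this abstract follows directly from the error proportion via the inverse normal CDF plus the conventional 1.5-sigma long-term shift. A minimal sketch of that arithmetic (my illustration, not the authors' code; the function name is made up):

```python
from statistics import NormalDist

def sigma_score(defects: int, opportunities: int) -> float:
    """Long-term sigma level: the z-value of the process yield plus
    the conventional 1.5-sigma long-term shift."""
    p = defects / opportunities             # defect (error) proportion
    return NormalDist().inv_cdf(1 - p) + 1.5

# Figures from the abstract: 72 errors in 47,543 registrations.
print(sigma_score(72, 47_543))   # close to the reported 4.46
```

Plugging in the abstract's counts reproduces its sigma score, which is a quick way to check that the defect definition and opportunity count are being interpreted the same way.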
Forecasting the Euro exchange rate using vector error correction models
Aarle, B. van; Bos, M.; Hlouskova, J.
2000-01-01
This paper presents an exchange rate model for the Euro exchange rates of four major currencies, namely the US dollar, the British pound, the Japanese yen and the Swiss franc. The model is based on the monetary approach of ex...
Error-rate performance analysis of opportunistic regenerative relaying
Tourki, Kamel
2011-09-01
In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and take into account the effect of the possible erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and considering Rayleigh fading channels. © 2011 IEEE.
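As background to closed-form BER expressions of this kind, the single-link BPSK-over-AWGN baseline can be checked by Monte Carlo against the textbook formula Q(sqrt(2*Eb/N0)). A rough sketch under simplifying assumptions (no fading, unit-energy antipodal symbols; this is not the paper's analysis):

```python
import math
import random

def bpsk_ber_awgn(ebn0_db: float, nbits: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo bit error rate for BPSK over an AWGN channel with
    unit-energy antipodal symbols."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))       # noise std per real dimension
    errors = 0
    for _ in range(nbits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0.0, sigma)
        errors += (received > 0) != bool(bit)
    return errors / nbits

# Textbook reference: BER = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0)).
theory = 0.5 * math.erfc(math.sqrt(10 ** (6 / 10)))
print(bpsk_ber_awgn(6.0), theory)
```

The same comparison pattern (simulated rate against a closed form) is what the paper's validation over Rayleigh fading amounts to, with a fading-averaged expression in place of the AWGN formula.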
Medication Error Reporting Rate and its Barriers and Facilitators among Nurses
Snor Bayazidi
2012-11-01
Introduction: Medication errors are among the most prevalent medical errors leading to morbidity and mortality. Effective prevention of this type of error depends on the presence of a well-organized reporting system. The purpose of this study was to explore the medication error reporting rate and its barriers and facilitators among nurses in teaching hospitals of Urmia University of Medical Sciences (Iran). Methods: In a descriptive study in 2011, 733 nurses working in Urmia teaching hospitals were included. Data was collected using a questionnaire based on the Haddon matrix. The questionnaire consisted of three items about medication error reporting rate, eight items on barriers of reporting, and seven items on facilitators of reporting. The collected data was analyzed by descriptive statistics in SPSS 14. Results: The rate of reporting medication errors among nurses was far lower than the rate of medication errors they had made. Nurses perceived that the most important barriers to reporting medication errors were blaming individuals instead of the system, the consequences of reporting errors, and fear of reprimand and punishment. Some facilitating factors were also determined. Conclusion: Overall, the rate of medication errors was found to be much higher than what had been reported by nurses. Therefore, it is suggested to train nurses and hospital administrators on facilitators of and barriers to error reporting in order to enhance patient safety.
Framed bit error rate testing for 100G ethernet equipment
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2010-01-01
Internet users' behavioural patterns are migrating towards bandwidth-intensive applications, which require a corresponding capacity extension. The emerging 100 Gigabit Ethernet (GE) technology is a promising candidate for providing a ten-fold increase of today's available Internet transmission rate. As the need for 100 Gigabit Ethernet equipment rises, so does the need for equipment which can properly test these systems during development, deployment and use. This paper presents early results from a work-in-progress academia-industry collaboration project and elaborates on the challenges of performing bit error rate testing at 100 Gbps. In particular, we show how Bit Error Rate Testing (BERT) can be performed over an aggregated 100G Attachment Unit Interface (CAUI) by encapsulating the test data in Ethernet frames at line speed. Our results show that framed bit error rate testing can...
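The essence of framed BER testing, filling frames with a known pseudo-random payload and popcounting the bit differences on receipt, can be sketched in a few lines. This is illustrative only; the names and the random bit-flip channel are mine, not the project's:

```python
import random

def count_bit_errors(sent: bytes, received: bytes) -> int:
    """XOR corresponding bytes and popcount the differing bits."""
    return sum(bin(a ^ b).count("1") for a, b in zip(sent, received))

def framed_ber(frames: int, payload_len: int, flip_prob: float, seed: int = 0) -> float:
    """Send `frames` frames of known pseudo-random payload through a
    toy channel that flips each bit independently with probability
    `flip_prob`, and return the measured bit error rate."""
    rng = random.Random(seed)
    errors = total = 0
    for _ in range(frames):
        payload = bytes(rng.randrange(256) for _ in range(payload_len))
        received = bytes(
            b ^ sum(1 << i for i in range(8) if rng.random() < flip_prob)
            for b in payload
        )
        errors += count_bit_errors(payload, received)
        total += 8 * payload_len
    return errors / total

print(framed_ber(100, 1500, 1e-3))
```

In real framed BERT the payload is a standardized PRBS pattern and the comparison runs in hardware at line rate, but the error count per transmitted bit is the same quantity being estimated.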
Impact of translational error-induced and error-free misfolding on the rate of protein evolution.
Yang, Jian-Rong; Zhuang, Shi-Mei; Zhang, Jianzhi
2010-10-19
What determines the rate of protein evolution is a fundamental question in biology. Recent genomic studies revealed a surprisingly strong anticorrelation between the expression level of a protein and its rate of sequence evolution. This observation is currently explained by the translational robustness hypothesis in which the toxicity of translational error-induced protein misfolding selects for higher translational robustness of more abundant proteins, which constrains sequence evolution. However, the impact of error-free protein misfolding has not been evaluated. We estimate that a non-negligible fraction of misfolded proteins are error free and demonstrate by a molecular-level evolutionary simulation that selection against protein misfolding results in a greater reduction of error-free misfolding than error-induced misfolding. Thus, an overarching protein-misfolding-avoidance hypothesis that includes both sources of misfolding is superior to the translational robustness hypothesis. We show that misfolding-minimizing amino acids are preferentially used in highly abundant yeast proteins and that these residues are evolutionarily more conserved than other residues of the same proteins. These findings provide unambiguous support to the role of protein-misfolding-avoidance in determining the rate of protein sequence evolution.
Individual Differences and Rating Errors in First Impressions of Psychopathy
Christopher T. A. Gillen; Henriette Bergstrøm; Adelle E. Forth
2016-10-01
The current study is the first to investigate whether individual differences in personality are related to improved first impression accuracy when appraising psychopathy in female offenders from thin-slices of information. The study also investigated the types of errors laypeople make when forming these judgments. Sixty-seven undergraduates assessed 22 offenders on their level of psychopathy, violence, likability, and attractiveness. Psychopathy rating accuracy improved as rater extroversion-sociability and agreeableness increased and when neuroticism and lifestyle and antisocial characteristics decreased. These results suggest that traits associated with nonverbal rating accuracy or social functioning may be important in threat detection. Raters also made errors consistent with error management theory, suggesting that laypeople overappraise danger when rating psychopathy.
Modeling of Bit Error Rate in Cascaded 2R Regenerators
Öhman, Filip; Mørk, Jesper
2006-01-01
This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments...
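The simplest baseline for a regenerator cascade, ignoring the amplifier-noise and extinction-ratio interplay the paper actually models, treats each hard-decision stage as an independent bit-flip, so stage errors can also cancel in pairs. A toy sketch (my simplification, not the paper's model):

```python
def cascade_ber(per_stage_ber: float, stages: int) -> float:
    """Bit error probability after a cascade of hard-decision
    regenerators: each stage independently flips the running bit with
    its own error probability, so an even number of flips cancels."""
    p = 0.0
    for _ in range(stages):
        e = per_stage_ber
        p = p * (1 - e) + (1 - p) * e   # XOR accumulation of flips
    return p

# For small per-stage BER the cascade BER grows roughly linearly
# with the number of stages:
print(cascade_ber(1e-4, 10))   # close to 1e-3
```

Deviations from this linear growth are exactly what a more detailed model like the paper's captures, since noise accumulation and imperfect reshaping make the stages non-independent.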
Neighbourhood effects on error rates in speech production.
Stemberger, Joseph Paul
2004-01-01
Models of speech production differ on whether phonological neighbourhoods should affect processing, and on whether effects should be facilitatory or inhibitory. Inhibitory effects of large neighbourhoods have been argued to underlie apparent anti-frequency effects, whereby high-frequency default features are more prone to mispronunciation errors than low-frequency nondefault features. Data from the original SLIPs experiments that found apparent anti-frequency effects are analysed for neighbourhood effects. Effects are facilitatory: errors are significantly less likely for words with large numbers of neighbours that share the characteristic that is being primed for error ("friends"). Words in the neighbourhood that do not share the target characteristic ("enemies") have little effect on error rates. Neighbourhood effects do not underlie the apparent anti-frequency effects. Implications for models of speech production are discussed.
The 95% confidence intervals of error rates and discriminant coefficients
Shuichi Shinmura
2015-02-01
Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms by Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research effort was inferior to the decision tree logic developed by the medical doctor. After this experience, we discriminated many data sets and found four problems with discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rates and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
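The k-fold cross-validation idea for attaching a confidence interval to an error rate can be sketched generically: hold out each fold, record the per-fold error rates, and form a normal-approximation interval over them. This is a plain illustration with a made-up fixed-threshold toy classifier, not Revised IP-OLDF:

```python
import math
import random
from statistics import mean, stdev

def kfold_error_ci(xs, ys, train_fn, k=10, seed=0):
    """Mean error rate and normal-approximation 95% CI from k-fold
    cross-validation. train_fn(train_pairs) must return a predictor
    f(x) -> label."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    rates = []
    for fold in folds:
        held_out = set(fold)
        train = [(xs[i], ys[i]) for i in idx if i not in held_out]
        predict = train_fn(train)
        errs = sum(predict(xs[i]) != ys[i] for i in fold)
        rates.append(errs / len(fold))
    m, s = mean(rates), stdev(rates)
    half = 1.96 * s / math.sqrt(k)
    return m, (m - half, m + half)

# Toy use: a fixed-threshold classifier on overlapping 1-D classes.
rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(100)] + [rng.gauss(2, 1) for _ in range(100)]
ys = [0] * 100 + [1] * 100
m, ci = kfold_error_ci(xs, ys, lambda train: (lambda x: int(x > 1.0)))
print(m, ci)
```

The same scheme yields interval estimates for discriminant coefficients by recording the coefficients fitted on each fold instead of the fold error rates.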
Cosmic Ray Spectral Deformation Caused by Energy Determination Errors
Carlson, Per J; Carlson, Per; Wannemark, Conny
2005-01-01
Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, falling steeply with kinetic energy E as E^-2.7, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with the energy, and distortions with a sinusoidal form appear starting at an energy that depends significantly on the error distribution, but at an energy lower than that corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, which often have different error distributions.
Controlling the Type I Error Rate in Stepwise Regression Analysis.
Pohlmann, John T.
Three procedures used to control Type I error rate in stepwise regression analysis are forward selection, backward elimination, and true stepwise. In the forward selection method, a model of the dependent variable is formed by choosing the single best predictor; then the second predictor which makes the strongest contribution to the prediction of…
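The inflation of the Type I error rate under forward selection is easy to demonstrate by simulation: choose the single best of several pure-noise predictors, then test it naively at the nominal level. A sketch of that phenomenon (my illustration, not the paper's procedure; the Fisher z-test is used as an approximate significance test for the correlation):

```python
import math
import random
from statistics import NormalDist

def pearson_r(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def selection_type1_rate(n=30, p=10, alpha=0.05, trials=2000, seed=0):
    """Monte Carlo family-wise Type I error when the best of p
    pure-noise predictors is picked (first step of forward selection)
    and then tested naively at level alpha via the Fisher z-test."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(trials):
        y = [rng.gauss(0, 1) for _ in range(n)]
        best_z = 0.0
        for _ in range(p):
            x = [rng.gauss(0, 1) for _ in range(n)]
            z = abs(math.atanh(pearson_r(x, y))) * math.sqrt(n - 3)
            best_z = max(best_z, z)
        rejections += best_z > crit
    return rejections / trials

print(selection_type1_rate())   # far above the nominal 0.05
```

With 10 independent noise predictors, the chance that the best one looks significant at the 0.05 level is roughly 1 - 0.95^10, which is why the naive test is badly anti-conservative.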
Assessment of salivary flow rate: biologic variation and measure error.
Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.
2004-01-01
OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated measurem
Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher
2017-05-18
Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
DNA barcoding: error rates based on comprehensive sampling.
Christopher P Meyer
2005-12-01
DNA barcoding has attracted attention with promises to aid in species identification and discovery; however, few well-sampled datasets are available to test its performance. We provide the first examination of barcoding performance in a comprehensively sampled, diverse group (cypraeid marine gastropods, or cowries). We utilize previous methods for testing performance and employ a novel phylogenetic approach to calculate intraspecific variation and interspecific divergence. Error rates are estimated for (1) identifying samples against a well-characterized phylogeny, and (2) assisting in species discovery for partially known groups. We find that the lowest overall error for species identification is 4%. In contrast, barcoding performs poorly in incompletely sampled groups. Here, species delineation relies on the use of thresholds, set to differentiate between intraspecific variation and interspecific divergence. Whereas proponents envision a "barcoding gap" between the two, we find substantial overlap, leading to minimal error rates of approximately 17% in cowries. Moreover, error rates double if only traditionally recognized species are analyzed. Thus, DNA barcoding holds promise for identification in taxonomically well-understood and thoroughly sampled clades. However, the use of thresholds does not bode well for delineating closely related species in taxonomically understudied groups. The promise of barcoding will be realized only if based on solid taxonomic foundations.
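The threshold trade-off described here, lumping errors from interspecific distances below the cutoff versus splitting errors from intraspecific distances above it, can be expressed in a few lines. The distance values below are invented toy numbers, not the cowrie data:

```python
def threshold_errors(intra, inter, t):
    """Error rates at distance threshold t: 'lumping' counts
    interspecific distances below t (distinct species conflated),
    'splitting' counts intraspecific distances above t (one species
    split apart)."""
    lump = sum(d < t for d in inter) / len(inter)
    split = sum(d > t for d in intra) / len(intra)
    return lump, split

# Invented toy distances with overlapping ranges: every threshold
# incurs some error, i.e. there is no clean "barcoding gap".
intra = [0.002, 0.004, 0.010, 0.015, 0.030, 0.045]
inter = [0.020, 0.035, 0.050, 0.080, 0.120, 0.200]
for t in (0.01, 0.03, 0.05):
    print(t, threshold_errors(intra, inter, t))
```

When the two distributions overlap, lowering the threshold trades lumping errors for splitting errors and vice versa, which is the mechanism behind the roughly 17% floor the abstract reports.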
Attitude Determination Error Analysis System (ADEAS) mathematical specifications document
Nicholson, Mark; Markley, F.; Seidewitz, E.
1988-01-01
The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
An Empirical State Error Covariance Matrix Orbit Determination Example
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques often cannot be fully trusted to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance
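The construction described above can be sketched in a few lines of NumPy. This is a hedged illustration of the general idea (rescaling the theoretical weighted-least-squares covariance by the average weighted residual variance), not the paper's exact formulation; the toy linear measurement model is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3                                  # measurements, state size
H = rng.normal(size=(m, n))                   # invented linear measurement model
x_true = np.array([1.0, -2.0, 0.5])
sigma = 0.2
y = H @ x_true + rng.normal(scale=sigma, size=m)

W = np.eye(m) / sigma**2                      # weights = assumed R^{-1}
P_theory = np.linalg.inv(H.T @ W @ H)         # traditional covariance: maps only
x_hat = P_theory @ H.T @ W @ y                # the assumed obs errors into state space

# Empirical covariance: rescale by the *average* weighted residual variance,
# so unmodeled errors that show up in the residuals inflate the covariance.
r = y - H @ x_hat
s2 = float(r @ W @ r) / (m - n)               # average form of the performance index
P_empirical = s2 * P_theory
```

If the assumed weights match reality, s2 is near 1 and the two matrices agree; unmodeled error sources push s2 above 1 and grow the empirical covariance accordingly.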
CREME96 and Related Error Rate Prediction Methods
Adams, James H., Jr.
2012-01-01
Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and
Determination of diametral error using finite elements and experimental method
A. Karabulut
2010-01-01
This study presents experimental and numerical analysis of a workpiece clamped at one end on a lathe. The cutting force deflects the workpiece during the turning process. The deflection is measured without contact using a Laser Distance Sensor (LDS). Diametral values are also measured on different sides of the workpiece after each turning operation. It is observed that the diametral error varies with the amount of deflection: the diametral error peaks where the deflection peaks. The finite element model is verified by the experimental results, and the factors causing the diametral error are determined.
Leung, Debbie; Matthews, William; Ozols, Maris; Roy, Aidan
2010-01-01
It is known that the number of different classical messages which can be communicated with a single use of a classical channel with zero probability of decoding error can sometimes be increased by using entanglement shared between sender and receiver. It has been an open question to determine whether entanglement can ever offer an advantage in terms of the zero-error communication rates achievable in the limit of many channel uses. In this paper we show, by explicit examples, that entanglement can indeed increase asymptotic zero-error capacity. Interestingly, in our examples the quantum protocols are based on the root systems of the exceptional Lie groups E7 and E8.
Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access
Zafar, Ammar
2012-12-29
In this paper, we present an optimal resource allocation (ORA) scheme for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints on the system are considered: both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with the direct link only and with uniform power allocation (UPA) in terms of minimizing the SER in all three constraint cases. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).
Error analysis on heading determination via genetic algorithms
Zhong Bing; Xu Jiangning; Ma Heng
2006-01-01
A new error analysis method based on genetic algorithms is presented for a high-precision heading determination model that uses two total positioning stations (TPSs). The method can search the entire solution space using the genetic operators of the elitist model and restriction. The error analysis of this model shows that its accuracy is sufficient to meet the calibration needs of shipboard navigation systems; the searched space is only 0.03% of the total search space, and the precision of heading determination is 4" in a typical dock.
Lee, P. J.
1984-01-01
For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage, since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K 18 and rate 1/3 with K 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in code search procedures for very short constraint length codes. This very efficient technique can also be used in searches for longer constraint length codes.
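Lee's recursive transfer-function algorithm itself is not reproduced in the abstract, but the kind of quantity it bounds can be illustrated with the standard truncated union bound for a short code. The sketch below uses the well-known bit-weight spectrum of the (7,5) octal, K = 3, rate-1/2 code (c_d = 1, 4, 12, 32, ... starting at d_free = 5); this is a textbook approximation, not the paper's method.

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_union_bound(ebno_db, n_terms=20):
    """Truncated union bound on Viterbi-decoder BER for the (7,5)
    K=3, rate-1/2 convolutional code over BPSK/AWGN."""
    rate = 0.5
    gamma_b = 10 ** (ebno_db / 10)            # Eb/N0 as a linear ratio
    bound = 0.0
    for k in range(n_terms):
        d = 5 + k                             # distances d_free, d_free+1, ...
        c_d = (k + 1) * 2 ** k                # bit-weight spectrum 1, 4, 12, 32, ...
        bound += c_d * qfunc(math.sqrt(2 * d * rate * gamma_b))
    return bound
```

The bound tightens rapidly with Eb/N0, which is why a signal-to-noise-ratio target at a fixed BER (as in the selection criterion above) is a practical figure of merit.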
Verification of precipitation in weather systems: determination of systematic errors
Ebert, E. E.; McBride, J. L.
2000-12-01
An object-oriented verification procedure is presented for gridded quantitative precipitation forecasts (QPFs). It is carried out within the framework of "contiguous rain areas" (CRAs), whereby a weather system is defined as a region bounded by a user-specified isopleth of precipitation in the union of the forecast and observed rain fields. The horizontal displacement of the forecast is determined by translating the forecast rain field until the total squared difference between the observed and forecast fields is minimized. This allows a decomposition of the total error into components due to (a) location, (b) rain volume, and (c) pattern. Results are first presented for a Monte Carlo simulation of 40,000 synthetic CRAs in order to determine the accuracy of the verification procedure when the rain systems are only partially observed due to the presence of domain boundaries. Verification is then carried out for operational 24-h forecasts from the Australian Bureau of Meteorology LAPS numerical weather prediction model over a four-year period. Forty-five percent of all rain events were well forecast by the model, with small location and intensity errors. Location error was generally the dominant source of QPF error, with the directions of most frequent displacement varying by region. Forty-five percent of extreme rainfall events (>100 mm/day) were well forecast, but in this case the model's underestimation of rain intensity was the most frequent source of error.
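The displacement-and-decomposition step can be sketched as follows. This is a toy version under stated assumptions (a brute-force integer-shift search on a small periodic grid via np.roll), not the operational CRA code.

```python
import numpy as np

def cra_decompose(obs, fcst, max_shift=5):
    """Find the shift of `fcst` minimizing squared error, then split the
    total MSE into location, volume, and pattern components."""
    best_mse, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(fcst, (dy, dx), axis=(0, 1))
            mse = float(np.mean((obs - shifted) ** 2))
            if mse < best_mse:
                best_mse, best_shift = mse, (dy, dx)
    mse_total = float(np.mean((obs - fcst) ** 2))
    volume = float((fcst.mean() - obs.mean()) ** 2)   # mean bias squared
    return {
        "shift": best_shift,
        "location": mse_total - best_mse,             # error removed by translation
        "volume": volume,
        "pattern": best_mse - volume,                 # residual after shifting
        "total": mse_total,
    }

# Synthetic rain blob; the "forecast" is the same blob displaced by (2, 1) cells:
y, x = np.mgrid[0:20, 0:20]
obs = np.exp(-((y - 10) ** 2 + (x - 10) ** 2) / 8.0)
fcst = np.roll(obs, (2, 1), axis=(0, 1))
result = cra_decompose(obs, fcst)
```

For a pure displacement like this, the location component carries the whole error and the volume and pattern components vanish, matching the decomposition described above.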
Measurements of Aperture Averaging on Bit-Error-Rate
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
A novel multitemporal insar model for joint estimation of deformation rates and orbital errors
Zhang, Lei
2014-06-01
Orbital errors, typically characterized as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors already exist, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon the wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.
Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis
Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl
2009-01-01
The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.
Evaluation of errors in quantitative determination of asbestos in rock
Baietto, Oliviero; Marini, Paola; Vitaliti, Martina
2016-04-01
The quantitative determination of the asbestos content of rock matrices is a complex operation that is susceptible to significant errors. The principal methodologies for the analysis are Scanning Electron Microscopy (SEM) and Phase Contrast Optical Microscopy (PCOM). Although the resolution of PCOM is inferior to that of SEM, PCOM analysis has several advantages, including better representativity of the analyzed sample, more effective recognition of chrysotile, and a lower cost. The DIATI LAA internal methodology for PCOM analysis is based on mild grinding of a rock sample, its subdivision into 5-6 grain size classes smaller than 2 mm, and a subsequent microscopic analysis of a portion of each class. PCOM relies on the optical properties of asbestos and of the liquids with known refractive index in which the particles under analysis are immersed. The error evaluation in the analysis of rock samples, contrary to the analysis of airborne filters, cannot be based on a statistical distribution. For airborne filters, a binomial (Poisson) distribution, which theoretically defines the variation in the count of fibers resulting from the observation of analysis fields chosen randomly on the filter, can be applied. The analysis of rock matrices, instead, cannot rely on any statistical distribution, because the most important object of the analysis is the size of the asbestiform fibers and bundles of fibers observed, and the resulting relationship between the weight of the fibrous component and that of the granular one. The error estimates generally provided by public and private institutions vary between 50 and 150 percent, but there are no specific studies that discuss the origin of the error or that link it to the asbestos content. Our work aims to provide a reliable estimation of the error in relation to the applied methodologies and to the total asbestos content, especially for values close to the legal limits. The error assessments must
Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.
2017-01-01
Refractive errors are abnormalities of the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses so that eyesight returns to normal. The prescription of glasses or contact lenses differs from person to person; it is influenced by patient age, the amount of tear production, the vision prescription, and astigmatism. Because the eye is a vital organ for sight, accuracy in determining which glasses or contact lenses to use is required. This research aims to develop a decision support system that can recommend the right contact lenses for refractive error patients with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample code, patient age, astigmatism, the rate of tear production, the vision prescription, and the classes that determine the outcome of the decision tree. The eye specialist test on the training data obtained an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix obtained an accuracy rate of 96.1% and an error rate of 3.1%; for the testing data, the accuracy rate was 100% and the error rate 0%.
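The entropy and information-gain computation at the heart of ID3 can be sketched in a few lines. The toy attribute values below are invented for illustration; they are not the paper's dataset.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr_index, labels):
    """Entropy reduction from splitting on one attribute."""
    n = len(labels)
    groups = {}
    for row, lab in zip(rows, labels):           # group labels by attribute value
        groups.setdefault(row[attr_index], []).append(lab)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Invented rows: (age group, tear production) -> lens recommendation
rows = [("young", "normal"), ("young", "reduced"),
        ("old", "normal"), ("old", "reduced")]
labels = ["lens", "none", "lens", "none"]
```

Here splitting on tear production separates the classes perfectly (gain 1 bit), while splitting on age yields no gain, so ID3 would place tear production at the root.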
Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors
Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)
2011-02-15
Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.
CLOSED-FORM ERROR RATES OF STBC SYSTEMS AND ITS PERFORMANCE ANALYSIS
Hu Xianbin; Gao Yuanyuan; Yi Xiaoxin
2006-01-01
Closed-form solutions for the error rates of Space-Time Block Code (STBC) Multiple Phase Shift Keying (MPSK) systems are derived in this paper. Using a characteristic-function-based method and a partial-integration-based method, respectively, exact expressions of the error rates are obtained for a (2,1) STBC with and without channel estimation error. Simulations show that the measured error rates agree with the theoretical ones, so the closed-form error rates are accurate references for STBC performance evaluation. With the error of pilot-assisted channel estimation, the performance of a (2,1) STBC system is degraded by about 3 dB.
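The paper's exact STBC expressions are not given in the abstract; for context, the standard single-antenna MPSK SER approximation over AWGN, SER ≈ 2 Q(√(2γ) sin(π/M)), can be sketched as follows. This is a textbook high-SNR approximation, not the derived closed forms.

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mpsk_ser(snr_db, M):
    """High-SNR approximation of the MPSK symbol error rate over AWGN:
    SER ~= 2*Q(sqrt(2*gamma)*sin(pi/M))."""
    gamma = 10 ** (snr_db / 10)               # SNR as a linear ratio
    return 2 * qfunc(math.sqrt(2 * gamma) * math.sin(math.pi / M))
```

A 3 dB loss, as quoted for imperfect channel estimation, simply shifts the SNR argument: mpsk_ser(snr_db - 3, M) at the same symbol rate.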
Testing Theories of Transfer Using Error Rate Learning Curves.
Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I
2016-07-01
We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and sacrifice explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. Copyright © 2016 Cognitive Science Society, Inc.
Rates of computational errors for scoring the SIRS primary scales.
Tyner, Elizabeth A; Frederick, Richard I
2013-12-01
We entered item scores for the Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, & Dickens, 1991) into a spreadsheet and compared computed scores with those hand-tallied by examiners. We found that about 35% of the tests had at least 1 scoring error. Of SIRS scale scores tallied by examiners, about 8% were incorrectly summed. When the errors were corrected, only 1 SIRS classification was reclassified in the fourfold scheme used by the SIRS. We note that mistallied scores on psychological tests are common, and we review some strategies for reducing scale score errors on the SIRS. (c) 2013 APA, all rights reserved.
Potentially Significant Source of Error in Magnetic Paleolatitude Determinations
Herndon, J Marvin
2011-01-01
The discovery of close-to-star gas-giant exo-planets lends support to the idea of Earth's origin as a Jupiter-like gas giant and to the consequences of its compression, including whole-Earth decompression dynamics that gives rise, without requiring mantle convection, to the myriad measurements and observations whose descriptions are attributed to plate tectonics. I show here that paleolatitude determinations, used extensively in Pangaea-like reconstructions and in paleoclimate considerations, may be subject to potentially significant errors if rock-magnetization was acquired at Earth-radii less than present.
Measurement Error Effects of Beam Parameters Determined by Beam Profiles
Jang, Ji-Ho; Jeon, Dong-O
2015-01-01
A conventional method of determining beam parameters is to use profile measurements and convert them into values of the Twiss parameters and beam emittance at a specified position. This beam information can be used to improve transverse beam matching between two different beam lines or accelerating structures. This work addresses the measurement error effects on the beam parameters and the optimal number of profile monitors in the section between the MEBT (medium energy beam transport) and the QWR (quarter wave resonator) of the RAON linear accelerator.
Tissue pattern recognition error rates and tumor heterogeneity in gastric cancer.
Potts, Steven J; Huff, Sarah E; Lange, Holger; Zakharov, Vladislav; Eberhard, David A; Krueger, Joseph S; Hicks, David G; Young, George David; Johnson, Trevor; Whitney-Miller, Christa L
2013-01-01
The anatomic pathology discipline is slowly moving toward a digital workflow, where pathologists will evaluate whole-slide images on a computer monitor rather than glass slides through a microscope. One of the driving factors in this workflow is computer-assisted scoring, which depends on appropriate selection of regions of interest. With advances in tissue pattern recognition techniques, a more precise region of the tissue can be evaluated, no longer bound by the pathologist's patience in manually outlining target tissue areas. Pathologists use entire tissues when determining a score in a region of interest during manual immunohistochemistry assessments. Tissue pattern recognition theoretically offers this same advantage; however, error rates exist in any tissue pattern recognition program, and these error rates contribute to errors in the overall score. To provide a real-world example of tissue pattern recognition, 11 HER2-stained upper gastrointestinal malignancies with high heterogeneity were evaluated. HER2 scoring of gastric cancer was chosen due to its increasing importance in gastrointestinal disease. A method is introduced for quantifying the error rates of tissue pattern recognition. The trade-off between fully sampling the tumor with a given tissue pattern recognition error rate versus randomly sampling a limited number of fields of view with higher target accuracy was modeled with a Monte-Carlo simulation. Under most scenarios, stereological methods that sample a limited number of fields of view outperformed whole-slide tissue pattern recognition approaches for accurate immunohistochemistry analysis. The importance of educating pathologists in the use of statistical sampling is discussed, along with the emerging role of hybrid whole-tissue imaging and stereological approaches.
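The trade-off the authors model can be imitated with a toy Monte-Carlo simulation: whole-slide reading of every region with additive pattern recognition noise versus exact reading of a few randomly sampled fields of view. All numbers here are invented for illustration; this is not the paper's simulation.

```python
import random

def rms_errors(region_scores, pr_noise, n_fov, trials=2000, seed=42):
    """Root-mean-square error of the slide-average score under the two
    sampling strategies (toy model)."""
    random.seed(seed)
    truth = sum(region_scores) / len(region_scores)
    wsi_sq = fov_sq = 0.0
    for _ in range(trials):
        # whole slide: all regions, each read with pattern-recognition noise
        wsi = sum(s + random.gauss(0, pr_noise) for s in region_scores) / len(region_scores)
        # limited fields of view: a few regions, read without error
        fov = sum(random.sample(region_scores, n_fov)) / n_fov
        wsi_sq += (wsi - truth) ** 2
        fov_sq += (fov - truth) ** 2
    return (wsi_sq / trials) ** 0.5, (fov_sq / trials) ** 0.5

# A heterogeneous tumor: half the regions HER2-positive (1), half negative (0)
scores = [0] * 50 + [1] * 50
wsi_rmse, fov_rmse = rms_errors(scores, pr_noise=0.3, n_fov=5)
```

In this particular toy configuration the per-region noise averages away over many regions while few-field sampling suffers from heterogeneity; the paper's conclusion is that with realistic pattern recognition error rates the balance often tips the other way.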
Accurate and fast methods to estimate the population mutation rate from error prone sequences
Miyamoto Michael M
2009-08-01
Abstract. Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
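The singleton-free Watterson estimator described in the Background can be sketched directly: drop singleton sites and rescale the harmonic-number denominator by removing its i = 1 term (following the E[ξ_i] = θ/i logic for the site frequency spectrum). This is a simplified illustration, not the authors' full maximum-likelihood model.

```python
from collections import Counter

def theta_no_singletons(sequences):
    """Watterson-style estimate of the population mutation rate that counts
    only shared polymorphisms (every allele observed at least twice)."""
    n = len(sequences)
    shared = 0
    for site in zip(*sequences):               # iterate over alignment columns
        counts = Counter(site)
        if len(counts) >= 2 and min(counts.values()) >= 2:
            shared += 1
    # E[#non-singleton segregating sites] = theta * (a_n - 1),
    # where a_n = sum_{i=1}^{n-1} 1/i; dropping the i = 1 term
    # removes the expected singleton contribution.
    a_n = sum(1.0 / i for i in range(1, n))
    return shared / (a_n - 1.0)

# Toy alignment: one shared polymorphism (A/C column), one singleton (G/A column)
seqs = ["AATG", "AATG", "ACTG", "ACTA"]
theta = theta_no_singletons(seqs)              # 1 / (11/6 - 1) = 1.2
```

The singleton column is ignored, so a lone sequencing error in one read would not inflate the estimate, which is exactly the robustness the abstract claims.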
Simultaneous control of error rates in fMRI data analysis.
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-12-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain.
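A per-voxel likelihood ratio of the kind described can be sketched for a normal model with known variance. The hypothesized effect sizes and the Royall-style benchmark (LR ≥ 8 as "fairly strong" evidence) are illustrative assumptions, not the paper's exact procedure.

```python
import math

def voxel_lr(xbar, n, mu0, mu1, sigma):
    """Likelihood ratio L(mu1)/L(mu0) for a voxel's mean signal under a
    normal model with known sigma; only the sample mean enters."""
    def loglik(mu):
        return -n * (xbar - mu) ** 2 / (2 * sigma ** 2)
    return math.exp(loglik(mu1) - loglik(mu0))

# Voxel with mean signal 0.5 over n = 20 scans, sigma = 1:
lr = voxel_lr(xbar=0.5, n=20, mu0=0.0, mu1=0.5, sigma=1.0)
active = lr >= 8            # an illustrative evidential benchmark
```

Because the evidence measure is computed voxel by voxel and both error rates shrink as n grows, no family-wise correction that inflates the Type II rate is needed, which is the point of the paragraph above.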
Cyclopentolate as a cycloplegic drug in determination of refractive error
Bolinovska Sofija
2008-01-01
Cycloplegia is the loss of the power of accommodation through inhibition of the ciliary muscle. In this way we obtain the smallest refraction of the lens, making it possible to determine the presence and size of the particular refractive error under cycloplegia induced with cyclopentolate. Cyclopentolate is a synthetic anticholinergic drug and an antagonist of the muscarinic receptors. Applied to the eye, it blocks the effect of cholinergic stimulation on the sphincter pupillae muscle and the ciliary muscle, provoking severe mydriasis (dilation of the pupil) and cycloplegia (paralysis of accommodation). Cyclopentolate is used occasionally for diagnostic purposes: determining ocular refraction and in ophthalmoscopy. This prospective study included 200 children (400 eyes) aged 3-18 years, examined in a single ambulatory ophthalmological examination. The results were analysed using standard statistical methods. The most frequent refractive error in the examined group of children was hyperopia with hyperopic astigmatism; myopia with myopic astigmatism and mixed astigmatism were most frequent in the oldest group of children. The mean value of corneal astigmatism was 1.24 D in the right eye and 1.23 D in the left eye. Anisometropia was found in 40% of children. The prevalence of myopia, myopic astigmatism and mixed astigmatism tended to increase, and that of hyperopia and hyperopic astigmatism tended to decrease, toward the older groups of children. Refractive error can result in poor development of visual acuity, causing amblyopia and strabismus, and therefore represents an important public health problem. As one of the amblyogenic risk factors in children, it can be addressed with a screening program and appropriate treatment, thus providing prevention of amblyopia as one form of blindness.
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a methodology used in industry that targets near-zero error (3.4 errors per million events). Its five main principles are define, measure, analyse, improve and control (DMAIC). Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor and the head of the department. Using Six Sigma methodology, the error rate was measured monthly and the distribution of errors across the preanalytical, analytical and postanalytical phases was analysed. Improvement strategies were proposed in the monthly intradepartmental meetings, and units with high error rates were placed under closer control. Fifty-six (52.4%) of the 107 recorded errors occurred in the preanalytical phase; 45 errors (42%) were analytical and 6 (5.6%) postanalytical. Two of the 45 analytical errors were major, irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory reduced error rates, mainly in the preanalytical and analytical phases.
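The error-rate arithmetic behind such a trial is straightforward to reproduce. A minimal sketch (function names are ours; the only figures taken from the abstract are the 107-error total and the 6.8 and 1.3 per-million rates, and the one-million-event denominator is a hypothetical illustration):

```python
def dpmo(defects: int, opportunities: int) -> float:
    """Defects per million opportunities, the basic Six Sigma metric."""
    return defects / opportunities * 1_000_000

def percent_reduction(before: float, after: float) -> float:
    """Relative drop between two successive error rates."""
    return (before - after) / before * 100

# 107 errors against a hypothetical one-million-event denominator:
print(round(dpmo(107, 1_000_000), 6))         # 107.0
# Reduction from 6.8 to 1.3 errors per million (the two half-year rates):
print(round(percent_reduction(6.8, 1.3), 1))  # 80.9
```

Note that 6.8 to 1.3 computes to roughly an 80.9% drop, slightly different from the quoted 79.77%, suggesting the study's exact monthly denominators differ from the flat figure assumed here.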
A long lifetime, low error rate RRAM design with self-repair module
Zhiqiang, You; Fei, Hu; Liming, Huang; Peng, Liu; Jishun, Kuang; Shiying, Li
2016-11-01
Resistive random access memory (RRAM) is one of the promising candidates for future universal memory. However, it suffers from serious error rate and endurance problems, so a technical solution for enhancing endurance and reducing the error rate is in great demand. In this paper, we propose a reliable RRAM architecture that includes two reliability modules: an error correction code (ECC) module and a self-repair module. The ECC module is used to detect errors and decrease the error rate. The self-repair module, proposed here for the first time for RRAM, obtains the locations of error bits and repairs worn-out cells by applying a repair voltage. Simulation results show that the proposed architecture achieves the lowest error rate and the longest lifetime compared with previous reliable designs. Project supported by the New Century Excellent Talents in University (No. NCET-12-0165) and the National Natural Science Foundation of China (Nos. 61472123, 61272396).
Error Rates in Users of Automatic Face Recognition Software.
White, David; Dunn, James D; Schmid, Alexandra C; Kemp, Richard I
2015-01-01
In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to that of trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, K S; Ranade, Kedar S.; Alber, Gernot
2005-01-01
The general conditions are discussed which quantum state purification protocols have to fulfill in order to be capable of purifying Bell-diagonal qubit-pair states, provided they consist of steps that map Bell-diagonal states to Bell-diagonal states and they finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As a main result a necessary and a sufficient condition on asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU
Haque, Imran S
2009-01-01
Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our su...
Alheadary, Wael G.
2016-12-24
In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors, based on intensity modulation/direct detection (IM/DD) and heterodyne detection, over the general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission, given in terms of generalized power series of the Meijer G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.
The Detection of Structural Deformation Errors in Attitude Determination
M. J. Moore; C. Rizos; J. Wang
2003-01-01
In the determination of the attitude parameters from a multi-antenna GPS array, one of the major assumptions is that the body frame is rigid at all times. If this assumption is not true, then the derived attitude parameters will be in error. It is well known that in airborne platforms the wings often experience some displacement during flight, especially during initializing manoeuvres such as taking off, landing and banking. Often it is at these points in time that it is most critical to have the most precise attitude parameters. There are a number of techniques available for the detection of modelling errors. The CUSUM algorithm has successfully been implemented in the past to detect small persistent changes. In this paper the authors investigate different methods of generating the residuals to be tested by the CUSUM algorithm, in an effort to determine which technique is best suited for the detection of structural deformation of an airborne platform. The methods investigated include monitoring the mean of the residuals generated from the difference between the known body frame coordinates and those calculated from the derived attitude parameters; the generated residuals are then passed to a CUSUM algorithm to detect any small persistent changes. An alternative method involves transforming the generated residuals into the frequency domain through the use of the Fast Fourier Transform; the CUSUM algorithm is then used to detect any frequency changes. The final technique investigated involves transforming the generated residuals using the Haar wavelet; the wavelet coefficients are then monitored by the CUSUM algorithm in order to detect any significant change to the rigidity of the body frame. Detecting structural deformation, and quantifying the degree of deformation, during flight will ensure that these effects can be removed from the system, thus ensuring the most precise and reliable attitude parameter solutions. This paper, through a series
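The CUSUM step the authors rely on is standard. A minimal sketch of a two-sided CUSUM alarm on standardized residuals (the allowance k and decision interval h below are conventional illustrative values, not the paper's):

```python
def cusum_alarm(residuals, k=0.5, h=5.0):
    """Two-sided CUSUM for a small persistent mean shift.
    residuals: sequence already standardized under the rigid-body model;
    k: allowance, h: decision interval, both in residual-sigma units.
    Returns the index of the first alarm, or None."""
    s_hi = s_lo = 0.0
    for i, r in enumerate(residuals):
        s_hi = max(0.0, s_hi + r - k)   # accumulates upward shifts
        s_lo = max(0.0, s_lo - r - k)   # accumulates downward shifts
        if s_hi > h or s_lo > h:
            return i
    return None

# A rigid body gives zero-mean residuals; a deformation at epoch 100 adds a
# persistent 1-sigma offset, which the CUSUM flags shortly afterwards:
print(cusum_alarm([0.0] * 100))                 # None
print(cusum_alarm([0.0] * 100 + [1.0] * 100))   # 110
```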
Beneficial Effects of Population Bottlenecks in an RNA Virus Evolving at Increased Error Rate
Cases-González, Clara E.; Arribas, María; Domingo, Esteban; Lázaro, Ester
2008-01-01
RNA viruses replicate their genomes with a very high error rate and constitute highly heterogeneous mutant distributions similar to the molecular quasispecies introduced to explain the evolution of prebiotic replicators. The genetic information included in a quasispecies can only be faithfully transmitted below a critical error rate. When the error threshold is crossed, the population structure disorganizes, and it is substituted by a randomly distributed mutant spectrum. For viral quasispeci...
Conserved rates and patterns of transcription errors across bacterial growth states and lifestyles
Traverse, Charles C.; Ochman, Howard
2016-01-01
Errors that occur during transcription have received much less attention than the mutations that occur in DNA because transcription errors are not heritable and usually result in a very limited number of altered proteins. However, transcription error rates are typically several orders of magnitude higher than the mutation rate. Also, individual transcripts can be translated multiple times, so a single error can have substantial effects on the pool of proteins. Transcription errors can also contribute to cellular noise, thereby influencing cell survival under stressful conditions, such as starvation or antibiotic stress. Implementing a method that captures transcription errors genome-wide, we measured the rates and spectra of transcription errors in Escherichia coli and in endosymbionts for which mutation and/or substitution rates are greatly elevated over those of E. coli. Under all tested conditions, across all species, and even for different categories of RNA sequences (mRNA and rRNAs), there were no significant differences in rates of transcription errors, which ranged from 2.3 × 10−5 per nucleotide in mRNA of the endosymbiont Buchnera aphidicola to 5.2 × 10−5 per nucleotide in rRNA of the endosymbiont Carsonella ruddii. The similarity of transcription error rates in these bacterial endosymbionts to that in E. coli (4.63 × 10−5 per nucleotide) is all the more surprising given that genomic erosion has resulted in the loss of transcription fidelity factors in both Buchnera and Carsonella. PMID:26884158
Mutual information, bit error rate and security in Wójcik's scheme
Zhang, Z
2004-01-01
In this paper the correct calculations of the mutual information of the whole transmission and the quantum bit error rate (QBER) are presented. Mistakes in the general conclusions on the mutual information, the quantum bit error rate (QBER) and the security in Wójcik's paper [Phys. Rev. Lett. 90, 157901 (2003)] are pointed out and corrected.
National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?
Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.
2010-01-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…
Birjandi, Parviz; Siyyari, Masood
2016-01-01
This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…
Error baseline rates of five sample preparation methods used to characterize RNA virus populations
Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10−5) of all compared methods. PMID:28182717
Influenza infection rates, measurement errors and the interpretation of paired serology.
Simon Cauchemez
Serological studies are the gold standard method to estimate influenza infection attack rates (ARs) in human populations. In a common protocol, blood samples are collected before and after the epidemic in a cohort of individuals, and a rise in haemagglutination-inhibition (HI) antibody titers during the epidemic is considered a marker of infection. Because of inherent measurement errors, a 2-fold rise is usually considered insufficient evidence of infection, and seroconversion is therefore typically defined as a 4-fold rise or more. Here, we revisit this widely accepted 70-year-old criterion. We develop a Markov chain Monte Carlo data augmentation model to quantify measurement errors and reconstruct the distribution of latent true serological status in a Vietnamese 3-year serological cohort, in which replicate measurements were available. We estimate that the 1-sided probability of a 2-fold error is 9.3% (95% Credible Interval, CI: 3.3%, 17.6%) when the antibody titer is below 10, but is 20.2% (95% CI: 15.9%, 24.0%) otherwise. After correction for measurement errors, we find that the proportion of individuals with 2-fold rises in antibody titers was too large to be explained by measurement errors alone. Estimates of ARs vary greatly depending on whether those individuals are included in the definition of the infected population. A simulation study shows that our method is unbiased. The 4-fold rise case definition is relevant when aiming at a specific diagnostic for individual cases, but the justification is less obvious when the objective is to estimate ARs; in particular, it may lead to large underestimates of ARs. Determining which biological phenomenon contributes most to 2-fold rises in antibody titers is essential to assess bias with the traditional case definition and offer improved estimates of influenza ARs.
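The role of dilution errors in the 4-fold criterion can be checked with a short simulation. A sketch under the simplifying assumption (ours, not the paper's model) that each of the paired measurements is independently off by exactly one 2-fold dilution with the quoted one-sided probability of 20.2%:

```python
import numpy as np

def apparent_fourfold_rise(p_err=0.202, n=1_000_000, seed=0):
    """Chance that a truly unchanged titer shows a >= 4-fold (2-dilution)
    rise purely from measurement error, assuming each measurement is
    independently off by one dilution up or down with one-sided
    probability p_err (the abstract's estimate for titers >= 10)."""
    rng = np.random.default_rng(seed)
    def dilution_error():
        u = rng.random(n)
        return np.where(u < p_err, 1, np.where(u < 2 * p_err, -1, 0))
    rise = dilution_error() - dilution_error()  # post minus pre, in dilutions
    return float(np.mean(rise >= 2))            # >= 2 dilutions == >= 4-fold

print(apparent_fourfold_rise())   # ~ p_err**2, i.e. about 0.041
```

Under this toy model, a spurious 4-fold rise needs opposite-direction errors in the two measurements, so its probability is roughly p_err squared, which is why the 4-fold criterion is so much more specific than a 2-fold one.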
Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt
2015-12-01
Dog heart rate (HR) is characterized by respiratory sinus arrhythmia, which makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide, with the aid of an algorithm, whether or not a value is an outlier (i.e., an "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction. We conclude that algorithm-supported visual error correction is more suitable for dog HR and HR variability than the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
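The abstract does not give AVEC's exact decision rule; a minimal sketch of the general idea only, flagging beat intervals that deviate from a local median (the window and tolerance values are our assumptions, and a generous tolerance is what preserves the natural sinus-arrhythmia variability):

```python
def flag_outliers(rr_intervals, window=5, tol=0.3):
    """Flag interbeat intervals deviating from the local median by more
    than a fraction `tol`. Sketch only; not the study's calibrated rule."""
    flagged = []
    n = len(rr_intervals)
    for i, v in enumerate(rr_intervals):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbors = sorted(rr_intervals[lo:i] + rr_intervals[i + 1:hi])
        med = neighbors[len(neighbors) // 2]    # local median, excluding v
        flagged.append(abs(v - med) > tol * med)
    return flagged

rr = [600, 610, 605, 1900, 612, 608]   # ms; 1900 is a transmission artifact
print(flag_outliers(rr))   # [False, False, False, True, False, False]
```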
Study of bit error rate (BER) for multicarrier OFDM
Alshammari, Ahmed; Albdran, Saleh; Matin, Mohammad
2012-10-01
Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technique that is used more and more in recent wideband digital communications. It is known for its ability to handle severe channel conditions, its efficient spectral usage and its high data rate, and it has therefore been used in many wired and wireless communication systems such as DSL, wireless networks and 4G mobile communications. Data streams are modulated and sent over multiple subcarriers using either M-QAM or M-PSK. OFDM has lower inter-symbol interference (ISI) levels because the low data rate of each carrier results in long symbol periods. In this paper, the BER performance of OFDM with respect to signal-to-noise ratio (SNR) is evaluated. BPSK modulation is used in a simulation-based system in order to obtain the BER over different wireless channels. These channels include additive white Gaussian noise (AWGN) and fading channels based on Doppler spread and delay spread. Plots of the results are compared with each other after varying key parameters of the system such as the IFFT size, the number of carriers, and the SNR. The results of the simulation give a visualization of what kind of BER to expect when the signal goes through those channels.
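For the BPSK-over-AWGN case used in such simulations, the theoretical baseline the simulated BER curve is normally checked against has the closed form Pb = Q(sqrt(2*Eb/N0)). A sketch of that reference curve:

```python
import math

def bpsk_ber_awgn(ebn0_db: float) -> float:
    """Theoretical BPSK bit error rate over AWGN:
    Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

# The familiar waterfall: BER drops steeply as Eb/N0 (dB) grows.
for snr_db in (0, 4, 8):
    print(snr_db, f"{bpsk_ber_awgn(snr_db):.2e}")
```

A per-subcarrier OFDM/BPSK simulation over a flat AWGN channel should converge to this curve; fading channels with Doppler and delay spread sit above it.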
Multipath error in range rate measurement by PLL-transponder/GRARR/TDRS
Sohn, S. J.
1970-01-01
Range rate errors due to specular and diffuse multipath are calculated for a tracking and data relay satellite (TDRS) using an S-band Goddard range and range rate (GRARR) system modified with a phase-locked loop transponder. Carrier signal processing in the coherent turn-around transponder and the GRARR receiver is taken into account. The root-mean-square (rms) range rate error was computed for the GRARR Doppler extractor and N-cycle count range rate measurement. Curves of worst-case range rate error are presented as a function of grazing angle at the reflection point. At very low grazing angles specular scattering predominates over diffuse scattering, as expected, whereas for grazing angles greater than approximately 15 deg the diffuse multipath predominates. The range rate errors at different low orbit altitudes peaked between 5 and 10 deg grazing angles.
Symbol Error Rate of MPSK over EGK Channels Perturbed by a Dominant Additive Laplacian Noise
Souri, Hamza
2015-06-01
The Laplacian noise has received much attention during recent years since it affects many communication systems. We consider in this paper the probability of error of an M-ary phase shift keying (PSK) constellation operating over a generalized fading channel in the presence of a dominant additive Laplacian noise. In this context, the decision regions of the receiver are determined using the maximum likelihood and the minimum distance detectors. Once the decision regions are extracted, the resulting symbol error rate expressions are computed and averaged over an Extended Generalized-K fading distribution. Generic closed-form expressions of the conditional and the average probability of error are obtained in terms of the Fox's H function. Simplifications for some special cases of fading are presented and the resulting formulas often end up being expressed in terms of well-known elementary functions. Finally, the mathematical formalism is validated using some selected analytical-based numerical results as well as Monte Carlo simulation-based results.
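The paper's closed forms are in terms of Fox's H-function; as an illustration of the Monte Carlo cross-check such derivations are validated against, here is a sketch of M-PSK with component-wise Laplacian noise and a minimum-distance detector (no fading, i.e. unit channel gain; all parameter values are illustrative assumptions):

```python
import numpy as np

def mpsk_ser_laplacian(M=4, scale=0.2, n=200_000, seed=1):
    """Monte Carlo symbol error rate of M-PSK corrupted by additive noise
    with i.i.d. Laplacian real and imaginary parts, decided by minimum
    Euclidean distance."""
    rng = np.random.default_rng(seed)
    const = np.exp(2j * np.pi * np.arange(M) / M)   # unit-energy constellation
    tx = rng.integers(0, M, n)
    noise = rng.laplace(0.0, scale, n) + 1j * rng.laplace(0.0, scale, n)
    rx = const[tx] + noise
    # minimum-distance decision over all M symbols
    dec = np.abs(rx[:, None] - const[None, :]).argmin(axis=1)
    return float(np.mean(dec != tx))

# SER grows with the Laplacian noise scale:
print(mpsk_ser_laplacian(scale=0.1), mpsk_ser_laplacian(scale=0.4))
```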
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
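A sketch of the two building blocks described above: the subject-specific least-squares slope used as the rate-of-change exposure, and regression calibration in its very simplest form, which rescales the naive coefficient by a reliability ratio (all numerical values below are illustrative assumptions, not the study's estimates):

```python
import numpy as np

def subject_slope(times, values):
    """Rate of change for one subject: least-squares slope of the
    longitudinal measurements against time."""
    t = np.asarray(times, float)
    y = np.asarray(values, float)
    t = t - t.mean()
    return float(t @ (y - y.mean()) / (t @ t))

# A subject measured four times with a steady rise of 2 units per visit:
print(subject_slope([0, 1, 2, 3], [1.0, 3.0, 5.0, 7.0]))   # 2.0

# Simplest regression calibration: divide the naive coefficient by
# lambda = var(true slope) / var(observed slope). Illustrative values:
lam = 0.04 / (0.04 + 0.01)   # measurement error inflates observed variance
beta_naive = 0.40            # hypothetical naive Cox log-hazard coefficient
print(round(beta_naive / lam, 3))   # 0.5
```

Full regression calibration as compared in the article conditions on baseline and the error-free covariate; this scalar version only shows why the naive estimate is attenuated.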
Orbit Determination Error Analysis Results for the Triana Sun-Earth L2 Libration Point Mission
Marr, G.
2003-01-01
Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination error analysis results are presented for all phases of the Triana Sun-Earth L1 libration point mission and for the science data collection phase of a future Sun-Earth L2 libration point mission. The Triana spacecraft was nominally to be released by the Space Shuttle in a low Earth orbit, and this analysis focuses on that scenario. From the release orbit, a transfer trajectory insertion (TTI) maneuver performed using a solid stage would increase the velocity by approximately 3.1 km/sec, sending Triana on a direct trajectory to its mission orbit. The Triana mission orbit is a Sun-Earth L1 Lissajous orbit with a Sun-Earth-vehicle (SEV) angle between 4.0 and 15.0 degrees, which would be achieved after a Lissajous orbit insertion (LOI) maneuver at approximately launch plus 6 months. Because Triana was to be launched by the Space Shuttle, TTI could potentially occur over a 16-orbit range from low Earth orbit. This analysis was performed assuming TTI was performed from a low Earth orbit with an inclination of 28.5 degrees and assuming support from a combination of three Deep Space Network (DSN) stations, Goldstone, Canberra, and Madrid, and four commercial Universal Space Network (USN) stations, Alaska, Hawaii, Perth, and Santiago. These ground stations would provide coherent two-way range and range rate tracking data usable for orbit determination. Larger range and range rate errors were assumed for the USN stations. Nominally, DSN support would end at TTI+144 hours assuming there were no USN problems. Post-TTI coverage was analyzed for a range of TTI longitudes for a given nominal trajectory case. The orbit determination error analysis after the first correction maneuver would be generally applicable to any libration point mission utilizing a direct trajectory.
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Error rates in forensic DNA analysis: Definition, numbers, impact and communication
Kloosterman, A.; Sjerps, M.; Quak, A.
2014-01-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and pub
Data-driven region-of-interest selection without inflating Type I error rate.
Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard
2017-01-01
In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies.
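The inflation the authors warn about is easy to reproduce under the null. A sketch comparing a fixed a-priori ROI with fully nonindependent selection (pick the region with the biggest observed effect, then test it), using a per-region paired t test; all parameter values are illustrative, and this does not implement the AGAT method itself:

```python
import numpy as np

def sim_type1(n_exp=4000, n_sub=20, n_regions=8, seed=0):
    """Null simulation: the two conditions do not differ, so any rejection
    is a false positive. Per region, a paired t statistic is compared with
    2.093, the two-sided 5% critical value for df = 19 (n_sub - 1)."""
    rng = np.random.default_rng(seed)
    crit = 2.093
    fp_fixed = fp_driven = 0
    for _ in range(n_exp):
        # condition difference per subject and region, mean zero (null true)
        diff = rng.normal(0.0, 1.0, (n_sub, n_regions))
        t = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n_sub))
        if abs(t[0]) > crit:         # fixed, hypothesis-driven ROI: region 0
            fp_fixed += 1
        if np.abs(t).max() > crit:   # ROI chosen from the same data
            fp_driven += 1
    return fp_fixed / n_exp, fp_driven / n_exp

fixed, driven = sim_type1()
print(fixed, driven)   # roughly 0.05 vs roughly 0.34 (about 1 - 0.95**8)
```

With eight roughly independent candidate regions, testing the data-selected region rejects whenever any region happens to cross the threshold, which is why the false positive rate climbs toward 1 - 0.95^8.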
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
Graphical algorithms and threshold error rates for the 2d colour code
Wang, D S; Hill, C D; Hollenberg, L C L
2009-01-01
Recent work on fault-tolerant quantum computation making use of topological error correction shows great potential, with the 2D surface code possessing a threshold error rate approaching 1% (New J. Phys. 9:199, 2007; arXiv:0905.0531). However, the 2D surface code requires the use of a complex state distillation procedure to achieve universal quantum computation. The colour code (Phys. Rev. Lett. 97:180501, 2006) is a related scheme that partially solves this problem, providing a means to perform all Clifford group gates transversally. We review the colour code and its error-correcting methodology, discussing one approximate technique based on graph matching. We derive an analytic lower bound to the threshold error rate of 6.25% under error-free syndrome extraction, while numerical simulations indicate it may be as high as 13.3%. Inclusion of faulty syndrome extraction circuits drops the threshold to approximately 0.1%.
Error Modeling and Analysis for InSAR Spatial Baseline Determination of Satellite Formation Flying
Jia Tu
2012-01-01
Spatial baseline determination is a key technology for interferometric synthetic aperture radar (InSAR) missions. Based on intersatellite baseline measurement using dual-frequency GPS, the errors induced in InSAR spatial baseline measurement are studied in detail. The classification and characteristics of the errors are analyzed, and error models are set up. Simulations of single factors and of total error sources are used to evaluate the impact of errors on spatial baseline measurement: single-factor simulations analyze the impact of each error type, while total-error-source simulations analyze the impacts of the error sources induced by GPS measurement, baseline transformation, and the entire spatial baseline measurement, respectively. Simulation results show that errors related to GPS measurement are the main error sources for spatial baseline determination, and that carrier phase noise of the GPS observations and the fixing error of the GPS receiver antenna are the main factors in errors related to GPS measurement. In addition, according to the error values listed in this paper, InSAR spatial baseline determination at the 1 mm level can be realized.
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size, and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
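Massey's extended confidence-interval construction is not reproduced here, but the standard exact (Clopper-Pearson) binomial interval already illustrates how much information two observed errors carry in a large simulation. A stdlib-only sketch, with each bound found by bisection on the binomial CDF:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p); cheap when k is small."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided binomial interval for k errors in n trials.
    binom_cdf is decreasing in p, so each bound is found by bisection."""
    def solve(kk, target):
        lo, hi = 0.0, 1.0
        for _ in range(80):
            mid = (lo + hi) / 2
            if binom_cdf(kk, n, mid) > target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(k - 1, 1 - alpha / 2)
    upper = 1.0 if k == n else solve(k, alpha / 2)
    return lower, upper

# two decoding errors observed in a million trials
print(clopper_pearson(2, 10**6))
```

For two errors in 10^6 trials the 95% interval spans roughly 2.4e-7 to 7.2e-6: a factor of about thirty of remaining uncertainty, which is the point the abstract makes about sparse error counts.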
Influence of the FEC Channel Coding on Error Rates and Picture Quality in DVB Baseband Transmission
T. Kratochvil
2006-09-01
The paper deals with the component analysis of DTV (Digital Television) and DVB (Digital Video Broadcasting) baseband channel coding. The principles of the FEC (Forward Error Correction) error-protection codes used are briefly outlined, and the simulation model implemented in Matlab is presented. Results of the achieved bit and symbol error rates and the corresponding picture quality evaluation are presented, including an evaluation of the influence of the channel coding on transmitted RGB images and their noise rates related to MOS (Mean Opinion Score). The paper concludes with a comparison of the efficiency of the DVB channel codes.
Conjunction error rates on a continuous recognition memory test: little evidence for recollection.
Jones, Todd C; Atchley, Paul
2002-03-01
Two experiments examined conjunction memory errors on a continuous recognition task where the lag between parent words (e.g., blackmail, jailbird) and later conjunction lures (blackbird) was manipulated. In Experiment 1, contrary to expectations, the conjunction error rate was highest at the shortest lag (1 word) and decreased as the lag increased. In Experiment 2 the conjunction error rate increased significantly from a 0- to a 1-word lag, then decreased slightly from a 1- to a 5-word lag. The results provide mixed support for simple familiarity and dual-process accounts of recognition. Paradoxically, searching for an item in memory does not appear to be a good encoding task.
Determining the Errors in Output Kinematic Parameters of Planar Mechanisms with a Complex Structure
Trzaska W.
2014-11-01
The study is focused on determining the errors in the output kinematic parameters (position, velocity, acceleration, jerk) of entire links or their selected points in complex planar mechanisms. The number of DOFs of the kinematic system is assumed to be equal to the number of drives, and the rigid links are assumed to be connected by ideal, clearance-free geometric constraints. Input data include the basic parameters of the mechanism with their associated errors, as well as the kinematic parameters of the driving links with their associated errors. Output errors in the kinematic parameters are determined based on the linear theory of errors.
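The linear theory of errors mentioned above propagates input tolerances through first-order partial derivatives: each output error bound is the sum of |∂f/∂x_i|·δx_i over the inputs. A generic sketch on a hypothetical two-link planar chain (link lengths, angles, and tolerances are made-up illustrative values, not the paper's mechanism):

```python
from math import cos, sin

def tip(l1, l2, t1, t2):
    """Tip position (x, y) of a planar two-link chain."""
    return (l1 * cos(t1) + l2 * cos(t1 + t2),
            l1 * sin(t1) + l2 * sin(t1 + t2))

def linear_error(f, params, deltas, h=1e-6):
    """First-order error bound per output: sum_i |df/dx_i| * delta_i,
    with the partials taken by central finite differences."""
    n_out = len(f(*params))
    bounds = []
    for j in range(n_out):
        total = 0.0
        for i, d in enumerate(deltas):
            plus = list(params);  plus[i] += h
            minus = list(params); minus[i] -= h
            dfdx = (f(*plus)[j] - f(*minus)[j]) / (2 * h)
            total += abs(dfdx) * d
        bounds.append(total)
    return bounds

# links 0.5 m and 0.3 m (tolerance 0.1 mm), joints 0.6 and 0.4 rad (error 1 mrad)
ex, ey = linear_error(tip, [0.5, 0.3, 0.6, 0.4], [1e-4, 1e-4, 1e-3, 1e-3])
print(ex, ey)
```

The same pattern extends to velocity, acceleration and jerk outputs by differentiating the corresponding kinematic functions.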
Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels
Li Zexian
2004-01-01
Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Gaussian Q-function, the characteristic function and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral with an integrand composed of tabulated functions, which can easily be computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.
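The single finite-range integral form can be illustrated on a simpler case than the paper's multiuser system: for BPSK with MRC over independent Nakagami-m branches, the MGF approach gives the exact BER as one integral over [0, π/2]. A sketch under those assumptions (not the paper's expression):

```python
from math import pi, sin

def ber_bpsk_mrc_nakagami(branches, steps=2000):
    """BER of BPSK with MRC over independent Nakagami-m branches,
    via the MGF approach:
      Pb = (1/pi) * integral_0^{pi/2} prod_l (1 + g_l/(m_l sin^2 t))^(-m_l) dt
    branches: list of (mean SNR, m) pairs, one per diversity branch."""
    h = (pi / 2) / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h          # midpoint rule
        s2 = sin(t) ** 2
        integrand = 1.0
        for g, m in branches:
            integrand *= (1.0 + g / (m * s2)) ** (-m)
        total += integrand * h
    return total / pi

# dual-branch MRC, Rayleigh fading (m = 1), mean SNR 10 (10 dB) per branch
print(ber_bpsk_mrc_nakagami([(10.0, 1.0), (10.0, 1.0)]))
```

Because each branch contributes its own (γ̄, m) pair, the non-identically-distributed case the abstract mentions needs no extra machinery.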
Foreign Exchange Rate Futures Trends: Foreign Exchange Risk or Systematic Forecasting Errors?
Marcelo Cunha Medeiros
2006-12-01
The forward exchange rate is widely used in international finance whenever an analysis of the expected depreciation is needed. It is also used to identify the currency risk premium. The difference between the spot rate and the forward rate is supposed to be a predictor of the future movements of the spot rate, but this prediction is hardly precise. The fact that the forward rate is a biased predictor of the future change in the spot rate can be attributed to a currency risk premium. The bias can also be attributed to systematic errors in forecasting the future depreciation of the currency. This paper analyzes the nature of the risk premium and of the prediction errors in using the forward rate. It looks into the efficiency and rationality of the futures market in Brazil from April 1995 to December 1998, a period of controlled exchange rates.
The effect of sampling on estimates of lexical specificity and error rates.
Rowland, Caroline F; Fletcher, Sarah L
2006-11-01
Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different-sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were as likely to under- as to overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as one from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system, and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
Zollanvari, Amin; Genton, Marc G
2013-05-24
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
On zero-rate error exponent for BSC with noisy feedback
Burnashev, Marat V
2008-01-01
A binary symmetric channel is used for the information transmission. There is also another noisy binary symmetric channel (the feedback channel), and the transmitter observes without delay all the outputs of the forward channel via that feedback channel. The transmission of a nonexponential number of messages (i.e. a transmission rate of zero) is considered. The achievable decoding error exponent for such a combination of channels is investigated. It is shown that if the crossover probability of the feedback channel is less than a certain positive value, then the achievable error exponent is better than the corresponding error exponent of the no-feedback channel. The transmission method described and the corresponding lower bound for the error exponent can be strengthened, and also extended to positive transmission rates.
Difference of soft error rates in SOI SRAM induced by various high energy ion species
Abo, Satoshi; Masuda, Naoyuki; Wakaya, Fujio; Lohner, Tivadar; Takai, Mikio (Center for Quantum Science and Technology Under Extreme Conditions, Osaka University); Onoda, Shinobu; Makino, Takahiro; Hirao, Toshio; Ohshima, Takeshi (Quantum Beam Science Directorate, Japan Atomic Energy Agency); Iwamatsu, Toshiaki; Oda, Hidekazu (Renesas Electronics Corporation)
2012-02-15
Soft error rates in silicon-on-insulator (SOI) static random access memories (SRAMs) with a technology node of 90 nm have been investigated with beryllium and carbon ion probes. The soft error rates induced by the beryllium and carbon probes started to increase at probe energies of 5.0 and 8.5 MeV respectively, at which the probes just penetrated the over-layer, and saturated at energies of 7.0 and 9.0 MeV and above, at which the charge generated in the SOI body exceeded the critical charge. The soft error rates in the SOI SRAMs under various ion probes were also compared with the charge generated in the SOI body. The soft error rates induced by hydrogen and helium ion probes were 1-2 orders of magnitude lower than those induced by beryllium, carbon and oxygen ion probes. The soft error rates thus depend not only on the charge generated in the SOI body but also on the incident ion species.
2011-01-26
... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 52 RIN 2060-AQ66 Determinations Concerning Need for Error Correction, Partial Approval... Determination Concerning the Need for Error Correction, Partial Approval and Partial Disapproval, and...
de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente
2016-07-08
Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and [Formula: see text]), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of study it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we pay special attention to situations where only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
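As a toy illustration of cross-validation-based misclassification error estimation (far simpler than the authors' non-linear mixed-effects setting), a leave-one-out estimate for a nearest-centroid classifier on made-up one-dimensional data:

```python
import random

random.seed(1)

def nearest_centroid_predict(train, x):
    """Classify scalar x by the nearest class mean of the training pairs."""
    feats = {}
    for f, y in train:
        feats.setdefault(y, []).append(f)
    means = {y: sum(v) / len(v) for y, v in feats.items()}
    return min(means, key=lambda y: abs(x - means[y]))

def loo_error(data):
    """Leave-one-out estimate of the misclassification error rate:
    refit without each point, then test on the held-out point."""
    wrong = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        wrong += nearest_centroid_predict(train, x) != y
    return wrong / len(data)

# made-up 1-D data: class 0 centred at 0, class 1 centred at 2
data = [(random.gauss(0.0, 1.0), 0) for _ in range(30)] + \
       [(random.gauss(2.0, 1.0), 1) for _ in range(30)]
print(loo_error(data))
```

With small samples, repeating this over resampled data sets shows the variability and bias the abstract discusses; the bootstrap variants differ mainly in how the resampling is done.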
The determinants of exchange rate in Croatia
Manuel BENAZIC
2016-06-01
The dilemma for every country with an independent monetary policy is which kind of exchange rate arrangement should be applied. Through exchange rate policy, countries can influence their economies, i.e. price stability and export competitiveness. Croatia is a new EU member state; it has its own monetary policy and currency, but it is on the way to euro introduction. Given the experience of the beginning of the 1990s, when Croatia was faced with serious monetary instability and hyperinflation, the goal of the Croatian National Bank (CNB) is to ensure price stability, and one way to do so is through exchange rate policy. Croatia, as a small and open economy, has applied a managed floating exchange rate regime. The exchange rate is determined primarily by foreign exchange supply and demand on the foreign exchange market, with occasional market interventions by the CNB. Therefore, in order to maintain exchange rate stability, policymakers must be able to recognize how changes in these factors affect changes in the exchange rate. This research aims to find a relationship among the main sources of foreign currency inflow and outflow and the level of the exchange rate in Croatia. The analysis is carried out using the bounds testing (ARDL) approach to co-integration. The results indicate the existence of a stable co-integration relationship between the observed variables, whereby an increase in the majority of the variables leads to an exchange rate appreciation.
High speed and adaptable error correction for megabit/s rate quantum key distribution.
Dixon, A R; Sato, H
2014-12-02
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
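For context on where error correction enters the key rate: in the standard asymptotic BB84 analysis, reconciliation leaks f·h2(e) bits per sifted bit (f ≥ 1 is the reconciliation inefficiency; the 90-94% figures above correspond to f close to 1) and privacy amplification removes a further h2(e). A sketch with illustrative numbers, not the paper's measured rates:

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def secure_key_rate(sifted_rate, qber, f_ec=1.1):
    """Asymptotic BB84 secure key rate (bits/s): error correction leaks
    f_ec * h2(e) per sifted bit, privacy amplification removes h2(e)."""
    fraction = 1.0 - f_ec * h2(qber) - h2(qber)
    return max(0.0, sifted_rate * fraction)

# illustrative: 1 Mbit/s sifted rate at 2% QBER
print(secure_key_rate(1e6, 0.02))
```

The sensitivity of the secure fraction to f_ec is why pushing LDPC decoding efficiency from 90% toward 94% of ideal translates directly into final key rate.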
Block Recovery Rate-Based Unequal Error Protection for Three-Screen TV
Hojin Ha
2017-02-01
This paper describes a three-screen television system using block recovery rate (BRR)-based unequal error protection (UEP). The proposed in-home wireless network uses scalable video coding (SVC) and UEP with forward error correction (FEC) to maximize the quality of service (QoS) over error-prone wireless networks. For efficient FEC packet assignment, this paper proposes a simple and efficient performance metric, the BRR, defined as the recovery rate of a temporal and quality layer under FEC assignment, obtained by analyzing the hierarchical prediction structure including the current packet loss. It also explains the SVC layer switching scheme used according to network conditions such as packet loss rate (PLR) and available bandwidth (ABW). In the experiments conducted, gains in video quality with the proposed UEP scheme vary from 1 to 3 dB in Y-peak signal-to-noise ratio (PSNR), with corresponding subjective video quality improvements.
Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J
2014-01-01
Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.
Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam (Dept. of Nuclear Medicine, Severance Hospital, Yonsei University, Seoul (Korea, Republic of)); Park, Hoon Hee (Dept. of Radiological Technology, Shingu College, Sungnam (Korea, Republic of))
2013-12-15
This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on this effect. We produced point sources of 99mTcO4- with 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images at 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame at 10-second intervals in the first experiment (p>0.05), but the average count rates were significantly low for sources over 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression analysis of the count rates acquired from the 5 gamma cameras over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest, at 0.0064%, while its gradient and coefficient of variation were the highest, at 0.0042 and 0.229 respectively. We found no abnormal fluctuation in the χ² test of count rates (p>0.02), and found homogeneity of variance among the gamma cameras in Levene's F-test (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, according to the calculation of the T1/2 error for gradient changes from -0.25% to +0.25%, the error increases when T1/2 is relatively long or the gradient is high. Estimating the value for the fourth camera, which has the highest gradient, no T1/2 error was seen within 60 minutes. In conclusion, strict quality management is necessary for scintillation gamma cameras used in the medical field.
Estimation of the minimum mRNA splicing error rate in vertebrates.
Skandalis, A
2016-01-01
The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons.
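Under the ~0.1% per-intron error rate stated above, and assuming independent errors across introns (an assumption of this sketch, not a claim of the paper), the expected fraction of aberrant transcripts per gene follows directly:

```python
def aberrant_fraction(per_intron_error, n_introns):
    """Fraction of transcripts carrying at least one splicing error,
    assuming independent errors at each intron: 1 - (1 - p)^k."""
    return 1.0 - (1.0 - per_intron_error) ** n_introns

# ~0.1% per intron for a gene with 8 introns -> about 0.8% of transcripts
print(aberrant_fraction(0.001, 8))
```

Even a minimal per-intron error rate therefore compounds noticeably for the many vertebrate genes with tens of introns.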
Analytical expression for the bit error rate of cascaded all-optical regenerators
Mørk, Jesper; Öhman, Filip; Bischoff, S.
2003-01-01
We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.
Error Analysis on Determination of Specific Heat of Agricultural Products Using Mixture Method
Ropiudin
2006-04-01
This paper reports an error analysis in the determination of the specific heat of agricultural products using the mixture method. Six variables were evaluated for measurement error: mass of water, mass of sample, equilibrium temperature, water temperature, sample temperature and calorimeter temperature. In experiments on potatoes and carrots, the calorimeter temperature gave the biggest error contribution, based on a simulation conducted using Visual Basic for Applications (VBA) in Microsoft Word.
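The mixture method rests on a heat balance: heat lost by the sample equals heat gained by the water and the calorimeter. A sketch with invented masses, temperatures, and an assumed calorimeter heat capacity (none of these values come from the paper), plus a finite-difference sensitivity check of the kind the error analysis performs:

```python
def specific_heat_mixture(m_s, T_s, m_w, T_w, T_cal, T_e,
                          c_w=4186.0, C_cal=80.0):
    """Specific heat of the sample (J/(kg K)) from the heat balance:
    heat lost by sample = heat gained by water + calorimeter.
    c_w is water's specific heat; C_cal (J/K) is an assumed
    lumped calorimeter heat capacity."""
    gained = m_w * c_w * (T_e - T_w) + C_cal * (T_e - T_cal)
    return gained / (m_s * (T_s - T_e))

# invented example: 50 g sample at 60 C dropped into 200 g of water at 20 C
c = specific_heat_mixture(0.05, 60.0, 0.2, 20.0, 20.0, 26.4)

# finite-difference sensitivity: effect of a 0.1 K error in T_e
dc = abs(specific_heat_mixture(0.05, 60.0, 0.2, 20.0, 20.0, 26.5) - c)
print(c, dc)
```

Repeating the perturbation for each of the six measured variables ranks their error contributions, which is the comparison the paper reports.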
Zbigniew Staroszczyk
2014-12-01
In the paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits a frequency-domain descriptor of the conditioning paths, found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors
Error rates in forensic DNA analysis: definition, numbers, impact and communication.
Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid
2014-09-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency, for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case; there, case-specific probabilities of undetected errors are needed.
Berhane Yemane
2008-03-01
Background: As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods: This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results: The composition of the Butajira population was well represented despite the introduction of random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion: The low sensitivity of parameter
Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications
Chen, Wen; Yu, Chao; Cai, Miaomiao
2017-04-01
GNSS-based single-antenna pseudo-attitude determination has attracted more and more attention in the field of high-dynamic navigation due to its low cost, low system complexity, and freedom from temporally accumulated errors. Related research indicates that this method can be an important complement, or even an alternative, to the traditional sensors for general accuracy requirements (such as small UAV navigation). The application of the single-antenna attitude determination method to low-dynamic carriers has only just begun. Different from the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the earth. Thus it inevitably contains some deviations compared with the real attitude angle. In low-dynamic applications, these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: the measurement error, the offset error, and the lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies, but lack theoretical support. In this paper, we provide a quantitative description of the three types of errors and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: Error compensation; Single-antenna; GNSS; Attitude determination; Low-dynamic
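The trajectory-based pseudo-attitude can be sketched directly as the heading and flight-path angle of the GNSS velocity vector in a north-east-down frame; the deviations the paper quantifies are precisely the difference between these trajectory angles and the true body attitude. Illustrative values only, not the authors' implementation:

```python
from math import atan2, degrees, sqrt

def pseudo_attitude(v_n, v_e, v_d):
    """Heading and flight-path angle (degrees) of the velocity vector
    in a north-east-down frame. These track the carrier trajectory
    rather than the true body attitude, so at low dynamics they can
    differ from it noticeably."""
    heading = atan2(v_e, v_n)
    flight_path = atan2(-v_d, sqrt(v_n**2 + v_e**2))
    return degrees(heading), degrees(flight_path)

# moving north-east at ~14 m/s while climbing at 1 m/s
print(pseudo_attitude(10.0, 10.0, -1.0))
```

At near-zero speed the angles become ill-conditioned (atan2 of small numbers), which is one reason the low-dynamic case needs the error compensation the paper develops.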
Frequency and determinants of drug administration errors in the intensive care unit
van den Bemt, PMLA; Fijn, R; van der Voort, PHJ; Gossen, AA; Egberts, TCG; Brouwers, JRBJ
2002-01-01
Objective: The study aimed to identify both the frequency and the determinants of drug administration errors in the intensive care unit. Design: Administration errors were detected by using the disguised-observation technique (observation of medication administrations by nurses, without revealing t
Cold Vacuum Drying (CVD) OCRWM Loop Error Determination
PHILIPP, B.L.
2000-07-26
Characterization is specifically identified by the Richland Operations Office (RL) for the Office of Civilian Radioactive Waste Management (OCRWM) of the US Department of Energy (DOE), as requiring application of the requirements in the Quality Assurance Requirements and Description (QARD) (RW-0333P DOE 1997a). Those analyses that provide information that is necessary for repository acceptance require application of the QARD. The cold vacuum drying (CVD) project identified the loops that measure, display, and record multi-canister overpack (MCO) vacuum pressure and Tempered Water (TW) temperature data as providing OCRWM data per Application of the Office of Civilian Radioactive Waste Management (OCRWM) Quality Assurance Requirements to the Hanford Spent Nuclear Fuel Project HNF-SD-SNF-RPT-007. Vacuum pressure transmitters (PT 1*08, 1*10) and TW temperature transmitters (TIT-3*05, 3*12) are used to verify drying and to determine the water content within the MCO after CVD.
Analytical expression for the bit error rate of cascaded all-optical regenerators
Mørk, Jesper; Öhman, Filip; Bischoff, S.
2003-01-01
We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed....
Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah
2016-01-01
Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in non-model organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown
Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies
Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.
2010-01-01
We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.
Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding
Haider M. AlSabbagh
2012-03-01
The minimum energy (ME) coding scheme combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) in relation to the number of users (receivers). Minimum energy coding exploits redundant bits to save power while utilizing an RF link and On-Off Keying modulation. The relations are presented and discussed for several levels of expected channel errors, in terms of bit error rates and SNR for varying numbers of users (receivers).
A minimum bit error-rate detector for amplify and forward relaying systems
Ahmed, Qasim Zeeshan
2012-05-01
In this paper, a new detector is proposed for amplify-and-forward (AF) relaying systems communicating with the assistance of L relays. The major goal of this detector is to improve the bit error rate (BER) performance of the system. The complexity of the system is further reduced by implementing this detector adaptively. The proposed detector is free from channel estimation. Our results demonstrate that the proposed detector is capable of achieving a gain of more than 1 dB at a BER of 10^-5 as compared to the conventional minimum mean square error detector when communicating over a correlated Rayleigh fading channel. © 2012 IEEE.
Ahmed, Qasim Zeeshan
2014-04-01
The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of existing linear detectors such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.
Bányai, László; Patthy, László
2016-08-01
A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.
Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.
Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda
2017-09-01
We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.
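The mediational pathway tested here (parental symptoms → cognitive errors → inattention ratings) follows the standard product-of-coefficients approach: estimate the X→M path (a) and the M→Y path controlling for X (b), and take a·b as the indirect effect. A minimal sketch on synthetic data, with hypothetical path strengths rather than the study's estimates:

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients of y on X (intercept appended last)."""
    A = np.column_stack([X, np.ones(len(y))])
    return np.linalg.lstsq(A, y, rcond=None)[0]

rng = np.random.default_rng(7)
n = 2000
# Hypothetical data: X = parental symptoms, M = cognitive errors,
# Y = inattention rating; true paths a = 0.5, b = 0.6
symptoms = rng.normal(size=n)
cog_errors = 0.5 * symptoms + rng.normal(size=n)
inatt = 0.6 * cog_errors + 0.1 * symptoms + rng.normal(size=n)

a = ols(symptoms[:, None], cog_errors)[0]                    # X -> M path
b = ols(np.column_stack([cog_errors, symptoms]), inatt)[0]   # M -> Y given X
indirect = a * b   # product-of-coefficients mediated effect (true ~ 0.30)
```

In practice the significance of `indirect` would be assessed with bootstrap confidence intervals rather than read off directly.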
Wright, Timothy J; Boot, Walter R; Morgan, Chelsea S
2013-09-01
Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB.
Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor
2016-01-01
When an Unmanned Aircraft System (UAS) encounters an intruder and is unable to maintain required temporal and spatial separation between the two vehicles, it is referred to as a loss of well-clear. In this state, the UAS must make its best attempt to regain separation while maximizing the minimum separation between itself and the intruder. When encountering a non-cooperative intruder (an aircraft operating under visual flight rules without ADS-B or an active transponder), the UAS must rely on its radar system to provide the intruder's location, velocity, and heading information. As many UAS have limited climb and descent performance, vertical position and/or vertical rate errors make it difficult to determine whether an intruder will pass above or below them. To account for that, there is a proposal by RTCA Special Committee 228 to prohibit guidance systems from providing vertical guidance to regain well-clear to UAS in an encounter with a non-cooperative intruder unless their radar system has vertical position error below 175 feet (95%) and vertical velocity errors below 200 fpm (95%). Two sets of fast-time parametric studies were conducted, each with 54,000 pairwise encounters between a UAS and a non-cooperative intruder, to determine the suitability of offering vertical guidance to regain well-clear to a UAS in the presence of radar sensor noise. The UAS was not allowed to maneuver until it received well-clear recovery guidance. The maximum severity of the loss of well-clear was logged and used as the primary indicator of the separation achieved by the UAS. One set of 54,000 encounters allowed the UAS to maneuver either vertically or horizontally, while the second permitted horizontal maneuvers only. Comparing the two data sets allowed researchers to see the effect of allowing vertical guidance to a UAS for a particular encounter and vertical rate error. Study results show there is a small reduction in the average severity of a loss of well-clear when vertical maneuvers
Vieira, Daniel; Krems, Roman V.
2017-02-01
We present an approach using a combination of coupled channel scattering calculations with a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate constants for non-adiabatic transitions in inelastic atomic collisions to variations of the underlying adiabatic interaction potentials. Using this approach, we improve the previous computations of the rate constants for the fine-structure transitions in collisions of O(^3P_j) with atomic H. We compute the error bars of the rate constants corresponding to 20% variations of the ab initio potentials and show that this method can be used to determine which of the individual adiabatic potentials are more or less important for the outcome of different fine-structure changing collisions.
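As an illustration of the underlying idea (not the authors' coupled-channel pipeline), a plain Gaussian Process regressor with an RBF kernel can interpolate a quantity computed at a few potential-scaling factors across a ±20% range, with the predictive standard deviation serving as an error bar. The scaling grid, kernel hyperparameters and stand-in "rate" function below are all assumptions:

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=0.2, sigma_f=1.0, noise=1e-8):
    """Plain Gaussian Process regression with an RBF kernel.
    Returns the predictive mean and standard deviation at x_test."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(x_train, x_train) + noise * np.eye(x_train.size)
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = sigma_f**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

# Hypothetical: a rate constant evaluated at a few potential scaling factors
scale = np.array([0.8, 0.9, 1.0, 1.1, 1.2])     # +/- 20% variations
rate = np.exp(-(scale - 1.0) ** 2 / 0.1)        # stand-in for scattering output
grid = np.linspace(0.8, 1.2, 81)
mean, std = gp_predict(scale, rate, grid)
# mean interpolates the training points; std brackets the prediction between them
```

With many adiabatic potentials, one GP per potential (or a multi-dimensional kernel) would expose which potential the rate constant is most sensitive to.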
Robust estimation of error covariance functions in GRACE gravity field determination
Behzadpour, Saniya; Mayer-Gürr, Torsten; Flury, Jakob
2016-04-01
The accurate modelling of the stochastic behaviour of the GRACE mission observations is an important task in the time variable gravity field determination. After fitting a model in the least-squares sense, it is necessary to determine whether all the necessary model assumptions, i.e., independence, normality, and homoscedasticity of the residuals, are valid before performing inference. Checking the model assumptions for the range rate residuals, it has been concluded that one of the major problems in the range rate observations is the outliers in the data. One way to deal with this problem is to implement a robust estimation procedure to dampen the effect of observations that would be highly influential if least squares were used. In addition to insensitivity to outliers, such a procedure tends to leave the residuals associated with outliers large, therefore making the identification of outliers much easier. Implementation of this procedure using robust error covariance functions, comparison of different robust estimators, e.g., Huber's and Tukey's estimators, and assessing the detected outliers with respect to temporal and spatial patterns are discussed.
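The robust estimation idea can be illustrated with iteratively reweighted least squares using Huber weights, which leaves outlier residuals large (making them easy to flag) while damping their influence on the fit. This is a generic sketch on a toy line fit, not the GRACE processing chain:

```python
import numpy as np

def huber_irls(A, y, k=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights.
    k is the usual tuning constant in units of the robust residual scale."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust sigma (MAD)
        u = np.abs(r) / max(scale, 1e-12)
        w = np.where(u <= k, 1.0, k / u)                      # Huber weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x

# Straight-line fit (slope 2, intercept 1) with one gross outlier
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 1.0
y[10] += 50.0                                   # outlier
A = np.column_stack([t, np.ones_like(t)])
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]     # pulled far off by the outlier
x_rob = huber_irls(A, y)                        # stays near (2, 1)
```

Tukey's biweight differs only in the weight function (it cuts outliers off completely instead of bounding their influence), so comparing estimators amounts to swapping the `w` line.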
Study on Cell Error Rate of a Satellite ATM System Based on CDMA
赵彤宇; 张乃通
2003-01-01
In this paper, the cell error rate (CER) of a CDMA-based satellite ATM system is analyzed. Two fading models, i.e. the partial fading model and the total fading model, are presented according to multi-path propagation fading and shadow effect. Based on the total shadow model, the relation of CER vs. the number of subscribers at various elevations under 2D-RAKE receiving and non-diversity receiving is obtained. The impact of pseudo-noise (PN) code length on the cell error rate is also considered. It is found that maximum-likelihood combination of multi-path signals does not improve system performance when multiple access interference (MAI) is small; on the contrary, performance may even be worse.
Novel relations between the ergodic capacity and the average bit error rate
Yilmaz, Ferkan
2011-11-01
Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, the direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
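Both performance measures are easy to evaluate numerically for a concrete case. As a minimal sketch (plain numerical averaging over a Rayleigh fading SNR density, not the closed-form relations proposed in the paper):

```python
import numpy as np
from scipy import integrate, special

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * special.erfc(x / np.sqrt(2.0))

def rayleigh_averages(mean_snr_db):
    """Ergodic capacity (bit/s/Hz) and average BPSK BER over a Rayleigh
    fading channel, by integrating over the exponential SNR density."""
    g = 10 ** (mean_snr_db / 10)                  # mean SNR (linear)
    pdf = lambda s: np.exp(-s / g) / g            # exponential SNR density
    cap, _ = integrate.quad(lambda s: np.log2(1 + s) * pdf(s), 0, np.inf)
    ber, _ = integrate.quad(lambda s: q_func(np.sqrt(2 * s)) * pdf(s), 0, np.inf)
    return cap, ber

cap, ber = rayleigh_averages(10.0)
# sanity check: the average BPSK BER over Rayleigh fading has the closed
# form 0.5 * (1 - sqrt(g / (1 + g)))
```

Tabulating `cap` against `ber` over a sweep of mean SNR traces out exactly the kind of capacity-versus-BER relationship the paper formalizes.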
LaPorte, Gerald M; Stephens, Joseph C; Beuchel, Amanda K
2010-01-01
The examination of printing defects, or imperfections, found on printed or copied documents has been recognized as a generally accepted approach for linking questioned documents to a common source. This research paper will highlight the results from two mutually exclusive studies. The first involved the examination and characterization of printing defects found in a controlled production run of 500,000 envelopes bearing text and images. It was concluded that printing defects are random occurrences and that morphological differences can be used to identify variations within the same production batch. The second part incorporated a blind study to assess the error rate of associating randomly selected envelopes from different retail locations to a known source. The examination was based on the comparison of printing defects in the security patterns found in some envelopes. The results demonstrated that it is possible to associate envelopes to a common origin with a 0% error rate.
Smadi, Mahmoud A.
2012-12-06
In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation within a 95 % confidence interval. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
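The confidence-interval bookkeeping described here can be illustrated with a plain Monte Carlo BER estimate for coherent BPSK. The sketch below uses an AWGN-only channel with perfect phase recovery, a simplification of the paper's Nakagami-m setting:

```python
import numpy as np

def simulate_ber(ebn0_db, n_bits, rng):
    """Monte Carlo BER of coherent BPSK over AWGN, with a normal-approximation
    95% confidence interval on the estimate."""
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                           # BPSK: 0 -> +1, 1 -> -1
    noise = rng.normal(0.0, np.sqrt(1 / (2 * ebn0)), n_bits)
    decisions = (symbols + noise) < 0                # decide bit 1 if negative
    ber = np.mean(decisions != bits.astype(bool))
    half = 1.96 * np.sqrt(ber * (1 - ber) / n_bits)  # 95% half-width
    return ber, (ber - half, ber + half)

rng = np.random.default_rng(0)
ber, (lo, hi) = simulate_ber(4.0, 1_000_000, rng)
# the interval half-width shrinks as 1/sqrt(n_bits), as the abstract notes
```

The theoretical value for comparison is 0.5·erfc(√(Eb/N0)), about 0.0125 at 4 dB.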
Novel Relations between the Ergodic Capacity and the Average Bit Error Rate
Yilmaz, Ferkan
2012-01-01
Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, the direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their...
Chen, Jian; Dutton, Zachary; Lazarus, Richard; Guha, Saikat
2011-01-01
The quantum states of two laser pulses---coherent states---are never mutually orthogonal, making perfect discrimination impossible. Even so, coherent states can achieve the ultimate quantum limit for capacity of a classical channel, the Holevo capacity. Attaining this requires the receiver to make joint-detection measurements on long codeword blocks, optical implementations of which remain unknown. We report the first experimental demonstration of a joint-detection receiver, demodulating quaternary pulse-position-modulation (PPM) codewords at a word error rate of up to 40% (2.2 dB) below that attained with direct-detection, the largest error-rate improvement over the standard quantum limit reported to date. This is accomplished with a conditional nulling receiver, which uses optimized-amplitude coherent pulse nulling, single photon detection and quantum feedforward. We further show how this translates into coding complexity improvements for practical PPM systems, such as in deep-space communication. We antici...
Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites
Jia Tu
2012-01-01
Limiting factors for the precise orbit determination (POD) of low-earth-orbit (LEO) satellites using dual-frequency GPS are nowadays mainly encountered with the in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part of the phase error model are estimated, respectively, by bin-wise mean and standard deviation values of phase postfit residuals computed by orbit determination. By removing the systematic component and adjusting the weight of phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1–31 January 2006 are processed, and three types of orbit solutions are obtained: POD without phase error model correction, POD with mean value correction of the phase error model, and POD with phase error model correction. The three-dimensional (3D) orbit improvements derived from phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. It is also demonstrated by phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
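The bin-wise systematic/random decomposition of postfit residuals can be sketched as follows. Binning by a single direction angle and the simulated sinusoidal residual model are simplifying assumptions (the actual model bins by the two-dimensional direction of signal reception):

```python
import numpy as np

def binwise_phase_error_model(azimuth_deg, residuals, bin_width=10.0):
    """Bin phase postfit residuals by signal direction. Per bin, the mean is
    the systematic part and the standard deviation the random part.
    Returns mean-corrected residuals and 1/std**2 observation weights."""
    edges = np.arange(0.0, 360.0 + bin_width, bin_width)
    idx = np.digitize(azimuth_deg, edges) - 1
    means = np.zeros(len(edges) - 1)
    stds = np.ones(len(edges) - 1)
    for b in range(len(edges) - 1):
        sel = idx == b
        if sel.sum() > 1:
            means[b] = residuals[sel].mean()
            stds[b] = residuals[sel].std()
    corrected = residuals - means[idx]       # remove systematic component
    weights = 1.0 / stds[idx] ** 2           # down-weight noisy directions
    return corrected, weights

# Simulated residuals: direction-dependent bias plus white noise (meters)
rng = np.random.default_rng(1)
az = rng.uniform(0.0, 360.0, 5000)
res = 0.002 * np.sin(np.radians(az)) + rng.normal(0.0, 0.001, az.size)
corr, w = binwise_phase_error_model(az, res)
# the corrected residuals have zero mean per bin and reduced overall scatter
```

In the POD context, `corrected` stands in for the phase observations after systematic correction, and `w` for the reweighting applied in the next adjustment.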
A minimum-error, energy-constrained neural code is an instantaneous-rate code.
Johnson, Erik C; Jones, Douglas L; Ratnam, Rama
2016-04-01
Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al., Frontiers in Computational Neuroscience, 9:61, 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals.
Reducing error rates in straintronic multiferroic nanomagnetic logic by pulse shaping.
Munira, Kamaram; Xie, Yunkun; Nadri, Souheil; Forgues, Mark B; Fashami, Mohammad Salehi; Atulasimha, Jayasimha; Bandyopadhyay, Supriyo; Ghosh, Avik W
2015-06-19
Dipole-coupled nanomagnetic logic (NML), where nanomagnets (NMs) with bistable magnetization states act as binary switches and information is transferred between them via dipole-coupling and Bennett clocking, is a potential replacement for conventional transistor logic since magnets dissipate less energy than transistors when they switch in a logic circuit. Magnets are also 'non-volatile' and hence can store the results of a computation after the computation is over, thereby doubling as both logic and memory-a feat that transistors cannot achieve. However, dipole-coupled NML is much more error-prone than transistor logic at room temperature because thermal noise can easily disrupt magnetization dynamics. Here, we study a particularly energy-efficient version of dipole-coupled NML known as straintronic multiferroic logic (SML) where magnets are clocked/switched with electrically generated mechanical strain. By appropriately 'shaping' the voltage pulse that generates strain, we show that the error rate in SML can be reduced to tolerable limits. We describe the error probabilities associated with various stress pulse shapes and discuss the trade-off between error rate and switching speed in SML. The lowest error probability is obtained when a 'shaped' high voltage pulse is applied to strain the output NM followed by a low voltage pulse. The high voltage pulse quickly rotates the output magnet's magnetization by 90° and aligns it roughly along the minor (or hard) axis of the NM. Next, the low voltage pulse produces the critical strain to overcome the shape anisotropy energy barrier in the NM and produce a monostable potential energy profile in the presence of dipole coupling from the neighboring NM. The magnetization of the output NM then migrates to the global energy minimum in this monostable profile and completes a 180° rotation (magnetization flip) with high likelihood.
A forward error correction technique using a high-speed, high-rate single chip codec
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
1989-01-01
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.
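What a 2.5 dB coding gain at a 10^-5 bit-error rate means can be checked with a short calculation: find the Eb/N0 at which uncoded BPSK over AWGN reaches 10^-5, then subtract the gain. This is a generic illustration, not the codec's actual performance analysis:

```python
import math

def bpsk_ber(ebn0_db):
    """Uncoded coherent BPSK bit error rate over AWGN."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def required_ebn0_db(target_ber, lo=0.0, hi=20.0):
    """Bisect for the Eb/N0 (dB) giving the target BER (BER is monotone
    decreasing in Eb/N0)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bpsk_ber(mid) > target_ber else (lo, mid)
    return 0.5 * (lo + hi)

uncoded = required_ebn0_db(1e-5)   # about 9.6 dB for uncoded BPSK
with_gain = uncoded - 2.5          # a 2.5 dB coding gain moves the operating
                                   # point here, at a ~14% rate overhead (7/8)
```

The gain figure is net of the bandwidth cost: with a rate-7/8 code, only 1/8 of the transmitted bits are overhead, which is why the bandspreading is described as low.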
Comparing Response Times and Error Rates in a Simultaneous Masking Paradigm
F Hermens
2014-08-01
In simultaneous masking, performance on a foveally presented target is impaired by one or more flanking elements. Previous studies have demonstrated strong effects of the grouping of the target and the flankers on the strength of masking (e.g., Malania, Herzog & Westheimer, 2007). These studies have predominantly measured offset discrimination thresholds as the index of performance, and it is therefore unclear whether other measures provide similar outcomes. A recent study, which examined the role of grouping on error rates and response times in a speeded vernier offset discrimination task similar to that used by Malania et al. (2007), suggested a possible dissociation between the two measures, with error rates mimicking threshold performance, but response times showing differential results (Panis & Hermens, 2014). We here report the outcomes of three experiments examining this possible dissociation, and demonstrate an overall similar pattern of results for error rates and response times across a broad range of mask layouts. Moreover, the pattern of results in our experiments strongly correlates with threshold performance reported earlier (Malania et al., 2007). Our results suggest that outcomes in a simultaneous masking paradigm do not critically depend on the outcome measure used, and therefore provide evidence for a common underlying mechanism.
Determination of error measurement by means of the basic magnetization curve
Lankin, M. V.; Lankin, A. M.
2016-04-01
The article describes the implementation of a methodology for determining measurement error by means of the basic magnetization curve of electric cutting machines. The basic magnetization curve, as an integral operating characteristic, allows one to identify a fault type. In the measurement process, the calculation of the error of the basic magnetization curve plays a major role, as inaccuracies in this characteristic can have a deleterious effect.
Bit error rate testing of fiber optic data links for MMIC-based phased array antennas
Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.
1990-06-01
The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates
Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl
2007-01-01
The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback, via a cross-layer approach, over the satellite link is also simulated. The Inmarsat BGAN system at 256 kbit/s is used as the test case. This system operates at low loss rates, guaranteeing a packet loss rate of not more than 10^-3. For high-end applications such as 'reporter-in-the-field' live broadcast, it is crucial to obtain high quality without increasing delay.
Error Baseline Rates of Five Sequencing Strategies Used for RNA Virus Population Characterization
2017-01-31
viral evolution, including the emergence of resistance to medical countermeasures. To explore the sources of error in the determination of the... pressure on the evolution of viral genotypes and phenotypes, optimizing vaccine design, and identifying virus genome mutations that may lead to... Next-generation sequencing (NGS) technologies have had a dramatic impact on the experimental analysis of viral genetic diversity. With NGS, a virus population's genomic...
Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin
2016-09-01
To meet the very demanding requirements of space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to offer the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass are addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution to the three-dimensional position can be attained. Third, with the assumption of Gaussian beams, the error propagation models for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of beam direction are given, respectively. Finally, numerical simulations taking into account the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is just comparable to the error of the output of each sensor.
Computational Medical Apportionment Determination for Impairment Ratings
Artz, Jerry; Thompson, Marten; Alchemy, Md, John; Penn, Md, Daniel
2017-01-01
Unique computational techniques are used to calculate apportionment percentages for Whole Person Impairment (WPI) Ratings for workers with job-related injuries/illnesses. This interdisciplinary project includes collaboration among physicists, engineers, and concerned medical professionals. Medical providers are often asked to medically determine multiple contributing factors to disease states (e.g. diabetes, obesity, arthritis, and prior injury) in the context of personal injury as it pertains to permanent impairment. The process of making this determination is referred to as ``apportionment''. The economic value of apportionment is far reaching and represents a significant impact to all stakeholders in the injury resolution and settlement arena. The process of apportionment is necessary to assign monetary value for the stakeholders when an injury occurs. The ultimate trier-of-fact is the judicial system. The medical provider's role in this capacity is to apply known medical scientific knowledge and present it in a format that is objective and reproducible for the stakeholders. In this presentation the traditional challenges of apportionment will be outlined, and a novel approach creating mathematical bounding and modeling of pathology-weighted data sets will be presented.
Determining the Numeracy and Algebra Errors of Students in a Two-Year Vocational School
Akyüz, Gözde
2015-01-01
The goal of this study was to determine the mathematics achievement level in basic numeracy and algebra concepts of students in a two-year program in a technical vocational school of higher education and determine the errors that they make in these topics. The researcher developed a diagnostic mathematics achievement test related to numeracy and…
26 CFR 1.1312-8 - Law applicable in determination of error.
2010-04-01
... 26 CFR 1.1312-8 (Internal Revenue Service, Department of the Treasury): Law applicable in determination of error. ... exclusion, omission, allowance, disallowance, recognition, or nonrecognition is determined under ...
EVALUATION OF ERRORS IN PARAMETERS DETERMINATION FOR THE EARTH HIGHLY ANOMALOUS GRAVITY FIELD
L. P. Staroseltsev
2016-05-01
Subject of Research. The paper presents research results and the simulation of errors in determining the Earth gravity field parameters for regions with high segmentation of the gravity field. The Kalman filtering estimation of determination errors is shown. Method. A simulation model for the realization of the inertial geodetic method for determining the Earth gravity field parameters is proposed. The model is based on a high-precision inertial navigation system (INS) with free gyros and a high-accuracy satellite system. The possibility of finding conformity between the deterministic and stochastic approaches to gravity potential modeling is shown with the example of a point-mass model. Main Results. Computer simulation shows that for determining the Earth gravity field parameters the gyro error model can be reduced to two significant indices, one for each gyro. It is also shown that for regions with high segmentation of the gravity field a point-mass model can be used. This model is a superposition of attractive and repulsive masses, the so-called gravitational dipole. Practical Relevance. The reduction of the gyro error model reduces the dimension of the Kalman filter used in the integrated system, which decreases the computation time and increases the visibility of the state vector. Finding the conformity between the deterministic and stochastic approaches allows the application of both deterministic and statistical terminology, and helps to create a simulation model for regions with high segmentation of the gravity field.
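The entry above estimates determination errors with a Kalman filter. As a minimal illustration of the predict/update cycle only, here is a scalar Kalman filter with hand-picked noise variances and a hypothetical gravity-related measurement; this is not the paper's INS/satellite model.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter estimating a (nearly) constant state.

    q: assumed process-noise variance, r: assumed measurement-noise variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state model is a constant, so only the variance grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(1)
true_value = 9.81  # hypothetical quantity to be estimated
noisy = [true_value + random.gauss(0.0, 0.2) for _ in range(200)]
est = kalman_1d(noisy)
print(round(est[-1], 3))
```

The filter converges toward the true value with a steady-state spread far below the raw measurement noise.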
Demirhan Erdal
2015-01-01
This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2011-06-01
In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
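As a building block for BER analyses like the one above, the single-link BPSK-over-AWGN error rate has the closed form 0.5·erfc(sqrt(Eb/N0)). The sketch below cross-checks that formula against a Monte Carlo simulation; it is not the paper's relaying scheme, just the underlying modulation, and the sample size and Eb/N0 value are arbitrary choices.

```python
import math
import random

def bpsk_ber_theory(ebn0_db):
    """Closed-form BPSK bit-error rate over AWGN: 0.5 * erfc(sqrt(Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def bpsk_ber_monte_carlo(ebn0_db, n_bits=200_000, seed=0):
    """Simulate unit-energy BPSK symbols in AWGN and count bit errors."""
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))  # noise std per real dimension
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0.0, sigma)
        if (received > 0.0) != bool(bit):
            errors += 1
    return errors / n_bits

theory = bpsk_ber_theory(4.0)
simulated = bpsk_ber_monte_carlo(4.0)
print(theory, simulated)
```

At 4 dB the two estimates agree to within Monte Carlo noise.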
On the symmetric α-stable distribution with application to symbol error rate calculations
Soury, Hamza
2016-12-24
The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single input single output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for generalized fading distributions, such as the extended generalized-k distribution. Later, simpler expressions of these error rates are deduced for selected special cases, and compact approximations are derived using asymptotic expansions.
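The inversion described above can be checked numerically in special cases: with unit scale, the symmetric stable characteristic function is exp(-|t|^alpha), which gives a Gaussian with variance 2 for alpha = 2 and a standard Cauchy for alpha = 1. A plain trapezoidal inversion (the integration limit and step count are arbitrary accuracy knobs, not from the paper):

```python
import math

def sas_pdf(x, alpha, t_max=30.0, n=30_000):
    """Symmetric alpha-stable PDF by numeric inversion of the
    characteristic function phi(t) = exp(-|t|**alpha) (unit scale):
        f(x) = (1/pi) * integral_0^inf cos(t*x) * exp(-t**alpha) dt
    evaluated with the composite trapezoidal rule.
    """
    dt = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.cos(t * x) * math.exp(-t ** alpha)
    return total * dt / math.pi

# Sanity checks: alpha=2 is Gaussian with variance 2, f(0) = 1/(2*sqrt(pi));
# alpha=1 is Cauchy, f(0) = 1/pi.
print(round(sas_pdf(0.0, 2.0), 4))  # → 0.2821
print(round(sas_pdf(0.0, 1.0), 4))  # → 0.3183
```

For intermediate alpha no elementary closed form exists, which is why the paper resorts to the Fox H function.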
Stability Comparison of Recordable Optical Discs—A Study of Error Rates in Harsh Conditions
Slattery, Oliver; Lu, Richang; Zheng, Jian; Byers, Fred; Tang, Xiao
2004-01-01
The reliability and longevity of any storage medium is a key issue for archivists and preservationists as well as for the creators of important information. This is particularly true in the case of digital media such as DVD and CD where a sufficient number of errors may render the disc unreadable. This paper describes an initial stability study of commercially available recordable DVD and CD media using accelerated aging tests under conditions of increased temperature and humidity. The effect of prolonged exposure to direct light is also investigated and shown to have an effect on the error rates of the media. Initial results show that high quality optical media have very stable characteristics and may be suitable for long-term storage applications. However, results also indicate that significant differences exist in the stability of recordable optical media from different manufacturers. PMID:27366630
Learning High-Dimensional Markov Forest Distributions: Analysis of Error Rates
Tan, Vincent Y F; Willsky, Alan S
2010-01-01
The problem of learning forest-structured discrete graphical models from i.i.d. samples is considered. An algorithm based on pruning of the Chow-Liu tree through adaptive thresholding is proposed. It is shown that this algorithm is both structurally consistent and risk consistent and the error probability of structure learning decays faster than any polynomial in the number of samples under fixed model size. For the high-dimensional scenario where the size of the model d and the number of edges k scale with the number of samples n, sufficient conditions on (n,d,k) are given for the algorithm to satisfy structural and risk consistencies. In addition, the extremal structures for learning are identified; we prove that the independent (resp. tree) model is the hardest (resp. easiest) to learn using the proposed algorithm in terms of error rates for structure learning.
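The pruning algorithm above starts from a Chow-Liu tree, i.e. a maximum-weight spanning tree over pairwise empirical mutual informations. A minimal sketch for discrete samples follows; the adaptive-thresholding step that prunes the tree into a forest is omitted, and the toy data are hypothetical.

```python
import math
import random
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def chow_liu_tree(samples):
    """Chow-Liu structure: maximum-weight spanning tree on pairwise MI,
    found with Kruskal's algorithm over all variable pairs."""
    d = len(samples[0])
    cols = [[row[k] for row in samples] for k in range(d)]
    weighted = sorted(
        ((mutual_information(cols[i], cols[j]), i, j)
         for i, j in combinations(range(d), 2)),
        reverse=True)
    parent = list(range(d))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for _, i, j in weighted:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data from a Markov chain x0 -> x1 -> x2: the recovered tree
# should link neighbouring variables only.
rng = random.Random(0)
samples = []
for _ in range(2000):
    x0 = rng.randint(0, 1)
    x1 = x0 if rng.random() < 0.9 else 1 - x0
    x2 = x1 if rng.random() < 0.9 else 1 - x1
    samples.append((x0, x1, x2))
edges = sorted(chow_liu_tree(samples))
print(edges)
```

By the data-processing inequality the indirect pair (x0, x2) has lower mutual information than either direct link, so it is correctly excluded.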
SUN Liuquan; ZHENG Zhongguo
1999-01-01
A central limit theorem for the integrated square error (ISE) of the kernel hazard rate estimators is obtained based on left-truncated and right-censored data. An asymptotic representation of the mean integrated square error (MISE) for the kernel hazard rate estimators is also presented.
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-01-01
Background: Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives: We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods: Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results: Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions: Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct-sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression of the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
Investigation of determinism in heart rate variability
Gomes, M. E. D.; Souza, A. V. P.; Guimarães, H. N.; Aguirre, L. A.
2000-06-01
The article searches for the possible presence of determinism in heart rate variability (HRV) signals by using a new approach based on NARMA (nonlinear autoregressive moving average) modeling and free-run prediction. Thirty-three 256-point HRV time series obtained from Wistar rats submitted to different autonomic blockade protocols are considered, and a collection of surrogate data sets are generated from each one of them. These surrogate sequences are assumed to be nondeterministic and therefore they may not be predictable. The original HRV time series and related surrogates are submitted to NARMA modeling and prediction. Special attention has been paid to the problem of stationarity. The results consistently show that the surrogate data sets cannot be predicted better than the trivial predictor—the mean—while most of the HRV control sequences are predictable to a certain degree. This suggests that the normal HRV signals have a deterministic signature. The HRV time series derived from the autonomic blockade segments of the experimental protocols do not show the same predictability performance, albeit the physiological interpretation is not obvious. These results have important implications to the methodology of HRV analysis, indicating that techniques from nonlinear dynamics and deterministic chaos may be applied to elicit more information about the autonomic modulation of the cardiovascular activity.
An overview of the sources of error in sound power determination using the intensity technique
Jacobsen, Finn
1997-01-01
An overview of the most important sources of error in sound power determination with the sound intensity technique is presented. It is concluded that the method is convenient, accurate and reliable provided that a few simple rules are observed. (C) 1997 Elsevier Science Ltd.
Examining rating quality in writing assessment: rater agreement, error, and accuracy.
Wind, Stefanie A; Engelhard, George
2012-01-01
The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.
Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J
2014-10-01
The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed prior to implementation of structured reports. Calculated error rates included the average number of errors per report, average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33 to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5 to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9 to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays cause changes in both the delay bias and random errors, with possibly strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using that size of window, at which the absolute values of these errors are equal and opposite, thus cancelling each other, allowing minimally biased measurement of neural coding.
Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise
Christensen, Lars P.B.
2005-01-01
Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined closer as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection can be achieved by a Multiple-Input Multiple-Output (MIMO) whitening filter followed by a traditional BCJR algorithm. The Gauss-Markov noise model provides a reasonable approximation for co-channel interference, making it an interesting single-user detector for many multiuser communication systems.
Mogull, Scott A
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
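The recalculated rate above is a proportion of quotation errors to quotations examined, reported with a 95% confidence interval. As an illustrative sketch only, a Wilson score interval for such a proportion can be computed; the counts below are hypothetical, and the review's own interval comes from pooling 15 studies with its own weighting, which this single-sample interval does not reproduce.

```python
import math

def wilson_ci(errors, total, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = errors / total
    denom = 1.0 + z * z / total
    center = (p + z * z / (2.0 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / total
                                   + z * z / (4.0 * total * total))
    return center - half, center + half

# Hypothetical counts: 29 quotation errors among 200 quotations examined (14.5%).
lo, hi = wilson_ci(29, 200)
print(round(lo, 3), round(hi, 3))
```

Unlike the simple Wald interval, the Wilson interval always stays inside (0, 1), which matters for small error proportions.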
31 CFR 359.14 - How are composite rates determined?
2010-07-01
... composite interest rates.): Composite rate = {(Fixed rate ÷ 2) + Semiannual inflation rate + [Semiannual inflation rate × (Fixed rate ÷ 2)]} × 2. Example for I bonds issued May 2002-October 2002: Fixed rate = 2.00%, Inflation rate = 0.28%, Composite rate = ...
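The composite-rate formula in the entry above can be checked numerically. This sketch assumes the standard 31 CFR 359.14 form, composite = {(fixed ÷ 2) + inflation + [inflation × (fixed ÷ 2)]} × 2; the function name and formatting are illustrative, not from the regulation.

```python
def i_bond_composite_rate(fixed_rate, semiannual_inflation_rate):
    """Series I savings bond composite rate, 31 CFR 359.14 form:
    composite = {(fixed/2) + inflation + [inflation * (fixed/2)]} * 2
    """
    half_fixed = fixed_rate / 2.0
    return (half_fixed
            + semiannual_inflation_rate
            + semiannual_inflation_rate * half_fixed) * 2.0

# The regulation's example inputs: fixed 2.00%, semiannual inflation 0.28%.
rate = i_bond_composite_rate(0.0200, 0.0028)
print(f"{rate:.4%}")  # → 2.5656%
```

The cross term (inflation × fixed/2) is small here but keeps the semiannual rates compounding correctly rather than merely adding.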
Determination of rate distributions from kinetic experiments.
Steinbach, P J; Chu, K.; Frauenfelder, H; Johnson, J B; Lamb, D C; Nienhaus, G. U.; Sauke, T B; Young, R. D.
1992-01-01
Rate processes in proteins are often not adequately described by simple exponential kinetics. Instead of modeling the kinetics in the time domain, it can be advantageous to perform a numerical inversion leading to a rate distribution function f(lambda). The features observed in f(lambda) (number, positions, and shapes of peaks) can then be interpreted. We discuss different numerical techniques for obtaining rate distribution functions, with special emphasis on the maximum entropy method. Examples are given for the application of these techniques to flash photolysis data of heme proteins.
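To illustrate the forward direction of the inversion described above: a distribution of rates produces visibly nonexponential decay. The two-rate discrete distribution below is hypothetical, not from the flash photolysis data.

```python
import math

def survival(t, rates, weights):
    """Survival fraction for a discrete rate distribution:
    N(t) = sum_i w_i * exp(-lambda_i * t), with the weights summing to 1.
    """
    return sum(w * math.exp(-lam * t) for lam, w in zip(rates, weights))

# Hypothetical two-peak rate distribution: equal weights at rates 1 and 10.
rates, weights = [1.0, 10.0], [0.5, 0.5]
mean_rate = sum(w * lam for lam, w in zip(rates, weights))  # 5.5

# Compare with a single exponential at the mean rate: the distributed
# kinetics decay much more slowly at long times.
for t in (0.1, 1.0, 3.0):
    print(t, survival(t, rates, weights), math.exp(-mean_rate * t))
```

Recovering f(lambda) from the measured N(t) is the ill-posed inverse problem the abstract addresses with maximum entropy regularization.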
The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.
Fadaee, Shannon B; Migliaccio, Americo A
2016-04-01
The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation.
Modified Golden Codes for Improved Error Rates Through Low Complex Sphere Decoder
K.Thilagam
2013-05-01
In recent years, the golden codes have proven to exhibit a superior performance in a wireless MIMO (Multiple Input Multiple Output) scenario than any other code. However, a serious limitation associated with them is their increased decoding complexity. This paper attempts to resolve this challenge through suitable modification of the golden code such that a less complex sphere decoder could be used without much compromising the error rates. In this paper, a minimum polynomial equation is introduced to obtain a reduced golden ratio (RGR) number for the golden code, which demands only a low-complexity decoding procedure. One of the attractive approaches used in this paper is that the effective channel matrix has been exploited to perform single symbol-wise decoding instead of grouped symbols, using a sphere decoder with a tree search algorithm. It has been observed that a low decoding complexity of O(q^1.5) is obtained against the conventional method's O(q^2.5). Simulation analysis envisages that, in addition to reduced decoding complexity, improved error rates are also obtained.
Threshold based Bit Error Rate Optimization in Four Wave Mixing Optical WDM Systems
Er. Karamjeet Kaur
2016-07-01
Optical communication is communication at a distance using light to carry information, which can be performed visually or by using electronic devices. The trend toward higher bit rates in lightwave communication has increased interest in dispersion-shifted fibre to reduce dispersion penalties. At the same time, optical amplifiers have increased interest in wavelength multiplexing. This paper describes optical communication systems and discusses different optical multiplexing schemes. The effect of channel power depletion due to the generation of Four Wave Mixing waves and the effect of FWM crosstalk on the performance of a WDM receiver have been studied in this paper. The main focus is to minimize the Bit Error Rate to increase the QoS of the optical WDM system.
Error rate performance of Hybrid QAM-FSK in OFDM systems exhibiting low PAPR
LATIF Asma; GOHAR Nasir D.
2009-01-01
Multicarrier transmission systems like orthogonal frequency division multiplexing (OFDM) support high data rates and generally require no equalization at the receiver, making them simple and efficient. This paper studies the design and performance analysis of a hybrid modulation system derived from multi-frequency and MQAM signals, employed in OFDM. This modulation scheme has better bit error rate (BER) performance and exhibits low PAPR. The proposed hybrid modulator reduces PAPR while keeping the OFDM transceiver design simple, as it requires no side information, or only a little side information (one bit), to be sent, and is efficient for an arbitrary number of subcarriers. The results of the implementations are compared with those of a conventional OFDM system.
Bit Error Rate Measurements on Prototype Digital Optical Links for the CMS Tracker
Azevedo, C S; Faccio, F; Gill, Karl; Grabit, Robert; Jensen, Fredrik Bjorn Henning; Vasey, François
2000-01-01
Two prototypes of a four-channel digital optical link to be used for the slow control of the CMS Tracker detector were tested for bit error rate, at transmission rates of 40 Mbit/s and 80 Mbit/s. Both prototypes used the same transmitter and PIN photodiode, but different receiver configurations: one used COTS electronics, whilst the other used a digital receiver ASIC developed at CERN in a 0.25 μm process. Both links proved to be well within the specification limits even after the ASIC receiver was irradiated to a 20 Mrad total dose, and the PIN photodiode to a 6.5×10^14 n/cm² fluence.
(no author listed)
2001-01-01
The partly linear regression model is useful in practice, but little is investigated in the literature to adapt it to real data which are dependent and conditionally heteroscedastic. In this paper, the estimators of the regression components are constructed via local polynomial fitting and the large sample properties are explored. Under certain mild regularities, conditions are obtained to ensure that the estimators of the nonparametric component and its derivatives are consistent up to the convergence rates which are optimal in the i.i.d. case, and the estimator of the parametric component is root-n consistent with the same rate as for the parametric model. The technique adopted in the proof differs from that used in the reference by Hamilton and Truong under i.i.d. samples, and corrects the errors therein.
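For the local polynomial fitting mentioned above, a minimal degree-1 (local linear) smoother with a Gaussian kernel can be sketched. This i.i.d., noiseless, hand-picked-bandwidth setting is a deliberate simplification of the paper's dependent, heteroscedastic one.

```python
import math

def local_linear(xs, ys, x0, bandwidth):
    """Degree-1 local polynomial (local linear) fit at x0 with a Gaussian
    kernel: solve the 2x2 weighted least-squares normal equations and
    return the local intercept, i.e. the fitted value at x0."""
    s0 = s1 = s2 = t0 = t1 = 0.0
    for x, y in zip(xs, ys):
        u = (x - x0) / bandwidth
        w = math.exp(-0.5 * u * u)   # Gaussian kernel weight
        d = x - x0
        s0 += w
        s1 += w * d
        s2 += w * d * d
        t0 += w * y
        t1 += w * d * y
    det = s0 * s2 - s1 * s1
    return (s2 * t0 - s1 * t1) / det

# Sanity check: recover sin(x) at an interior point from gridded data.
xs = [i * 0.01 for i in range(301)]  # grid on [0, 3]
ys = [math.sin(x) for x in xs]
val = local_linear(xs, ys, 1.0, 0.1)
print(round(val, 3))
```

Local linear fitting (rather than a plain kernel average) removes the first-order boundary and design bias, which is why it underlies the estimators in the entry above.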
Celik, Cihangir
Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space will double every two years. The most available memory architectures today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in the micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano
Fountain, Emily D; Pauli, Jonathan N; Reid, Brendan N; Palsbøll, Per J; Peery, M Zachariah
2016-07-01
Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown. Here, we estimated genotyping error rates in SNPs genotyped with double digest RAD sequencing from Mendelian incompatibilities in known mother-offspring dyads of Hoffman's two-toed sloth (Choloepus hoffmanni) across a range of coverage and sequence quality criteria, for both reference-aligned and de novo-assembled data sets. Genotyping error rates were more sensitive to coverage than sequence quality, and low coverage yielded high error rates, particularly in de novo-assembled data sets. For example, coverage ≥5 yielded median genotyping error rates of ≥0.03 and ≥0.11 in reference-aligned and de novo-assembled data sets, respectively. Genotyping error rates declined to ≤0.01 in reference-aligned data sets with a coverage ≥30, but remained ≥0.04 in the de novo-assembled data sets. We observed approximately 10- and 13-fold declines in the number of loci sampled in the reference-aligned and de novo-assembled data sets when coverage was increased from ≥5 to ≥30 at quality score ≥30, respectively. Finally, we assessed the effects of genotyping coverage on a common population genetic application, parentage assignment, and showed that the proportion of incorrectly assigned maternities was relatively high at low coverage. Overall, our results suggest that the trade-off between sample size and genotyping error rates should be considered before building sequencing libraries, that reporting genotyping error rates should become standard practice, and that the effects of genotyping errors on inference should be evaluated in restriction-enzyme-based SNP studies.
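The Mendelian-incompatibility idea underlying this error-rate estimate is easy to sketch. For biallelic SNPs coded as alt-allele counts, the only incompatibility detectable in a mother-offspring dyad is a homozygote facing the opposite homozygote. A minimal sketch (the genotypes are hypothetical; the paper's actual pipeline involves ddRAD calling and filtering):

```python
def mendelian_incompatibilities(mother, offspring):
    """Count biallelic loci where an offspring shares no allele with its mother.

    Genotypes are alt-allele counts: 0 (ref/ref), 1 (het), 2 (alt/alt);
    None marks a missing call. In a dyad, the only detectable incompatibility
    is opposite homozygotes (0 vs 2), so the resulting rate is a lower bound
    on the true genotyping error rate.
    """
    checked = incompatible = 0
    for m, o in zip(mother, offspring):
        if m is None or o is None:
            continue
        checked += 1
        if (m == 0 and o == 2) or (m == 2 and o == 0):
            incompatible += 1
    return incompatible, checked

# Hypothetical dyad over six loci
inc, n = mendelian_incompatibilities([0, 1, 2, 0, None, 2],
                                     [2, 0, 1, 0, 1, 0])
rate = inc / n   # per-locus incompatibility rate for this dyad
```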
Flament, O. [CEA Bruyeres-le-Chatel, DIF, 91 (France); Baggio, J. [CESTA - CEA Centre d' Etudes Scientifiques et Techniques d' Aquitaine, 33 - Le Barp (France)
2010-03-15
This paper describes the main features of the accelerated test procedures used to determine reliability data for microelectronic devices used in terrestrial environments. It focuses on high-energy particle tests that can be performed with a spallation neutron source or with quasi-mono-energetic neutrons or protons. Improvements to the standards are illustrated with respect to the state of the art of knowledge in radiation effects and the scaling down of microelectronics technologies. (authors)
Fix, MK; Volken, W; Frei, D; Terribilini, D; Dal Pra, A; Schmuecking, M; Manser, P [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern (Switzerland)
2014-06-15
Purpose: Treatment plan evaluations in radiotherapy currently ignore the dosimetric impact of setup uncertainties, and determining robustness against systematic errors is computationally intensive. This work investigates interpolation schemes for quantifying the robustness of treatment plans against systematic errors in terms of efficiency and accuracy. Methods: The impact of systematic errors on dose distributions for patient treatment plans is determined using the Swiss Monte Carlo Plan (SMCP). Errors in all translational directions are considered, ranging from −3 to +3 mm in 1 mm steps. For each systematic error a full MC dose calculation is performed, leading to 343 dose calculations that are used as benchmarks. The interpolation uses only a subset of the 343 calculations, namely 9, 15 or 27, and determines all dose distributions by trilinear interpolation. This procedure is applied to a prostate and a head and neck case using Volumetric Modulated Arc Therapy with 2 arcs. The relative differences of the dose volume histograms (DVHs) of the target and the organs at risk are compared. Finally, the interpolation schemes are used to compare the robustness of 4- versus 2-arc head and neck treatment plans. Results: Relative local differences of the DVHs increase as the number of dose calculations used in the interpolation decreases. The mean deviations are <1%, 3.5% and 6.5% for subsets of 27, 15 and 9 dose calculations, respectively, while the dose computation times are reduced by factors of 13, 25 and 43. The comparison of the 4- versus 2-arc plan shows a decrease in robustness; however, this is outweighed by the dosimetric improvements. Conclusion: The results of this study suggest that using trilinear interpolation to determine the robustness of treatment plans can remarkably reduce the number of dose calculations. This work was supported by Varian Medical Systems.
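The interpolation step can be sketched with SciPy's multilinear `RegularGridInterpolator` on a 3-point-per-axis subset (27 of the full 7×7×7 = 343 shift grid points), using a cheap analytic stand-in for the Monte Carlo dose metric (the quadratic surrogate is purely illustrative):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Analytic stand-in for the per-shift Monte Carlo dose metric (illustrative)
def dose_metric(x, y, z):
    return 60.0 - 0.1 * (x**2 + y**2 + z**2)

# 3 shift values per axis: a 27-point subset of the full 343-point grid
shifts = np.array([-3.0, 0.0, 3.0])     # mm
D = np.array([[[dose_metric(x, y, z) for z in shifts]
               for y in shifts]
              for x in shifts])

interp = RegularGridInterpolator((shifts, shifts, shifts), D)  # trilinear
approx = float(interp([[1.0, -2.0, 0.5]])[0])  # estimate at an off-grid shift
exact = dose_metric(1.0, -2.0, 0.5)            # what a dedicated run would give
```

The gap between `approx` and `exact` plays the role of the DVH deviations reported above: coarser subsets give cheaper but less accurate robustness estimates.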
The Determinants of Early Refractive Error on School-Going Chinese Children
K. Jayaraman
2016-04-01
Refractive error is a common issue in every walk of life, and its prevalence is recorded as highest among Chinese populations, particularly people living in southern China, Hong Kong, Thailand, Singapore, and Malaysia. Refractive error is one of the simplest disorders to treat and is considered a cost-effective health care intervention. The present study included 168 Chinese school-going children aged 10 to 12 years, selected from different schools in urban Malaysia. It was surprising to see that 112 (66.7%) children had early-onset refractive error; refractive error is otherwise typically detected later, in primary or secondary school. The findings revealed that the determinants of refractive error among Chinese children were personal achievement and machine dependence. A possible reason these factors emerged could be the culture and traditions of Chinese parents, who insist that their children be hardworking and focus on school subjects, and who in turn allow them to use luxury electronic devices.
Crowding and eccentricity determine reading rate.
Pelli, Denis G; Tillman, Katharine A; Freeman, Jeremy; Su, Michael; Berger, Tracey D; Majaj, Najib J
2007-10-26
Bouma's law of crowding predicts an uncrowded central window through which we can read and a crowded periphery through which we cannot. The old discovery that readers make several fixations per second, rather than a continuous sweep across the text, suggests that reading is limited by the number of letters that can be acquired in one fixation, without moving one's eyes. That "visual span" has been measured in various ways, but remains unexplained. Here we show (1) that the visual span is simply the number of characters that are not crowded and (2) that, at each vertical eccentricity, reading rate is proportional to the uncrowded span. We measure rapid serial visual presentation (RSVP) reading rate for text, in both original and scrambled word order, as a function of size and spacing at central and peripheral locations. As text size increases, reading rate rises abruptly from zero to maximum rate. This classic reading rate curve consists of a cliff and a plateau, characterized by two parameters, critical print size and maximum reading rate. Joining two ideas from the literature explains the whole curve. These ideas are Bouma's law of crowding and Legge's conjecture that reading rate is proportional to visual span. We show that Legge's visual span is the uncrowded span predicted by Bouma's law. This result joins Bouma and Legge to explain reading rate's dependence on letter size and spacing. Well-corrected fluent observers reading ordinary text with adequate light are limited by letter spacing (crowding), not size (acuity). More generally, it seems that this account holds true, independent of size, contrast, and luminance, provided only that text contrast is at least four times the threshold contrast for an isolated letter. For any given spacing, there is a central uncrowded span through which we read. This uncrowded span model explains the shape of the reading rate curve. We test the model in several ways. We use a "silent substitution" technique to measure the
Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N
2014-04-01
Verification of the strength of high dose rate (HDR) 192Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either the reference air-kerma rate (RAKR) or the air-kerma strength (AKS) is the recommended quantity for specifying the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm³ is one of the recommended methods for measuring the RAKR of HDR 192Ir brachytherapy sources. When using the cylindrical chamber method, the positioning error of the ionization chamber with respect to the source, called the distance error, must be determined. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error, and a simplified approach to applying this theory in quantifying that uncertainty is proposed. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and found to be within 2.5%, which indicates that the possible error in measuring such a distance may be of this order. The relative distance l_i estimated by the analytical method and by the fuzzy set theoretic approach are consistent with each other: the crisp values of l_i estimated using the analytical method lie within the bounds computed using fuzzy set theory, indicating that they carry an uncertainty within 2.5%. This uncertainty in distance measurement should be incorporated in the uncertainty budget when estimating the expanded uncertainty of an HDR 192Ir source strength measurement.
Ahmed, Qasim Zeeshan
2015-02-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error rate (SER) of the system. The SER surface is non-linear and may have multiple minima; designing an SER detector for cooperative communications therefore becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum, so particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with that of conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, the SER performance of the proposed detectors is less than 2 dB away from that of the ML detector, and a significant improvement in SER performance is observed over the MMSE detector. The computational complexity of the proposed detector is much lower than that of the ML and MMSE algorithms; moreover, in contrast to the ML and MMSE detectors, it increases linearly with the number of relays.
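A generic particle swarm optimizer of the kind exploited above can be sketched in a few lines; the multimodal Rastrigin function stands in for the non-linear SER surface (parameters, seed, and test function are illustrative, not the paper's detector):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=40, iters=300, lo=-5.0, hi=5.0,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (generic sketch, not the paper's detector)."""
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()        # global best
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

def rastrigin(z):
    """Multimodal surface with many local minima; global minimum 0 at the origin."""
    return 10.0 * len(z) + float(np.sum(z**2 - 10.0 * np.cos(2.0 * np.pi * z)))

best, best_f = pso_minimize(rastrigin, dim=2)
```

The point of using a population-based method here is exactly the one the abstract makes: gradient-free swarm search is far less likely than a local method to get trapped in one of the many local minima.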
Austin, Peter C
2009-01-01
... the statistical significance of the treatment effect. We conducted a series of Monte Carlo simulations to examine the impact of ignoring the matched nature of the propensity-score matched sample on Type I error rates, coverage of confidence...
Enzymatic spectrophotometric reaction rate determination of aspartame
Trifković Kata T.
2015-01-01
Aspartame is an artificial sweetener of low caloric value, approximately 200 times sweeter than sucrose. It is currently permitted for use in food and beverage production in more than 90 countries. The application of aspartame in food products requires the development of a rapid, inexpensive and accurate method for its determination. The new assay for the determination of aspartame is based on a set of reactions catalyzed by three different enzymes: α-chymotrypsin, alcohol oxidase and horseradish peroxidase. Optimization of the proposed method was carried out for: (i) α-chymotrypsin activity; (ii) the time allowed for α-chymotrypsin action; and (iii) temperature. The developed method was evaluated by determining the aspartame content in "diet" drinks as well as in artificial sweetener pills. [Project of the Ministry of Science of the Republic of Serbia, No. III46010]
Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying
Fareed, Muhammad Mehboob
2014-06-01
In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_{S_kD} + 1) + \sum_{m=1}^{M} min(L_{S_kR_m} + 1, L_{R_mD} + 1) is available, where M is the number of relays, and L_{S_kD} + 1, L_{S_kR_m} + 1, and L_{R_mD} + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.
Error-rate performance analysis of incremental decode-and-forward opportunistic relaying
Tourki, Kamel
2010-10-01
In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and deduce the diversity order. We show that the performance simulation results coincide with our analytical results. ©2010 IEEE.
Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function
Chen, Xiaogang; Gu, Jian; Yang, Hongkui
2007-01-01
The word error rate (WER) of soft-decision-decoded binary block codes rarely has a closed form. Bounding techniques are widely used to evaluate the performance of maximum-likelihood (ML) decoding, but the existing bounds are not tight enough, especially at low signal-to-noise ratios, and become looser when a suboptimum decoding algorithm is used. This paper proposes a new concept, the square radius probability density function (SR-PDF) of the decision region, to evaluate the WER. Based on the SR-PDF, the WER of binary block codes can be calculated precisely for ML and suboptimum decoders. Furthermore, for a long binary block code, the SR-PDF can be approximated by a Gamma distribution with only two parameters that can be measured easily. Using this property, two closed-form approximate expressions are proposed that are very close to the simulated WER of the codes of interest.
Channel Capacity and Bit Error Rate of a D-MIMO System under Spatial Variation of the Coverage Area
Nyoman Gunantara
2009-05-01
With advances in communication technology, the D-MIMO (distributed MIMO) system has been developed, following the earlier C-MIMO (conventional co-located MIMO) system. C-MIMO makes spectrum use more efficient, reduces transmit power, and increases channel capacity. With D-MIMO, the distance between transmitter and receiver can be shortened, macrodiversity is obtained, and a service coverage area is provided. This paper studies the channel capacity and bit error rate (BER) under spatial variation of the coverage area. The study considers theoretical channel capacity and BER with the waterfilling technique. The channel capacity and BER performance of a D-MIMO system under spatial variation of the coverage area depend on the D-MIMO configuration: receiver locations near a transmitting antenna port have larger channel capacity but worse BER performance.
Threshold-Based Bit Error Rate for Stopping Iterative Turbo Decoding in a Varying SNR Environment
Mohamad, Roslina; Harun, Harlisya; Mokhtar, Makhfudzah; Adnan, Wan Azizun Wan; Dimyati, Kaharudin
2017-01-01
Online bit error rate (BER) estimation (OBE) has been used as a stopping criterion for iterative turbo decoding. However, the stopping criteria only work at high signal-to-noise ratios (SNRs) and fail to terminate early at low SNRs, which adds iterations and increases computational complexity. The failure of the stopping criteria is caused by an unsuitable BER threshold, obtained by estimating the expected BER performance at high SNRs; this threshold does not indicate the correct termination point for convergence and non-convergence outputs (CNCO). Hence, in this paper, a threshold computation based on the BER of the CNCO is proposed for an OBE stopping criterion (OBEsc). The results show that OBEsc is capable of terminating early in a varying SNR environment. The optimum number of iterations achieved by the OBEsc allows large savings in the number of decoding iterations and decreases the delay of iterative turbo decoding.
SITE project. Phase 1: Continuous data bit-error-rate testing
Fujikawa, Gene; Kerczewski, Robert J.
1992-01-01
The Systems Integration, Test, and Evaluation (SITE) Project at NASA LeRC encompasses a number of research and technology areas of satellite communications systems. Phase 1 of this project established a complete satellite link simulator system. The evaluation of proof-of-concept microwave devices, radiofrequency (RF) and bit-error-rate (BER) testing of hardware, testing of remote airlinks, and other tests were performed as part of this first testing phase. This final report covers the test results produced in phase 1 of the SITE Project. The data presented include 20-GHz high-power-amplifier testing, 30-GHz low-noise-receiver testing, amplitude equalization, transponder baseline testing, switch matrix tests, and continuous-wave and modulated interference tests. The report also presents the methods used to measure the RF and BER performance of the complete system. Correlations of the RF and BER data are summarized to note the effects of the RF responses on the BER.
The New Tapered Fiber Connector and the Test of Its Error Rate and Coupling Characteristics
Qinggui Hu
2017-01-01
Since the fiber core is very small, communication fiber connectors require high precision. In this paper, the effect of lateral deviation on the coupling efficiency of a fiber connector is analyzed. Then, considering that optical fiber is generally used in pairs, one for transmitting data and the other for receiving, a novel directional tapered communication optical fiber connector is designed. In the new connector, the structure of the fiber head is tapered according to the signal transmission direction. To study the performance of the new connector, several samples were made in the laboratory of the corporation CDSEI and two testing experiments were performed. The experimental results show that, compared with the traditional connector, for the same lateral deviation the coupling efficiency of the tapered connector is higher and the error rate is lower.
Tanaka, Ken'ichiro; Murashige, Sunao
2012-01-01
We present the convergence rates and explicit error bounds of Hill's method, a numerical method for computing the spectra of ordinary differential operators with periodic coefficients. The method approximates the operator by a finite-dimensional matrix. On the assumption that the operator is self-adjoint, it is shown that, under some conditions, we can obtain the convergence rates of the eigenvalues with respect to the dimension and the explicit error bounds. Numerical examples demonstrate...
Music determines heart rate variability of singers
Björn eVickhoff
2013-07-01
Choir singing is known to promote wellbeing. One reason for this may be that singing demands a slower than normal respiration, which may in turn affect heart activity. Coupling of heart rate variability (HRV) to respiration is called respiratory sinus arrhythmia (RSA). This coupling has a subjective as well as a biologically soothing effect, and it is beneficial for cardiovascular function. RSA is more marked during slow-paced breathing and at lower respiration rates (0.1 Hz and below). In this study, we investigate how singing, which is a form of guided breathing, affects HRV and RSA. The study comprises a group of healthy 18-year-olds of mixed gender. The subjects are asked to: (1) hum a single tone and breathe whenever they need to; (2) sing a hymn with free, unguided breathing; and (3) sing a slow mantra and breathe solely between phrases. Heart rate (HR) is measured continuously during the study. The study design makes it possible to compare the three levels of song structure above. In a separate case study, we examine five individuals performing singing tasks (1)-(3), collecting data with more advanced equipment that simultaneously records HR, respiration, skin conductance and finger temperature. We show how song structure, respiration and heart rate are connected. Unison singing of regular song structures makes the hearts of the singers accelerate and decelerate simultaneously. Implications concerning the effect on wellbeing and health are discussed, as well as the question of how this inner entrainment may affect perception and behavior.
Posttranscriptional expression regulation: what determines translation rates?
Regina Brockmann
2007-03-01
Recent analyses indicate that differences in protein concentrations are only 20%-40% attributable to variable mRNA levels, underlining the importance of posttranscriptional regulation. Generally, protein concentrations depend on the translation rate (which is proportional to the translational activity, TA) and the degradation rate. By integrating 12 publicly available large-scale datasets and additional database information for the yeast Saccharomyces cerevisiae, we systematically analyzed five factors contributing to TA: mRNA concentration, ribosome density, ribosome occupancy, the codon adaptation index, and a newly developed "tRNA adaptation index." Our analysis of the functional relationship between TA and measured protein concentrations suggests that TA follows Michaelis-Menten kinetics. The calculated TA, together with measured protein concentrations, allowed us to estimate degradation rates for 4,125 proteins under standard conditions. A significant correlation with recently published degradation rates supports our approach. Moreover, based on a newly developed scoring system, we identified and analyzed genes subject to the posttranscriptional regulation mechanism "translation on demand." We then applied these findings to publicly available data on protein and mRNA concentrations under four stress conditions. Integrating these measurements allowed us to compare the condition-specific responses at the posttranscriptional level. Our analysis of all 62 proteins measured under all four conditions revealed proteins with very specific posttranscriptional stress responses, in contrast to more generic responders, which were nonspecifically regulated under several conditions. The concept of specific and generic responders is known for transcriptional regulation; here we show that it also holds at the posttranscriptional level.
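The degradation-rate estimate can be sketched under a steady-state assumption: if protein levels are constant, dP/dt = TA − k_deg·P = 0, so k_deg = TA/P (the numbers below are illustrative, not from the datasets used in the paper):

```python
from math import log

# Steady-state assumption: dP/dt = TA - k_deg * P = 0  =>  k_deg = TA / P
ta = 12.0                    # translational activity, molecules/min (illustrative)
p = 4800.0                   # measured protein abundance, molecules/cell (illustrative)
k_deg = ta / p               # degradation rate constant, 1/min
half_life = log(2) / k_deg   # protein half-life in minutes
```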
A new ambiguity acceptance test threshold determination method with controllable failure rate
Wang, Lei; Verhagen, Sandra
2015-04-01
The ambiguity acceptance test is an important quality control procedure in high precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, their threshold determination is still not well understood. Currently, the threshold is determined with either an empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently in a reasonable way. In this study, a new threshold determination method named the threshold function method is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach by a modeling procedure and an approximation procedure. The modeling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modeling and approximation errors are analysed with simulation data to avoid nuisance biases and unrealistic stochastic model effects. The results indicate that the proposed method can greatly simplify the FF-approach without introducing significant modeling error. The threshold function method makes fixed failure rate threshold determination feasible for real-time applications.
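The easy-to-calculate IB success rate used in the approximation procedure has the standard closed form P_IB = ∏_i [2Φ(1/(2σ_i)) − 1], where the σ_i are the conditional standard deviations of the (ideally decorrelated) ambiguities. A minimal sketch (the σ values are illustrative):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ib_success_rate(cond_stds):
    """Integer bootstrapping (IB) success rate:
        P_IB = prod_i [ 2 * Phi(1 / (2 * sigma_i)) - 1 ]
    where sigma_i are the conditional std devs of the decorrelated ambiguities.
    """
    p = 1.0
    for sigma in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * sigma)) - 1.0
    return p

# Illustrative conditional standard deviations, in cycles
p_ib = ib_success_rate([0.10, 0.12, 0.15])
```

Smaller conditional standard deviations drive P_IB toward 1, which is why this quantity is a convenient, cheap stand-in for the ILS success rate in the threshold function.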
Lu, Zudi
2001-01-01
A. G. Obolenskov
2016-09-01
Subject of Research. The paper presents a theoretical and experimental analysis of how the error in determining a modulated optical signal under intense background illumination depends on the mutual shift of two current-voltage characteristics when a double synthesized aperture is used on a multiscan position-sensitive detector. Method. The studies were carried out on a specially designed setup that allows scanning the photosensitive area of the multiscan position-sensitive detector with an optical beam imitating intense solar illumination. The position error in determining the coordinate of a weak modulated optical signal is measured at different relative positions of the signal and background illumination and at different background powers. Main Results. The experimental studies confirmed the theoretical conclusions. It is shown that the use of a double synthesized aperture of a multiscan position-sensitive detector, with a voltage shift of the current-voltage characteristics equal to 0.4 V, reduces the position determination error of a weak modulated signal by an order of magnitude. Practical Relevance. The results open the opportunity to increase the accuracy of position-sensitive systems operating under background illumination exceeding the level of the information optical signal.
Fatemeh Vizeshfar
2015-06-01
Medication errors have serious consequences for patients, their families and caregivers. Reducing these errors by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and medical wards. This cross-sectional analytic study was done on 101 registered nurses who had the duty of drug administration in medical pediatric and adult wards. Data were collected by a questionnaire including demographic information, self-reported errors, etiology of medication error, and researcher observations. The results showed that the nurses' error rate in pediatric wards was 51.6% and in adult wards 47.4%. The most common errors in adult wards were administering drugs later or sooner than scheduled (48.6%), while administering drugs without a prescription and administering the wrong drug were the most common medication errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low, and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain the reason for and type of the drug they were administering to patients. An independent t-test showed a significant difference in observed errors in pediatric wards (p = 0.000) and in adult wards (p = 0.000). Several studies have shown medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Influence of measurement errors on temperature-based death time determination.
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2011-07-01
Temperature-based methods represent essential tools in forensic death time determination. Empirical double exponential models have gained wide acceptance because they are highly flexible and simple to handle. The most established model commonly used in forensic practice was developed by Henssge. It contains three independent variables: the body mass, the environmental temperature, and the initial body core temperature. The present study investigates the influence of variations in the input data (environmental temperature, initial body core temperature, core temperature, time) on the standard deviation of the model-based estimates of the time since death. Two different approaches were used for calculating the standard deviation: the law of error propagation and the Monte Carlo method. Errors in environmental temperature measurements as well as deviations of the initial rectal temperature were identified as major sources of inaccuracies in model-based death time estimation.
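The Monte Carlo approach can be sketched with Henssge's double-exponential model itself. The cooling constants below are the published values for ambient temperatures up to about 23 °C, but the measurement standard deviations are assumptions chosen for illustration:

```python
import numpy as np

# Monte Carlo propagation of measurement errors through Henssge's
# double-exponential cooling model (illustrative sketch).

def henssge_q(t, body_mass):
    """Normalized temperature drop Q after t hours for a body of given mass (kg)."""
    b = -1.2815 * body_mass ** -0.625 + 0.0284
    return 1.25 * np.exp(b * t) - 0.25 * np.exp(5.0 * b * t)

def solve_time(t_rect, t_amb, t0, body_mass, t_max=60.0):
    """Invert the model for time since death by bisection (Q decreases with time)."""
    q_target = (t_rect - t_amb) / (t0 - t_amb)
    lo, hi = 0.0, t_max
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if henssge_q(mid, body_mass) > q_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
n = 5000
mass = 70.0
# Measured inputs with assumed (hypothetical) standard deviations.
t_amb = rng.normal(18.0, 1.0, n)    # environmental temperature, deg C
t0 = rng.normal(37.2, 0.5, n)       # initial core temperature, deg C
t_rect = rng.normal(27.6, 0.2, n)   # measured rectal temperature, deg C

times = np.array([solve_time(tr, ta, t_0, mass)
                  for tr, ta, t_0 in zip(t_rect, t_amb, t0)])
print(f"estimated time since death: {times.mean():.1f} +/- {times.std():.1f} h")
```

With these (assumed) input uncertainties, the spread of the output distribution is dominated by the ambient temperature term, mirroring the study's finding that environmental temperature errors are a major source of inaccuracy.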
Krishnan, Prabu; Sriram Kumar, D.
2014-12-01
Free-space optical (FSO) communication is emerging as an attractive alternative for overcoming connectivity bottlenecks. It can be used for transmitting signals over common lands and properties that the sender or receiver may not own. The performance of an FSO system depends on random environmental conditions. The bit error rate (BER) performance of a differential phase shift keying FSO system is investigated. A distributed strong atmospheric turbulence channel with pointing error is considered for the BER analysis. Here, system models are developed for single-input single-output (SISO-FSO) and single-input multiple-output (SIMO-FSO) systems. Closed-form mathematical expressions are derived for the average BER with various combining schemes in terms of Meijer's G function.
Casey P Durand
INTRODUCTION: Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low-power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. METHODS: A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. RESULTS: In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. CONCLUSIONS: Routinely elevating the Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
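The kind of simulation described can be sketched in a few lines. The settings below (n = 200, a continuous-by-continuous interaction, a large-sample critical value of 1.96) are our own, not those of the study:

```python
import numpy as np

# Monte Carlo estimate of the Type 1 error rate for an OLS interaction test
# when the true model contains no interaction (sketch with assumed settings).

rng = np.random.default_rng(42)
n, n_sims, crit = 200, 2000, 1.96   # ~5% two-sided critical value (large n)
rejections = 0
for _ in range(n_sims):
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)   # no interaction
    X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_stat = beta[3] / np.sqrt(cov[3, 3])   # test the interaction coefficient
    rejections += abs(t_stat) > crit
type1_rate = rejections / n_sims
print(f"empirical Type 1 error rate at alpha = 0.05: {type1_rate:.3f}")
```

Raising α in this setup simply moves the rejection threshold, so under the null the rate of spurious interactions grows in direct proportion, which is the trade-off the study quantifies.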
Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter
Wirthlin, M. J.; Takai, H.; Harding, A.
2014-01-01
This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the BRAM memory will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory.
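The per-device figures follow from the per-bit rates by simple scaling over the number of susceptible bits. In the sketch below the bit counts are inferred from the quoted rates rather than taken from the paper:

```python
# Back-of-the-envelope check of the per-device rates quoted above: a
# per-device upset rate is the per-bit rate times the number of bits.

cfg_rate_per_bit = 1.1e-10    # configuration memory, upsets/bit/s
bram_rate_per_bit = 9.06e-11  # block RAM, upsets/bit/s
cfg_rate_per_dev = 6.85e-3    # upsets/device/s (7K325)
bram_rate_per_dev = 1.49e-3   # upsets/device/s (7K325)

# Implied numbers of susceptible bits in the 7K325 device (inferred, not quoted):
n_cfg_bits = cfg_rate_per_dev / cfg_rate_per_bit     # ~6.2e7 bits
n_bram_bits = bram_rate_per_dev / bram_rate_per_bit  # ~1.6e7 bits

# Expected configuration upsets per device per day in this environment:
cfg_upsets_per_day = cfg_rate_per_dev * 86400.0
print(f"config bits ~{n_cfg_bits:.2e}, bram bits ~{n_bram_bits:.2e}")
print(f"~{cfg_upsets_per_day:.0f} configuration upsets/device/day")
```

The implied BRAM bit count of roughly 1.6 × 10⁷ is consistent with the block RAM capacity of the 7K325 part, which suggests the quoted per-device numbers are straightforward per-bit scalings.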
IMPROVING THE PERFORMANCE AND REDUCING BIT ERROR RATE ON WIRELESS DEEP FADING ENVIRONMENT RECEIVERS
K. Jayanthi
2014-01-01
One of the major challenges in wireless communication systems is the growing complexity, and degraded performance, of detecting received digital information in indoor and outdoor environments. To address this problem, we analyze the delay performance of a multiuser system with perfect channel state information transmitting data in a deep fading environment. In the proposed system, a Wireless Deep Fading Environment (WDFE) realizing a Nakagami multipath fading channel with fading figure 'm' is used to improve the delay performance over the existing Rayleigh fading channel. In this WDFE, receivers obtain coherent, synchronized and secure information with improved signal strength using Multiuser Coherent Joint Diversity (MCJD) with Multi-Carrier Code Division Multiple Access (MC-CDMA). MCJD over 'M' antenna branches is used to reduce the Bit Error Rate (BER), and MC-CDMA is used to improve throughput. The combination of MCJD and MC-CDMA therefore makes a strong transceiver candidate for next-generation wireless systems beyond the existing 3G systems. Overall, the experimental results show improved performance for different multiuser wireless systems under different multipath fading conditions.
Masud, M A; Rahman, M A
2010-01-01
At the beginning of the 21st century there was a dramatic shift in the market dynamics of telecommunication services. Transmission from base station to mobile, or downlink transmission, using M-ary Quadrature Amplitude Modulation (QAM) and Quadrature Phase Shift Keying (QPSK) modulation schemes is considered in the Wideband Code Division Multiple Access (W-CDMA) system. We analyze the performance of these modulation techniques when the system is subjected to Additive White Gaussian Noise (AWGN) and multipath Rayleigh fading in the channel. The research was performed using MATLAB 7.6 to simulate and evaluate the Bit Error Rate (BER) and Signal-to-Noise Ratio (SNR) for W-CDMA system models. The analysis of QPSK and 16-QAM as used in the W-CDMA system shows that the system could adopt the modulation technique best suited to the channel quality, thus we can d...
Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit; Fitzpatrick, J. Michael
2007-03-01
In cochlear implant surgery an electrode array is permanently implanted to stimulate the auditory nerve and allow deaf people to hear. Current surgical techniques require wide excavation of the mastoid region of the temporal bone and one to three hours time to avoid damage to vital structures. Recently a far less invasive approach has been proposed: percutaneous cochlear access, in which a single hole is drilled from the skull surface to the cochlea. The drill path is determined by attaching a fiducial system to the patient's skull and then choosing, on a pre-operative CT, an entry point and a target point. The drill is advanced to the target, the electrodes placed through the hole, and a stimulator implanted at the surface of the skull. The major challenge is the determination of a safe and effective drill path, which with high probability avoids specific vital structures (the facial nerve, the ossicles, and the external ear canal) and arrives at the basal turn of the cochlea. These four features lie within a few millimeters of each other, the drill is one millimeter in diameter, and errors in the determination of the target position are on the order of 0.5 mm root-mean-square. Thus, path selection is both difficult and critical to the success of the surgery. This paper presents a method for finding optimally safe and effective paths while accounting for target positioning error.
Monetary models and exchange rate determination: The Nigerian ...
Monetary models and exchange rate determination: The Nigerian evidence. ... income levels and real interest rate differentials provide better forecasts of the naira-US dollar ... in this regard is that monetary policy should be positively predicted.
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
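The one-dimensional case is easy to verify numerically: for BPSK, the SER is Q(√(2·SNR)) = ½·erfc(√SNR), and its second finite difference in SNR is non-negative everywhere, consistent with the convexity result above (a quick numerical check, not the paper's proof):

```python
import math

# Numerical convexity check for a one-dimensional constellation: the BPSK
# symbol error rate Q(sqrt(2*snr)) should be convex in SNR.

def ser_bpsk(snr):
    # Q(sqrt(2*snr)) = 0.5 * erfc(sqrt(snr))
    return 0.5 * math.erfc(math.sqrt(snr))

h = 1e-3
snrs = [0.1 + 0.1 * k for k in range(100)]   # linear SNR from 0.1 to ~10
second_diffs = [ser_bpsk(s + h) - 2 * ser_bpsk(s) + ser_bpsk(s - h)
                for s in snrs]
assert all(d >= 0 for d in second_diffs)     # convex everywhere on this grid
print("BPSK SER is convex in SNR on the tested range")
```

Convexity of the SER in SNR is what makes the power-allocation and power/time-sharing applications mentioned above tractable as convex programs.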
THE REAL EXCHANGE RATE DETERMINATION: EMPIRICAL EVIDENCE FROM MALAYSIA
WONG HOCK TSEN
2014-01-01
This study examines the real exchange rate determination in Malaysia. The result of the autoregressive distributed lag approach shows that an increase in the real interest rate differential, productivity differential, the real oil price or reserve differential will lead to an appreciation of the real exchange rate in the long run. The real oil price and reserve differential are important in the real exchange rate determination. The dynamic ordinary least squares (DOLS) estimator shows about t...
Tamaki, Hirofumi; Satoh, Hiroki; Hori, Satoko; Sawada, Yasufumi
2012-01-01
Confusion of drug names is one of the most common causes of drug-related medical errors. A similarity measure of drug names, "vwhtfrag", was developed to discriminate whether drug name pairs are likely to cause confusion errors, and to provide information that would be helpful to avoid errors. The aim of the present study was to evaluate and improve vwhtfrag. Firstly, we evaluated the correlation of vwhtfrag with subjective similarity or error rate of drug name pairs in psychological experiments. Vwhtfrag showed a higher correlation to subjective similarity (college students: r=0.84) or error rate than did other conventional similarity measures (htco, cos1, edit). Moreover, name pairs that showed coincidences of the initial character strings had a higher subjective similarity than those which had coincidences of the end character strings and had the same vwhtfrag. Therefore, we developed a new similarity measure (vwhtfrag+), in which coincidence of initial character strings in name pairs is weighted by 1.53 times over coincidence of end character strings. Vwhtfrag+ showed a higher correlation to subjective similarity than did unmodified vwhtfrag. Further studies appear warranted to examine in detail whether vwhtfrag+ has superior ability to discriminate drug name pairs likely to cause confusion errors.
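The initial-versus-end weighting can be illustrated with a toy measure. The function below is a hypothetical stand-in, not the actual vwhtfrag+ definition (which is based on character-fragment statistics); only the 1.53 head weight is taken from the text:

```python
# Toy position-weighted name similarity: initial-character matches are
# weighted 1.53x over end-character matches, illustrating the vwhtfrag+
# idea. This simple function is our illustration, NOT the real measure.

def weighted_prefix_suffix_similarity(a, b, head_weight=1.53):
    a, b = a.lower(), b.lower()
    n = min(len(a), len(b))
    prefix = 0
    while prefix < n and a[prefix] == b[prefix]:
        prefix += 1
    suffix = 0
    while suffix < n - prefix and a[-1 - suffix] == b[-1 - suffix]:
        suffix += 1
    # Weight the shared initial string more than the shared end string.
    return (head_weight * prefix + suffix) / (head_weight * max(len(a), len(b)))

# A shared prefix scores higher than an equally long shared suffix:
s_head = weighted_prefix_suffix_similarity("abcdXYZ", "abcdQRS")  # prefix match
s_tail = weighted_prefix_suffix_similarity("XYZdcba", "QRSdcba")  # suffix match
print(s_head, s_tail)
```

This reproduces the qualitative finding: of two name pairs with the same amount of shared text, the pair matching at the start is judged more similar.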
Tanaka, Ken'ichiro
2012-01-01
We present the convergence rates and the explicit error bounds of Hill's method, which is a numerical method for computing the spectra of ordinary differential operators with periodic coefficients. This method approximates the operator by a finite dimensional matrix. On the assumption that the operator is selfadjoint, it is shown that, under some conditions, we can obtain the convergence rates of eigenvalues with respect to the dimension and the explicit error bounds. Numerical examples demonstrate that we can verify these conditions using Gershgorin's theorem for some real problems. Main theorems are proved using the Dunford integrals which project an eigenvector to the corresponding eigenspace.
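A minimal instance of Hill's method can be shown for the Mathieu operator −d²/dx² + 2cos(2x) (our own demo, not one of the paper's examples): truncating the Fourier representation gives a finite Hermitian matrix whose lowest eigenvalue converges rapidly to the Mathieu characteristic value a₀(q = 1) ≈ −0.4551:

```python
import numpy as np

# Sketch of Hill's method for L = -d^2/dx^2 + 2*cos(2x) on 2*pi-periodic
# functions: truncate the Fourier basis e^{ikx}, |k| <= N, to a matrix.

def hill_matrix(N):
    ks = np.arange(-N, N + 1)
    H = np.diag(ks.astype(float) ** 2)      # -d^2/dx^2 acts as k^2 on e^{ikx}
    for i in range(len(ks) - 2):
        H[i, i + 2] = 1.0                   # 2*cos(2x) couples k and k +/- 2
        H[i + 2, i] = 1.0
    return H

lam_small = np.sort(np.linalg.eigvalsh(hill_matrix(8)))[0]
lam_large = np.sort(np.linalg.eigvalsh(hill_matrix(32)))[0]
# The lowest eigenvalue converges rapidly as the truncation dimension grows.
print(lam_small, lam_large)
```

Because the operator here is self-adjoint and the potential's Fourier coefficients decay fast, the eigenvalue error shrinks rapidly with the truncation dimension, which is the kind of explicit convergence behavior the paper bounds.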
Step angles to reduce the north-finding error caused by rate random walk with fiber optic gyroscope.
Wang, Qin; Xie, Jun; Yang, Chuanchuan; He, Changhong; Wang, Xinyue; Wang, Ziyu
2015-10-20
We study the relationship between the step angles and the accuracy of north finding with fiber optic gyroscopes. A north-finding method with optimized step angles is proposed to reduce the errors caused by rate random walk (RRW). Based on this method, the errors caused by both angle random walk and RRW are reduced by increasing the number of positions. When the number of positions is even, we propose a north-finding method with symmetric step angles that can reduce the error caused by RRW and is not affected by the azimuth angle. Experimental results show that, compared with the traditional north-finding method, the proposed methods with optimized step angles and symmetric step angles reduce the north-finding errors by 67.5% and 62.5%, respectively. The method with symmetric step angles is not affected by the azimuth angle and offers consistently high accuracy for any azimuth.
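The bias-cancelling effect of symmetric positions can be sketched with a simple noise-free gyrocompassing model. The four-position scheme below is a textbook arrangement, not necessarily the exact scheme of the paper:

```python
import math

# Multi-position gyrocompassing sketch: at heading offset theta the gyro
# senses A*cos(psi + theta) + bias, with A = Omega_E * cos(latitude).
# Differencing opposite positions cancels the constant bias.

OMEGA_E = 15.041 / 3600.0           # Earth rotation rate, deg/s
lat = math.radians(52.0)            # assumed latitude
A = OMEGA_E * math.cos(lat)
bias = 0.002                        # constant gyro bias, deg/s (assumed)
psi_true = math.radians(37.0)       # azimuth to recover (assumed)

def gyro_rate(theta):
    return A * math.cos(psi_true + theta) + bias

w0 = gyro_rate(0.0)
w90 = gyro_rate(math.pi / 2)
w180 = gyro_rate(math.pi)
w270 = gyro_rate(3 * math.pi / 2)

# w0 - w180 = 2A*cos(psi), w270 - w90 = 2A*sin(psi); the bias drops out.
psi_est = math.atan2(w270 - w90, w0 - w180)
print(f"estimated azimuth: {math.degrees(psi_est):.3f} deg")
```

In the real instrument the bias is not constant but drifts (angle random walk and RRW), which is why the choice and symmetry of the step angles matters.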
Spatio-temporal filtering for determination of common mode error in regional GNSS networks
Bogusz, Janusz; Gruszczynski, Maciej; Figurski, Mariusz; Klos, Anna
2015-04-01
The spatial correlation between different stations for individual components in regional GNSS networks seems to be significant. Mismodelling in satellite orbits, the Earth orientation parameters (EOP), large-scale atmospheric effects or satellite antenna phase centre corrections can all cause regionally correlated errors. Errors of this kind in GPS time series are referred to as common mode errors (CMEs). They are usually estimated with regional spatial filtering, such as "stacking". In this paper, we show the stacking approach for the set of ASG-EUPOS permanent stations, assuming that the spatial distribution of the CME is uniform over the whole region of Poland (more than 600 km in extent). The ASG-EUPOS is a multifunctional precise positioning system based on the reference network designed for Poland. We used a 5-year span (2008-2012) of daily solutions in the ITRF2008 from Bernese 5.0 processed by the Military University of Technology EPN Local Analysis Centre (MUT LAC). At the beginning of our analyses of spatial dependencies, the correlation coefficients between each pair of stations in the GNSS network were calculated. This analysis shows that the spatio-temporal behaviour of the GPS-derived time series is not purely random; there is an evident uniform spatial response. In order to quantify the influence of filtering using the CME, the L1 and L2 norms were determined. The values of these norms were calculated for the North, East and Up components twice: before filtering and after stacking. The observed reduction of the L1 and L2 norms was up to 30%, depending on the dimension of the network. However, the question of how to define an optimal size for the CME-analysed subnetwork remains unanswered in this research, because our network is not extended enough.
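The stacking filter itself is a one-line operation. A synthetic sketch (our own settings, not ASG-EUPOS data) shows how subtracting the epoch-wise mean over stations reduces the residual norm:

```python
import numpy as np

# Minimal "stacking" spatial filter: estimate the common mode error at each
# epoch as the mean residual over all stations, then subtract it everywhere.

rng = np.random.default_rng(1)
n_sta, n_epochs = 20, 500
cme = 3.0 * np.sin(2 * np.pi * np.arange(n_epochs) / 365.25)  # common signal, mm
series = cme + rng.normal(0.0, 1.0, size=(n_sta, n_epochs))   # station residuals

cme_est = series.mean(axis=0)           # stack: epoch-wise mean over stations
filtered = series - cme_est             # remove the CME from every station

l2_before = np.linalg.norm(series)
l2_after = np.linalg.norm(filtered)
print(f"L2 norm reduction: {100 * (1 - l2_after / l2_before):.1f}%")
```

The uniform-CME assumption made in the paper corresponds to the unweighted mean used here; a non-uniform spatial response would require a weighted stack or a subnetwork choice, which is the open question the authors raise.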
Determining The Factors Causing Human Error Deficiencies At A Public Utility Company
F. W. Badenhorst
2004-11-01
According to Neff (1977), as cited by Bergh (1995), Westernised culture considers work important for industrial mental health. Most individuals experience work positively, which creates a positive attitude. Should this positive attitude be inhibited, workers could lose concentration and become bored, potentially resulting in some form of human error. The aim of this research was to determine the factors responsible for human error events which lead to power supply failures at Eskom power stations. Proposals were made for the reduction of these contributing factors towards improving plant performance. The target population was 700 panel operators in Eskom's Power Generation Group. The results showed that factors leading to human error can be reduced or even eliminated.
Kim, Jihye
2010-01-01
In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present, and a high probability of making such an error can weaken the validity of the assessment.…
Inferring the determinants of protein evolutionary rates in mammals.
Zou, Yang; Shao, Xiaojian; Dong, Dong
2016-06-15
Understanding the determinants of protein evolutionary rates is one of the most fundamental evolutionary questions. Previous studies have revealed that many biological variables are tightly associated with protein evolutionary rates in mammals. However, the dominant roles of these biological variables and their combined effects on the evolutionary rates of mammalian proteins remain less well understood. In this work, we derived a quantitative model to correlate protein evolutionary rates with the levels of these variables. The result showed that only a small number of variables are necessary to accurately predict protein evolutionary rates, among which miRNA regulation plays the most important role. Our results suggest that biological variables are extensively interrelated and subject to hidden redundancies in determining protein evolutionary rates. Various variables should be considered as a natural ensemble to comprehensively assess the determinants of protein evolutionary rate.
Determination of Royalty Rates in the International Technology Transfer Contracts
Kapitsa, Yu.; Aralova, N.
2015-01-01
The existing approaches used in determination of the royalty rates for technology transfer contracts and based on the experience of research institutions of the National Academy of Sciences of Ukraine, research organizations and universities in Europe and USA were reviewed. The analysis of the existing rates has been made as well as recommendations on determination of the royalty rates for technology transfer contracts between research institutions and foreign and domestic partners have been ...
Determination of Royalty Rates in the International Technology Transfer Contracts
Kapitsa, Yu.
2015-03-01
The existing approaches used in determination of the royalty rates for technology transfer contracts and based on the experience of research institutions of the National Academy of Sciences of Ukraine, research organizations and universities in Europe and USA were reviewed. The analysis of the existing rates has been made as well as recommendations on determination of the royalty rates for technology transfer contracts between research institutions and foreign and domestic partners have been worked out.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Fountain, Emily D.; Pauli, Jonathan N.; Reid, Brendan N.; Palsboll, Per J.; Peery, M. Zachariah
2016-01-01
Restriction-enzyme-based sequencing methods enable the genotyping of thousands of single nucleotide polymorphism (SNP) loci in nonmodel organisms. However, in contrast to traditional genetic markers, genotyping error rates in SNPs derived from restriction-enzyme-based methods remain largely unknown.
The Determinants of Exchange Rate Regimes in Emerging Market Economies
Mehmet Guclu
2008-01-01
The choice of exchange rate regime has once again become one of the most important issues in many economies after the financial crises of recent years. In the wake of these crises, many countries, especially emerging market economies, opted for floating exchange rate regimes, forsaking pegged regimes. Consequently, an old debate on the choice and determinants of exchange rate regimes has been reopened. Economists have started to debate what appropriate exchange rate regime f...
Zandbergen, Paul A.; Green, Joseph W.
2007-01-01
Background The widespread availability of powerful tools in commercial geographic information system (GIS) software has made address geocoding a widely employed technique in spatial epidemiologic studies. Objective The objective of this study was to determine the effect of the positional error in geocoding on the analysis of exposure to traffic-related air pollution of children at school locations. Methods For a case study of Orange County, Florida, we determined the positional error of geoco...
Soury, Hamza
2015-01-07
This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox’s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1].
Soury, Hamza
2014-06-01
This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE.
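One building block of this analysis is easy to check by simulation: in pure Laplacian noise with no fading, a minimum distance detector for antipodal (2-PSK) signaling errs with probability ½·exp(−A/b). A quick Monte Carlo sketch with our own parameters (the fading and Fox H-function machinery in the paper generalizes this baseline):

```python
import numpy as np

# Monte Carlo check of the antipodal error probability in Laplacian noise:
# P(error) = 0.5 * exp(-A / b) for amplitude A and noise scale b.

rng = np.random.default_rng(7)
A, b, n = 2.0, 1.0, 200_000
noise = rng.laplace(0.0, b, size=n)
received = A + noise                     # transmit the +A symbol
errors = np.count_nonzero(received < 0)  # min-distance decision boundary at 0
ser_mc = errors / n
ser_theory = 0.5 * np.exp(-A / b)
print(f"simulated {ser_mc:.4f} vs closed form {ser_theory:.4f}")
```

Averaging this conditional error probability over a fading distribution is what produces the Fox H-function expressions in the paper.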
Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl
focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...
Sharp threshold detection based on sup-norm error rates in high-dimensional models
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl
2017-01-01
almost exclusively on ℓ1 and ℓ2 estimation errors. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent variable...
Groen, Yvonne; Mulder, Lambertus J. M.; Wijers, Albertus A.; Minderaa, Ruud B.; Althaus, Monika
2009-01-01
Attention Deficit Hyperactivity Disorder (ADHD) is a developmental disorder that has previously been related to a decreased sensitivity to errors and feedback. Supplementary to the traditional performance measures, this study uses autonomic measures to study this decreased sensitivity in ADHD and th
Roy, Urmimala; Register, Leonard F; Banerjee, Sanjay K
2016-01-01
Spin-transfer-torque random access memory (STT-RAM) is a promising candidate for the next-generation of random-access-memory due to improved scalability, read-write speeds and endurance. However, the write pulse duration must be long enough to ensure a low write error rate (WER), the probability that a bit will remain unswitched after the write pulse is turned off, in the presence of stochastic thermal effects. WERs on the scale of 10⁻⁹ or lower are desired. Within a macrospin approximation, WERs can be calculated analytically using the Fokker-Planck method to this point and beyond. However, dynamic micromagnetic effects within the bit can affect and lead to faster switching. Such micromagnetic effects can be addressed via numerical solution of the stochastic Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. However, determining WERs approaching 10⁻⁹ would require well over 10⁹ such independent simulations, which is infeasible. In this work, we explore calculation of WER using "rare event en...
Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)
2001-01-01
A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
2017-01-01
Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients. Reviews have suggested that up to 50% of the adverse events in the medication process may be preventable. Thus the medication process is an important means to improve safety. Purpose The objective of this study was to evaluate the effectiveness of two automated medication systems in reducing the medication administration error rate in comparison with current practice. Material and methods This was a controlled before-and-after study with follow-up after 7 and 14 months. The study was conducted in two acute medical hospital wards. Two automated medication systems were tested: (1) automated dispensing...
Growth rate determinations from radiocarbon in bamboo corals (genus Keratoisis)
Farmer, Jesse R.; Robinson, Laura F.; Hönisch, Bärbel
2015-11-01
Radiocarbon (¹⁴C) measurements are an important tool for determining growth rates of bamboo corals, a cosmopolitan group of calcitic deep-sea corals. Published growth rate estimates for bamboo corals are highly variable, with potential environmental or ecological drivers of this variability poorly constrained. Here we systematically investigate the application of ¹⁴C for growth rate determinations in bamboo corals using 55 ¹⁴C dates on the calcite and organic fractions of six bamboo corals (identified as Keratoisis sp.) from the western North Atlantic Ocean. Calcite ¹⁴C measurements on the distal surface of these corals and five previously published bamboo corals exhibit a strong one-to-one relationship with the ¹⁴C of dissolved inorganic carbon (DI¹⁴C) in ambient seawater (r² = 0.98), confirming the use of Keratoisis sp. calcite ¹⁴C as a proxy for seawater ¹⁴C activity. Radial growth rates determined from ¹⁴C age-depth regressions, ¹⁴C plateau tuning and bomb ¹⁴C reference chronologies range from 12 to 78 μm y⁻¹, in general agreement with previously published radiometric growth rates. We document potential biases to ¹⁴C growth rate determinations resulting from water mass variability, bomb radiocarbon, secondary infilling (ontogeny), and growth rate nonlinearity. Radial growth rates for Keratoisis sp. specimens do not correlate with ambient temperature, suggesting that additional biological and/or environmental factors may influence bamboo coral growth rates.
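A growth rate determination by age-depth regression can be sketched as follows; the depth and age numbers are invented for illustration and merely chosen to land in the reported 12-78 μm/yr range:

```python
import numpy as np

# Sketch of a growth rate determination from a 14C age-depth regression:
# fit age against radial depth; the inverse slope is the radial growth rate.

depth_um = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])   # from outer edge
age_yr = np.array([5.0, 18.0, 29.0, 44.0, 55.0])            # 14C-derived ages

slope_yr_per_um, intercept = np.polyfit(depth_um, age_yr, 1)
growth_rate_um_per_yr = 1.0 / slope_yr_per_um
print(f"radial growth rate: {growth_rate_um_per_yr:.1f} um/yr")
```

A single linear fit like this assumes constant radial growth; the nonlinearity, infilling, and water-mass biases documented in the paper are exactly the ways real specimens violate that assumption.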
NATREX AND DETERMINATION OF REAL EXCHANGE RATE OF RMB
Holger van Eden; LIU Bin; Gerbert Romyn; YANG Xiaoguang
2001-01-01
In this paper, we analyze the movements of the real exchange rate in China. Our empirical evidence shows that purchasing power parity does not hold in the long run, and the real exchange rate is non-stationary. The decomposition of the movements of the real exchange rate also indicates that real shocks result in permanent changes in the real exchange rate, whereas nominal shocks result only in temporary changes. Based on these facts, we apply the NATREX approach to analyze the determination of the real exchange rate in China. The NATREX model successfully explains the evolution of the real exchange rate in China: the real exchange rate in the long run is determined by real fundamentals, including productivity at home and abroad and the domestic time preference. In the long run, a rise in domestic productivity significantly appreciates the real exchange rate, whereas a rise in foreign productivity or a rise in the domestic time preference significantly depreciates the real exchange rate. We also find that the estimated NATREX rate converges to the steady-state exchange rate in the long run. Although there are short-run fluctuations around the NATREX rate, the real exchange rate converges to the NATREX rate over time.
Determinants of Commercial banks' interest rate spreads in Botswana
The profit they ... detrimental to financial development and economic growth as credit would not be flowing to ... sectors thus letting interest rates to be market determined. ..... positive relationship was expected as taxes increase costs for banks.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Error rates, PCR recombination, and sampling depth in HIV-1 whole genome deep sequencing.
Zanini, Fabio; Brodin, Johanna; Albert, Jan; Neher, Richard A
2016-12-27
Deep sequencing is a powerful and cost-effective tool to characterize the genetic diversity and evolution of virus populations. While modern sequencing instruments readily cover viral genomes many thousand fold and very rare variants can in principle be detected, sequencing errors, amplification biases, and other artifacts can limit sensitivity and complicate data interpretation. For this reason, the number of studies using whole genome deep sequencing to characterize viral quasi-species in clinical samples is still limited. We have previously undertaken a large scale whole genome deep sequencing study of HIV-1 populations. Here we discuss the challenges, error profiles, control experiments, and computational tests we developed to quantify the accuracy of variant frequency estimation.
Effects of body mass index and step rate on pedometer error in a free-living environment.
Tyo, Brian M; Fitzhugh, Eugene C; Bassett, David R; John, Dinesh; Feito, Yuri; Thompson, Dixie L
2011-02-01
Pedometers could provide great insights into walking habits if they are found to be accurate for people of all weight categories. The purposes of this study were to determine whether the New Lifestyles NL-2000 (NL) and the Digi-Walker SW-200 (DW) yield daily step counts similar to those of the StepWatch 3 (SW) in a free-living environment and to determine whether pedometer error is influenced by body mass index (BMI) and speed of walking. The SW served as the criterion because of its accuracy across a range of speeds and BMI categories. Slow walking was defined as ≤80 steps per minute. Fifty-six adults (mean ± SD: age = 32.7 ± 14.5 yr) wore the devices for 7 d. There were 20 normal weight, 18 overweight, and 18 obese participants. A two-way repeated-measures ANOVA was performed to determine whether BMI and device were related to the number of steps counted per day. Stepwise linear regressions were performed to determine which variables contributed to NL and DW error. Both the NL and the DW recorded fewer steps than the SW (P < 0.001). In the normal weight and overweight groups, error was similar for the DW and NL. In the obese group, the DW underestimated steps more than the NL (P < 0.01). DW error was positively related to BMI and percentage of slow steps, whereas NL error was linearly related to percentage of slow steps. A surprising finding was that many healthy, community-dwelling adults accumulated a large percentage of steps through slow walking. The NL is more accurate than the DW for obese individuals, and neither pedometer is accurate for people who walk slowly. Researchers and practitioners must weigh the strengths and limitations of step counters before making an informed decision about which device to use.
Determination of the rate of cross slip of screw dislocations
Vegge, Tejs; Rasmussen, Torben; Leffers, Torben
2000-01-01
The rate for cross slip of screw dislocations during annihilation of screw dipoles in copper is determined by molecular dynamics simulations. The temperature dependence of the rate is seen to obey an Arrhenius behavior in the investigated temperature range: 225-375 K. The activation energy...
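The Arrhenius analysis described above can be sketched as follows: fit ln(rate) against 1/T to extract the activation energy. The attempt frequency and activation energy below are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Illustrative Arrhenius fit over the temperature range studied (225-375 K).
kB = 8.617e-5          # Boltzmann constant, eV/K
Ea_true = 0.3          # assumed activation energy, eV (not the paper's value)
nu0 = 1.0e11           # assumed attempt frequency, 1/s (not the paper's value)

T = np.array([225.0, 275.0, 325.0, 375.0])
rate = nu0 * np.exp(-Ea_true / (kB * T))   # synthetic cross-slip rates

# Arrhenius behavior: ln(rate) = ln(nu0) - Ea/(kB*T), so the slope of
# ln(rate) versus 1/T gives -Ea/kB.
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea_fit = -slope * kB
print(round(Ea_fit, 3))  # recovers the assumed 0.3 eV
```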
Neutron and gamma ray total dose rate determination using ANISN
Amin, E.; Elkady, A. [Atomic Energy Authority, Cairo (Egypt). National Center for Nuclear Safety and Radiation Control]; Ashoub, N. [Nuclear Research Center, Cairo (Egypt)]
1994-07-01
The National Center for Nuclear Safety and Radiation Control is in the process of acquiring a computer software library based mainly on internationally widely used computer codes. These codes are to be used as basic tools in safety analysis, radiation control, and risk assessment. A complementary part of this activity is to validate the computer codes and to set standard procedures, with limits of confidence, for the different areas of application of each code or set of codes. The present work was therefore initiated to develop a standard shielding calculation procedure applicable to the different applications of interest to the center, namely: shielding of nuclear installations, such as the ET-RR-1 reactor, the gamma unit, the nuclear accelerator, and radiotherapy units; shielding of nuclear sources (mainly neutron and gamma sources); and shielding of transportation containers. In developing such a standard method, the sources of error in the final results (i.e. the dose rate and dose rate distribution) have to be identified and the errors quantified. By applying the developed procedure to benchmark PWR shielding problems, and to documented results for fission sources in water and concrete, the levels of confidence of the procedure in the different application areas have been set. (author).
A Determination of the Rate of Change of G
1975-02-01
[Garbled report documentation page; recoverable details: title "A Determination of the Rate of Change of G", author Thomas C. Van... (truncated); the report's subject terms relate to the rate of change of G.]
Bit-error-rate testing of high-power 30-GHz traveling-wave tubes for ground-terminal applications
Shalkhauser, Kurt A.
1987-01-01
Tests were conducted at NASA Lewis to measure the bit-error-rate performance of two 30-GHz 200-W coupled-cavity traveling-wave tubes (TWTs). The transmission effects of each TWT on a band-limited 220-Mbit/s SMSK signal were investigated. The tests relied on the use of a recently developed digital simulation and evaluation system constructed at Lewis as part of the 30/20-GHz technology development program. This paper describes the approach taken to test the 30-GHz tubes and discusses the test data. A description of the bit-error-rate measurement system and the adaptations needed to facilitate TWT testing are also presented.
Allen, Gregory; Edmonds, Larry D.; Swift, Gary; Carmichael, Carl; Tseng, Chen Wei; Heldt, Kevin; Anderson, Scott Arlo; Coe, Michael
2010-01-01
We present a test methodology for estimating system error rates of Field Programmable Gate Arrays (FPGAs) mitigated with Triple Modular Redundancy (TMR). The test methodology is founded on a mathematical model, which is also presented. Accelerator data from a 90 nm Xilinx Military/Aerospace grade FPGA are shown to fit the model. Fault injection (FI) results are discussed and related to the test data. Design implementation and the corresponding impact of multiple bit upset (MBU) are also discussed.
INVESTIGATING THE DETERMINANTS OF LONG-RUN SOVEREIGN RATING
Emilian - Constantin MIRICESCU
2014-10-01
The significance of sovereign ratings for local and international investors is essential because in the recent period many countries have had problems concerning the payment of public loans. In most European Union countries the government debt-to-GDP ratio exceeds the Maastricht ceiling, and investors may be cautious about sovereign rating changes. This paper focuses on the long-run sovereign rating assigned by Standard & Poor's to European Union countries. We use regression analysis to investigate quantitative and qualitative determinants of the long-run sovereign rating.
Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite
Wahyu Pamungkas
2010-04-01
One problem causing a reduction of energy in satellite communication systems is the misalignment of the earth station antenna pointing to the satellite. Error in pointing affects the energy per bit of the information signal received at the earth station. In this research, error in the pointing angle occurred only at the receiver (Rx) antenna, while the transmitter (Tx) antenna pointed precisely to the satellite. The research was conducted on two satellites, TELKOM-1 and TELKOM-2. First, a measurement was made by directing the Tx antenna precisely to the satellite, resulting in an antenna pattern shown on a spectrum analyzer. The output from the spectrum analyzer is drawn to the right scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Drifting from the precise pointing influenced the received link budget, as indicated by the antenna pattern, which shows the reduction of received power level resulting from pointing misalignment. In conclusion, increasing misalignment of the pointing to the satellite reduces the received-signal link budget parameters of the down-link traffic.
Gartia, R.K.; Ingotombi, S.; Singh, Th.S.C.; Mazumdar, P.S. (Manipur Univ. (India). Dept. of Physics)
1991-01-14
In this paper, a precise estimation is made of the systematic error involved in determining the activation energy of a non-first-order thermoluminescence (TL) peak using the two-heating-rates method (which is strictly valid for a first-order peak). A new method analogous to this one is proposed, which involves both the peak temperature and the peak intensity. The systematic errors involved in both methods are found to be within the experimental error one generally encounters in the analysis of TL. The applicability of these findings has been tested by considering a second-order TL peak of limestone. (author).
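For context, the classical two-heating-rates relation for a first-order peak is given below in its standard form; this is stated as background, and the paper's exact expressions may differ. With peak temperatures T_{m1} and T_{m2} recorded at heating rates β_1 and β_2, the activation energy E is

```latex
E = k\,\frac{T_{m1} T_{m2}}{T_{m1}-T_{m2}}\,
    \ln\!\left[\frac{\beta_1}{\beta_2}
    \left(\frac{T_{m2}}{T_{m1}}\right)^{2}\right]
```

where k is Boltzmann's constant. The systematic error the paper quantifies arises from applying this first-order relation to peaks of higher kinetic order.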
Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas
Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.
1990-01-01
The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
Boulesteix Anne-Laure
2009-12-01
Background: In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods: In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results: We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions: The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
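The optimization bias the authors quantify can be illustrated with a minimal simulation: with label-permuted (uninformative) data every classifier's true error is 50%, yet the minimum cross-validated error over many tried variants is systematically lower. The simulation below stands in for the 124 classifier variants with independent random predictors; it is an illustration, not a reproduction of the study's analysis.

```python
import numpy as np

# Each "variant" is simulated as a classifier guessing independently of the
# (permuted) labels, so its error count is Binomial(n_samples, 0.5).
rng = np.random.default_rng(0)
n_samples, n_variants = 60, 124   # 124 variants, as in the study

errors = rng.binomial(n_samples, 0.5, size=n_variants) / n_samples

print(round(errors.mean(), 2))  # close to the true error of 0.5
print(round(errors.min(), 2))   # clearly below 0.5: the reported "best"
```

Reporting only `errors.min()` gives the optimistic bias described in the abstract, even though no variant has any real predictive power.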
Capacity Versus Bit Error Rate Trade-Off in the DVB-S2 Forward Link
Matteo Berioli
2007-05-01
The paper presents an approach to optimize the use of satellite capacity in DVB-S2 forward links. By reducing the so-called safety margins in the adaptive coding and modulation technique, it is possible to increase the spectral efficiency at the expense of an increased BER on the transmission. The work shows how a system can be tuned to operate at different degrees of this trade-off, and also the performance that can be achieved in terms of BER/PER, spectral efficiency, and the interarrival time, duration, and strength of the error bursts. The paper also describes how a Markov chain can be used to model the ModCod transitions in a DVB-S2 system, and it presents results for the calculation of the transition probabilities in two cases.
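The Markov-chain view of ModCod transitions can be illustrated with a toy three-state chain; the transition matrix below is an assumption for demonstration, not one estimated in the paper.

```python
import numpy as np

# Hypothetical 3-state chain over ModCods; row i gives the probabilities of
# moving from ModCod i to each ModCod on the next frame (rows sum to 1).
P = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])

# The stationary distribution pi (long-run fraction of time in each ModCod)
# is the left eigenvector of P associated with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(np.allclose(pi @ P, pi))  # pi is invariant under the transitions
```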
Error Rate Improvement in Underwater MIMO Communications Using Sparse Partial Response Equalization
2006-09-01
\Phi_i(n) = \sum_{k=1}^{n} \lambda^{n-k} v_i(k) v_i^H(k) (13) and \theta_i(n) = \sum_{k=1}^{n} \lambda^{n-k} v_i(k) x_i^{(s)H}(k) (14) are the (time averaged) output correlation matrix and the input-output cross-correlation vector. The a priori error vector [5] and the RLS gain K_i(n) are defined as \alpha_i(n) = x_i^{(s)}(n) - c_i^H(n-1) v_i(n) (17) and K_i(n) = \frac{P_i(n-1) v_i(n)}{\lambda_i + v_i^H(n) P_i(n-1) v_i(n)} (18). Using equations (13), (14), and the matrix inversion lemma [5], the inverse correlation matrix P_i(n) can be updated as P_i(n) = \lambda_i^{-1} \left[ I - K_i(n) v_i^H(n) \right] P_i(n-1).
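The RLS recursions referenced above can be sketched for a single equalizer with synthetic complex data; the tap count, forgetting factor, and initialization below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_taps, n_iter, lam = 4, 200, 0.99

c = np.zeros(n_taps, dtype=complex)        # equalizer coefficients
P = np.eye(n_taps, dtype=complex) * 1e2    # inverse correlation matrix (init)
w_true = rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)

for _ in range(n_iter):
    v = rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)
    x = np.vdot(w_true, v)                 # noiseless desired output w^H v
    alpha = x - np.vdot(c, v)              # a priori error, cf. (17)
    K = P @ v / (lam + np.vdot(v, P @ v))  # RLS gain, cf. (18)
    P = (np.eye(n_taps) - np.outer(K, v.conj())) @ P / lam  # P update
    c = c + K * alpha.conj()               # coefficient update

print(bool(np.allclose(c, w_true, atol=1e-3)))  # converges to the true taps
```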
Determinants of Sub-Sovereign Government Ratings In Europe
Nicolas JANNONE-BELLOT
2017-02-01
The aim of this paper is to identify the determinants of the rating assigned to sub-sovereign entities in Germany, Austria, Belgium, France, Italy and Spain, using a total of 92 territorial entities for the 1989-2012 period. Multinomial ordered probit estimation models were estimated for each specification and agency. We conclude that the country's rating is one of the most important determinants of regional governments' ratings, with a positive influence (as expected), and that the country debt/GDP ratio is a stronger determinant for regions than their own indebtedness, with a negative sign. Other relevant variables are the population growth rate, unemployment rate, elderly people weight, regional public expenditure weight and size. Additionally, economic variables, such as the country's rating and population growth, are more important to Fitch, whereas budget variables and size variables are more relevant to Moody's. Debt variables and the elderly people ratio are more important to S&P.
The methane rating system to determine coal face methane conditions
Cook, A.P.; van Vuuren, J.J. [Itasca Africa (Pty) Ltd, Johannesburg (South Africa)
2001-07-01
Methane Rating was developed from a need in South Africa to measure coal seam gas contents, as well as emission rates into the cutting zone for mechanical miners. These are then combined and compared to the average and normal conditions to provide a risk assessment tool for continuous miner operations. The last two years have seen widespread acceptance of Methane Rating as a practical and simple means of identifying seam gas contents and emission rates during mining, and of rating the changing methane conditions. The system uses proven direct methods of methane measurement to quantify the contents and emissions, combined with an innovative rating system. Each new result is compared with the expected average or normal conditions to determine its Methane Rating between 1 and 5. The present South African national database of over 340 individual samples from 31 mines shows methane contents can normally be expected between 0.2 m{sup 3}/t and 1.4 m{sup 3}/t, with emission rates during coal cutting of 20 l/t/min to 80 l/t/min. The highest risk rated mines are presently in the Secunda and eastern Witbank areas, with the lowest risk rated mines to the west of Witbank. 6 refs., 9 figs.
The effect of administrative boundaries and geocoding error on cancer rates in California.
Goldberg, Daniel W; Cockburn, Myles G
2012-04-01
Geocoding is often used to produce maps of disease rates from the diagnosis addresses of incident cases to assist with disease surveillance, prevention, and control. In this process, diagnosis addresses are converted into latitude/longitude pairs which are then aggregated to produce rates at varying geographic scales such as Census tracts, neighborhoods, cities, counties, and states. The specific techniques used within geocoding systems have an impact on where the output geocode is located and can therefore have an effect on the derivation of disease rates at different geographic aggregations. This paper investigates how county-level cancer rates are affected by the choice of interpolation method when case data are geocoded to the ZIP code level. Four commonly used areal unit interpolation techniques are applied and the output of each is used to compute crude county-level five-year incidence rates of all cancers in California. We found that the rates observed for 44 out of the 58 counties in California vary based on which interpolation method is used, with rates in some counties increasing by nearly 400% between interpolation methods.
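Areal-weighting interpolation, one of the commonly used techniques the paper compares, can be sketched as follows; the ZIP codes, counties, and overlap fractions are hypothetical.

```python
# Cases geocoded to ZIP codes are reallocated to counties in proportion to
# the area of each ZIP that overlaps each county. All values are made up.
zip_cases = {"90001": 40, "90002": 10}

# Fraction of each ZIP's area lying in each county (each row sums to 1).
overlap = {
    "90001": {"CountyA": 0.75, "CountyB": 0.25},
    "90002": {"CountyA": 0.20, "CountyB": 0.80},
}

county_cases = {}
for z, cases in zip_cases.items():
    for county, frac in overlap[z].items():
        county_cases[county] = county_cases.get(county, 0.0) + cases * frac

print(county_cases)  # {'CountyA': 32.0, 'CountyB': 18.0}
```

A different interpolation rule (e.g. centroid assignment, which would give all 50 cases to whichever county contains each ZIP centroid) yields different county totals, which is the sensitivity the paper documents.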
Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal
2016-09-30
Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
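The effect of binomial measurement error in a proportion predictor can be demonstrated with a short simulation: the naive regression slope is attenuated toward zero relative to the slope on the true proportions. All data below are synthetic, not the paper's methylation data.

```python
import numpy as np

# y depends linearly on the true proportion p, but we only observe
# p_hat = X/m with X ~ Binomial(m, p), a heteroscedastic error-prone proxy.
rng = np.random.default_rng(2)
n, m = 5000, 20                        # observations, binomial sample size
p = rng.uniform(0.2, 0.8, size=n)      # true methylation-like proportions
y = 1.0 + 2.0 * p + rng.normal(0.0, 0.1, size=n)
p_hat = rng.binomial(m, p) / m         # observed proportion with error

slope_true = np.polyfit(p, y, 1)[0]
slope_naive = np.polyfit(p_hat, y, 1)[0]
print(round(slope_true, 2))            # near the generating slope of 2
print(slope_naive < slope_true)        # naive slope is attenuated toward 0
```

Correction methods such as the simulation extrapolation variant developed in the paper aim to undo exactly this attenuation.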
A Oyekale
2007-12-01
This study used an ECM to analyze the determinants of agricultural land expansion in Nigeria. Results show that at first differencing, the Augmented Dickey-Fuller test indicated stationarity for all the variables (p < 0.05), and there were 7 cointegrating vectors using the Johansen test. The dynamic unrestricted short-run parameters of permanent cropland growth rate (68.62), agricultural production index (10.23), livestock population (0.003), human population (-0.145), other land (-0.265) and cereal cropland growth rate (0.621) have significant impacts on agricultural land expansion (p < 0.05). The study recommended that appropriate policies to address the problem of expansion of agricultural land and agricultural production must focus on the development of cereal and permanent crop hybrids that are high yielding and resistant to environmental stress, human population control, and guided use of land for industrial and urban development, among others.
Oil Prices and Interest Rates: Do They Determine the Exchange Rate?
Law, I. A.; Old, J. L.
1986-01-01
Argues that the relationship between the British pound sterling, interest rates, and oil prices has been overemphasized by economic commentators because they ignored a basic economic theory about the determination of the exchange rate. Provides an example and suggestions for follow up instruction. (Author/JDH)
Proposed test method for determining discharge rates from water closets
Nielsen, V.; Fjord Jensen, T.
At present, the rates at which discharge takes place from sanitary appliances are mostly known only in the form of estimated average values. SBI has developed a measuring method enabling determination of the exact rate of discharge from a sanitary appliance as a function of time. The method depends on the application of a calibrated measuring vessel, the volume of water in the vessel being measured at a given moment by means of a transducer and recorded by a UV recorder which is able to follow very rapid variations. In the article the apparatus is described in detail, and an example is given of the measurements of the rate of discharge from a WC.
Ahmed, Qasim Zeeshan
2013-01-01
In this letter, a new detector is proposed for amplify-and-forward (AF) relaying systems. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed; this new kernel provides more flexibility and encompasses the Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
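A sketch of kernel density estimation with a generalized Gaussian kernel, K(u) ∝ exp(−|u|^β): β = 2 recovers a Gaussian-shaped kernel and large β approaches a uniform kernel, matching the flexibility described above. The window width here is chosen ad hoc, not by the optimal rule the letter derives.

```python
import numpy as np
from math import gamma

def gg_kernel(u, beta):
    # Generalized Gaussian kernel, normalized so it integrates to 1:
    # integral of exp(-|u|^beta) over the real line is 2*Gamma(1/beta)/beta.
    norm = beta / (2.0 * gamma(1.0 / beta))
    return norm * np.exp(-np.abs(u) ** beta)

def kde(x_grid, samples, h, beta):
    # Standard kernel density estimate with window width h.
    u = (x_grid[:, None] - samples[None, :]) / h
    return gg_kernel(u, beta).mean(axis=1) / h

rng = np.random.default_rng(3)
samples = rng.normal(0.0, 1.0, size=2000)
x = np.linspace(-4.0, 4.0, 81)
dens = kde(x, samples, h=0.4, beta=2.0)

dx = x[1] - x[0]
print(abs(dens.sum() * dx - 1.0) < 0.05)  # the estimate integrates to ~1
```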
Parkash, Vinita; Fadare, Oluwole; Dewar, Rajan; Nakhleh, Raouf; Cooper, Kumarasen
2017-03-01
A repeat survey of the Association of Directors of Anatomic and Surgical Pathology, done 10 years after the original, was used to assess trends and variability in classifying scenarios as errors, and the preferred post-signout report modification for correcting errors, among the membership of the Association. The results were analyzed to inform on whether interpretive amendment rates might act as surrogate measures of interpretive error in pathology. An analysis of the responses indicated that primary-level misinterpretations (benign to malignant and vice versa) were universally qualified as error; secondary-level misinterpretations or misclassifications were inconsistently labeled error. There was added variability in the preferred post-signout report modification used to correct report alterations. The classification of a scenario as error appeared to correlate with the severity of potential harm of the missed call, the perceived subjectivity of the diagnosis, and the ambiguity of reporting terminology. Substantial differences in policies for error detection and optimal reporting format were documented between departments. In conclusion, the inconsistency in labeling scenarios as error, disagreement about the optimal post-signout report modification for the correction of the error, and variability in error-detection policies preclude the use of the misinterpretation amendment rate as a surrogate measure for error in anatomic pathology. There has been little change in the uniformity of definition, attitudes, and perception of interpretive error in anatomic pathology in the last 10 years.
Codon usage determines translation rate in Escherichia coli
Sørensen, Michael Askvad; Kurland, C G; Pedersen, Steen
1989-01-01
We wish to determine whether differences in translation rate are correlated with differences in codon usage or with differences in mRNA secondary structure. We therefore inserted a small DNA fragment in the lacZ gene either directly or flanked by a few frame-shifting bases, leaving the reading...
Determining the Spatially Resolved Mass Outflow Rate in Markarian 573
Revalski, Mitchell; Crenshaw, D. Michael; Fischer, Travis C.; Kraemer, Steven B.; Schmitt, Henrique R.
2017-01-01
We report on current progress in calculating the narrow line region (NLR) mass outflow rate in the Seyfert 2 galaxy Markarian 573. Our goal is to determine the mass outflow rate as a function of distance from the nucleus in 10 nearby Active Galactic Nuclei (AGN) with spatially resolved NLRs. These nearby AGN allow us to study the feeding and feedback of supermassive black holes (SMBHs) that may play an important role in understanding large scale structure, enrichment of the interstellar medium, and coevolution of SMBHs with their host galaxies. Utilizing archival spectra from the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST) we measured emission line ratios from a wide range of ionized species. Next we used the line ratios to find a reddening correction and determined the physical conditions in the ionized gas using the photoionization code Cloudy. Specifically, we derived the mass of the ionized gas and then estimate the total mass outside of the spectral slit using HST [O III] images. Combined with kinematic models of the outflows we will determine the mass outflow rate and kinetic luminosity as a function of distance from the central AGN. Ultimately, we aim to determine if NLR outflows are effective in regulating AGN feedback by comparing our observed outflow rates with theoretical models.
Error-free 5.1 Tbit/s data generation on a single-wavelength channel using a 1.28 Tbaud symbol rate
Mulvad, Hans Christian Hansen; Galili, Michael; Oxenløwe, Leif Katsuo
2009-01-01
We demonstrate a record bit rate of 5.1 Tbit/s on a single wavelength using a 1.28 Tbaud OTDM symbol rate, DQPSK data-modulation, and polarisation-multiplexing. Error-free performance (BER...
Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek
2016-07-01
Effective and efficient medication reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst it appeared that patient safety and organisational improvement goals and intentions were behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included the beliefs of the consequences of reporting (lack of any feedback following reporting and impacting professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can facilitate the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.
Adiabatic vs. non-adiabatic determination of specific absorption rate of ferrofluids
Natividad, Eva [Instituto de Ciencia de Materiales de Aragon (CSIC-Universidad de Zaragoza), Sede Campus Rio Ebro, Maria de Luna, 3, 50018 Zaragoza (Spain); Castro, Miguel [Instituto de Ciencia de Materiales de Aragon (CSIC-Universidad de Zaragoza), Sede Campus Rio Ebro, Maria de Luna, 3, 50018 Zaragoza (Spain)], E-mail: mcastro@unizar.es; Mediano, Arturo [Grupo de Electronica de Potencia y Microelectronica (GEPM), Instituto de Investigacion en Ingenieria de Aragon (Universidad de Zaragoza), Maria de Luna, 3, 50018 Zaragoza (Spain)
2009-05-15
The measurement of temperature variations in adiabatic conditions allows the determination of the specific absorption rate (SAR) of magnetic nanoparticles and ferrofluids from the correct incremental expression, SAR = (1/m_MNP)·C·(ΔT/Δt). However, when measurements take place in non-adiabatic conditions, one must approximate this expression by SAR ≈ C·β/m_MNP, where β is the initial slope of the temperature vs. time curve during alternating field application. The errors arising from the use of this approximation were estimated through several experiments with different isolating conditions, temperature sensors and sample-sensor contacts. It is concluded that small to appreciable errors can appear, which are difficult to infer or control.
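The non-adiabatic approximation SAR ≈ C·β/m_MNP reduces to estimating the initial slope β of the temperature-time record. A minimal sketch of that estimate, assuming a simple least-squares fit over the first few points (the window size n_fit and all names are illustrative, not from the paper):

```python
def initial_slope(t, T, n_fit=5):
    """Least-squares slope of the first n_fit (time, temperature) points."""
    t, T = t[:n_fit], T[:n_fit]
    n = len(t)
    tm, Tm = sum(t) / n, sum(T) / n
    num = sum((ti - tm) * (Ti - Tm) for ti, Ti in zip(t, T))
    den = sum((ti - tm) ** 2 for ti in t)
    return num / den

def sar_non_adiabatic(t, T, heat_capacity, m_mnp, n_fit=5):
    """SAR ~ C * beta / m_MNP, with beta the initial slope of T(t)."""
    return heat_capacity * initial_slope(t, T, n_fit) / m_mnp
```

In practice the fitting window must be short enough that heat losses are still negligible, which is exactly the source of error the abstract discusses.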
Determination of optimal samples for robot calibration based on error similarity
Tian Wei
2015-06-01
Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on an error compensation method that exploits error similarity, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
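Error-similarity compensation rests on interpolating the positioning error at an arbitrary pose from errors measured at nearby calibrated grid points. The paper's method is more elaborate; the inverse-distance-weighted sketch below only illustrates the grid-interpolation idea, and every name in it is an assumption:

```python
def idw_error(query, samples, power=2):
    """Inverse-distance-weighted estimate of the positioning error at
    `query` from (grid_point, measured_error) pairs."""
    num = den = 0.0
    for point, err in samples:
        d2 = sum((a - b) ** 2 for a, b in zip(query, point))
        if d2 == 0.0:
            return err  # query coincides with a calibrated grid point
        w = d2 ** (-power / 2.0)  # weight ~ 1 / distance**power
        num += w * err
        den += w
    return num / den
```

A denser grid raises measurement effort but improves the interpolated compensation, which is the trade-off the grid-step optimization in the abstract addresses.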
OuYang, Xiaoying; Wang, Ning; Wu, Hua; Li, Zhao-Liang
2010-01-18
Sensitivity analysis of a temperature-emissivity separation method commonly applied to hyperspectral data with respect to various sources of error is performed in this paper. In terms of the resulting errors in the retrieval of surface temperature, the results show that: (1) satisfactory results can be obtained for heterogeneous land surfaces, and the retrieval error of surface temperature is small enough to be neglected for all atmospheric conditions; (2) separation of atmospheric downwelling radiance from at-ground radiance is not very sensitive to the uncertainty of column water vapor (WV) in the atmosphere. The errors in land surface temperature retrievals from at-ground radiance with the DRRI method due to the uncertainty in atmospheric downwelling radiance vary from -0.2 to 0.6 K if the uncertainty of WV is within 50% of the actual WV; (3) the impact of the errors generated by poor atmospheric corrections is significant, implying that a well-done atmospheric correction is indeed required to obtain accurate at-ground radiance from at-satellite radiance for successful separation of land-surface temperature and emissivity.
Hayek Lee-Ann C.
2005-01-01
Several analytic techniques have been used to determine sexual dimorphism in vertebrate morphological measurement data, with no emergent consensus on which technique is superior. A further confounding problem for frog data is the existence of considerable measurement error. To determine dimorphism, we examine a single hypothesis (H0: equal means) for two groups (females and males). We demonstrate that frog measurement data meet assumptions for clearly defined statistical hypothesis testing with statistical linear models rather than those of exploratory multivariate techniques such as principal components, correlation or correspondence analysis. In order to distinguish biological from statistical significance of hypotheses, we propose a new protocol that incorporates measurement error and effect size. Measurement error is evaluated with a novel measurement error index. Effect size, widely used in the behavioral sciences and in meta-analysis studies in biology, proves to be the most useful single metric to evaluate whether statistically significant results are biologically meaningful. Definitions for a range of small, medium, and large effect sizes specifically for frog measurement data are provided. Examples with measurement data for species of the frog genus Leptodactylus are presented. The new protocol is recommended not only to evaluate sexual dimorphism for frog data but for any animal measurement data for which the measurement error index and observed or a priori effect sizes can be calculated.
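Effect size for a two-group mean comparison is commonly quantified as a standardized mean difference (Cohen's d); the paper defines frog-specific small/medium/large ranges for such a metric. A generic sketch with a pooled sample standard deviation (the toy data are invented for illustration):

```python
from math import sqrt

def sample_var(v):
    """Unbiased sample variance."""
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

def cohens_d(x, y):
    """Standardized mean difference (mean(x) - mean(y)) / pooled SD."""
    nx, ny = len(x), len(y)
    sp = sqrt(((nx - 1) * sample_var(x) + (ny - 1) * sample_var(y))
              / (nx + ny - 2))
    return (sum(x) / nx - sum(y) / ny) / sp
```

Unlike a p-value, d does not shrink toward "significant" as sample size grows, which is why it helps separate biological from statistical significance.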
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
Ma, Jing; Jiang, Yijun; Tan, Liying; Yu, Siyuan; Du, Wenhe
2008-11-15
Based on weak fluctuation theory and the beam-wander model, the bit-error rate of a ground-to-satellite laser uplink communication system is analyzed and compared with the case in which beam wander is not taken into account. Considering the combined effect of scintillation and beam wander, the optimum divergence angle and transmitter beam radius for a communication system are investigated. Numerical results show that both increase with increasing total link margin and transmitted wavelength. This work can benefit the design of ground-to-satellite laser uplink communication systems.
Liang, Bin; Gunawan, Erry; Law, Choi Look; Teh, Kah Chan
Analytical expressions based on the Gauss-Chebyshev quadrature (GCQ) rule technique are derived to evaluate the bit-error rate (BER) for the time-hopping pulse position modulation (TH-PPM) ultra-wide band (UWB) systems under a Nakagami-m fading channel. The analyses are validated by the simulation results and adopted to assess the accuracy of the commonly used Gaussian approximation (GA) method. The influence of the fading severity on the BER performance of TH-PPM UWB system is investigated.
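The Gauss-Chebyshev quadrature rule of the first kind approximates integrals of the form ∫_{-1}^{1} f(x)/√(1-x²) dx by an equally weighted sum of f at the Chebyshev nodes; BER averages over fading can be reduced to this form. A generic sketch of the rule itself (the demo integrands in the test are illustrative, not the paper's BER integrand):

```python
from math import cos, pi

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev (first kind) rule for the integral of
    f(x)/sqrt(1 - x^2) over [-1, 1]: all weights equal pi/n and the
    nodes are x_k = cos((2k - 1) pi / (2n))."""
    return (pi / n) * sum(f(cos((2 * k - 1) * pi / (2 * n)))
                          for k in range(1, n + 1))
```

The rule is exact for any polynomial f of degree up to 2n - 1, which is why modest n already gives accurate BER evaluations.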
Santillan, Arturo Orozco; Jacobsen, Finn
2010-01-01
the resulting measurement uncertainty. The purpose of this paper is to analyze the effect of the most common sources of error in sound power determination based on sound intensity measurements. In particular the influence of the scanning procedure used in approximating the surface integral of the intensity...
Error analysis for satellite gravity field determination based on two-dimensional Fourier methods
Cai, Lin; Hsu, Houtse; Gao, Fang; Zhu, Zhu; Luo, Jun
2012-01-01
The time-wise and space-wise approaches are generally applied to data processing and error analysis for satellite gravimetry missions. However, both approaches, which are based on least-squares collocation, address the whole effect of measurement errors and estimate the resolution of gravity field models mainly from an indirect, numerical point of view. Moreover, the requirement for higher-accuracy, higher-resolution gravity field models can make the computation more difficult, and serious numerical instabilities arise. In order to overcome these problems, this study focuses on constructing a direct relationship between the power spectral density of the satellite gravimetry measurements and the coefficients of the Earth's gravity potential. Based on the two-dimensional Fourier transform, the relationship is concluded analytically. By taking advantage of the analytical expression, parameter estimation and error analysis of missions become efficient and distinct. From the relationship and the simulations, it is analytically confir...
Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas
2011-01-01
Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.
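The internal-consistency figures quoted above are coefficient alpha (Cronbach's alpha), computed from the item variances and the variance of the total score across respondents. A minimal sketch (the toy ratings in the test are invented for illustration):

```python
def cronbach_alpha(ratings):
    """Coefficient alpha; ratings has one row per respondent,
    one column per Likert item."""
    k = len(ratings[0])

    def sample_var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)

    item_vars = [sample_var(col) for col in zip(*ratings)]
    total_var = sample_var([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Alpha approaches 1 when items move together across respondents, which is the sense in which the reported α = .70 to .89 supports each item set measuring a single construct.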
ANALYSIS OF MACROECONOMIC DETERMINANTS OF EXCHANGE RATE VOLATILITY IN INDIA
Anita Mirchandani
2013-01-01
The Foreign Exchange Market in India has undergone substantial changes over the last decade. This is evident from the excessive volatility of the Indian Rupee, which has caused its depreciation against the major currencies dominating the international market. This research was carried out in order to investigate the macroeconomic variables leading to acute variations in the exchange rate of a currency. An attempt has been made to review the probable reasons for the depreciation of the Rupee and to analyse the different macroeconomic determinants that have an impact on the volatility of the exchange rate, and their extent of correlation with it.
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
Policy and Current Account Determination under Floating Exchange Rates
Hans Genberg; Alexander K. Swoboda
1989-01-01
The determinants of current account imbalances under floating exchange rates are analyzed. The analysis provides a framework within which the sources of, and the remedies for, the current account imbalances between the United States, Japan, and the Federal Republic of Germany can be discussed. The effects of various government policies are emphasized, in particular the differences between expenditure-changing and expenditure-switching policies. Short-run and long-run considerations are invest...
On the Error Rate Analysis of Dual-Hop Amplify-and-Forward Relaying in Generalized-K Fading Channels
George P. Efthymoglou
2010-01-01
We present novel and easy-to-evaluate expressions for the error rate performance of cooperative dual-hop relaying with maximal ratio combining operating over independent generalized-K fading channels. For this system, it is hard to obtain a closed-form expression for the moment generating function (MGF) of the end-to-end signal-to-noise ratio (SNR) at the destination, even for the case of a single dual-hop relay link. Therefore, we employ two different upper-bound approximations for the output SNR, of which one is based on the minimum SNR of the two hops for each dual-hop relay link and the other is based on the geometric mean of the SNRs of the two hops. Lower bounds for the symbol and bit error rates for a variety of digital modulations can then be evaluated using the MGF-based approach. The final expressions are useful in the performance evaluation of amplify-and-forward relaying in a generalized composite radio environment.
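For a single dual-hop AF link the end-to-end SNR is commonly written γ_eq = γ1·γ2/(γ1 + γ2 + 1), and both bounds named above, min(γ1, γ2) and the geometric mean √(γ1·γ2), upper-bound γ_eq (hence the resulting error-rate expressions are lower bounds). A small numerical check of that ordering, with an arbitrary SNR grid:

```python
def af_end_to_end_snr(g1, g2):
    """End-to-end SNR of a dual-hop amplify-and-forward link."""
    return g1 * g2 / (g1 + g2 + 1.0)

def min_bound(g1, g2):
    return min(g1, g2)

def geometric_mean_bound(g1, g2):
    return (g1 * g2) ** 0.5

def bounds_hold(grid):
    """Verify exact SNR <= min bound <= geometric-mean bound on a grid."""
    for g1 in grid:
        for g2 in grid:
            exact = af_end_to_end_snr(g1, g2)
            if not exact <= min_bound(g1, g2) <= geometric_mean_bound(g1, g2):
                return False
    return True
```

The min bound is always the tighter of the two, but the geometric-mean bound is often more tractable analytically, which motivates carrying both.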
The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded
Hansen, Merete Kjær; Kulahci, Murat
The Comet assay is a sensitive technique for detection of DNA strand breaks. The experimental design of in vivo Comet assay studies is often hierarchically structured, which should be reflected in the statistical analysis. However, the hierarchical structure sometimes seems to be disregarded, and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51. The study also aims to improve the exposition of the statistical methodology and to suitably account for the hierarchical structure of Comet assay data whenever present.
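The inflation can be reproduced with a small simulation: generate clustered data under the null hypothesis, then analyse all observations as if independent versus analysing cluster (animal) means. Every parameter choice below (3 animals per group, 10 cells per animal, equal variance components) is an illustrative assumption, not the study's design:

```python
import random

def pooled_t_stat(x, y):
    """Two-sample statistic with pooled variance."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / (sp2 * (1.0 / nx + 1.0 / ny)) ** 0.5

def type_i_rates(n_rep=2000, animals=3, cells=10, seed=1):
    """Null rejection rates: naive (all cells as independent, normal
    critical value) vs. analysis of animal means (t critical, 4 df)."""
    rng = random.Random(seed)
    naive = cluster = 0
    for _ in range(n_rep):
        cells_by_group, means_by_group = [], []
        for _group in range(2):
            obs, means = [], []
            for _animal in range(animals):
                animal_effect = rng.gauss(0.0, 1.0)  # between-animal variation
                vals = [animal_effect + rng.gauss(0.0, 1.0) for _ in range(cells)]
                obs.extend(vals)
                means.append(sum(vals) / cells)
            cells_by_group.append(obs)
            means_by_group.append(means)
        if abs(pooled_t_stat(*cells_by_group)) > 1.96:
            naive += 1
        if abs(pooled_t_stat(*means_by_group)) > 2.776:  # t crit., 4 df
            cluster += 1
    return naive / n_rep, cluster / n_rep
```

With between-animal variation present, the naive analysis drastically underestimates the standard error of the group difference, so its type I error rate far exceeds the nominal 0.05.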
Carroll, KJ; Mielke, J; Benet, LZ; Jones, B
2016-01-01
We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)‐approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between‐batch bio‐inequivalence. Here, we provide independent confirmation of pharmacokinetic bio‐inequivalence among Advair Diskus 100/50 batches, and quantify residual and between‐batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two‐way crossover design recommendation. When between‐batch pharmacokinetic variability is substantial, the conventional two‐way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two‐way crossover, which ignores between‐batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between‐batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). PMID:27727445
Li, Mi; Li, Bowen; Zhang, Xuping; Song, Yuejiang; Liu, Jia; Tu, Guojie
2015-08-01
Space optical communication is attracting increasing attention because it offers advantages such as high security and better communication quality compared with microwave communication. Data rates of Gb/s have already been achieved, and the next generation of space optical systems targets a higher data rate of 40 Gb/s. However, traditional optical communication systems cannot meet this requirement. As a step towards space optical communication at high data rates, this paper introduces a ground optical communication system with a data rate of 40 Gb/s. At 40 Gb/s, a waveguide modulator must be applied to modulate the optical signal, which is then amplified by a laser amplifier. Moreover, a more sensitive avalanche photodiode (APD) is used as the detector to improve the communication quality. Based on this system, we analyze the communication quality of the downlink of a space optical communication system at a data rate of 40 Gb/s. The bit error rate (BER) performance, an important measure of communication quality, is discussed as a function of several parameter ratios. The results show that there exists an optimum ratio of gain factor to divergence angle that yields the best BER performance, and that the communication quality can also be improved by increasing the ratio of receiving diameter to divergence angle. These results help characterize optical communication systems at high data rates and can contribute to system design.
An Analysis of Romania's short-run sovereign rating determinants
Emilian-Constantin Miricescu
2012-12-01
For most European Union countries government expenditure exceeds government revenue, which could lead in the long run to an increase in the government debt to GDP ratio. Considering the distortions generated by the financial and economic crisis, followed by the debt crisis, both local and international investors are more prudent when planning to lend money to sovereigns. The sovereign rating is probably one of the most important aspects that investors carefully analyze before they decide to purchase government bonds or Treasury bills. This paper focuses on Romania's short-run sovereign rating determinants according to the specific methodology of Romania's Export-Import Bank (EximBank). The results reveal that the rating is Bb, meaning that payment difficulties with insignificant losses are possible.
1986-03-01
Figure 7. Monetary Error Rate in Relation to the Number of Additional Duties of the Personnel Officer (no relationship appears to exist). Figure 12. Monetary Error Rate in Relation to the Number of MOS 0131
Determination of Tensile Properties of Polymers at High Strain Rates
Major Z.
2010-06-01
In the field of high-rate testing of polymers, the measured properties are highly dependent on the applied methodology. Hence, the test setup as a whole, and in particular the geometrical type of specimen, plays a decisive role. The widely used standard for the determination of tensile properties of polymers (ISO527-2) was extended by a novel standard (ISO18872:2007), which targets the determination of tensile properties at high strain rates. This standard also proposes a novel specimen shape. Hand in hand with the introduction of the new specimen geometry, the question of comparability arises. To point out the differences in the stress-strain response of the ISO18872 specimen and the ISO527-2 multipurpose specimen, tensile tests were conducted over a wide loading rate range. A digital image correlation system in combination with a high-speed camera was used to characterize the local material behaviour. Different parameters, such as nominal stress, true stress, nominal strain, true strain and volumetric strain, were determined and used to compare the two specimen geometries.
Sigurdardottir, Dorotea H.; Stearns, Jett; Glisic, Branko
2017-07-01
The deformed shape is a consequence of loading the structure and it is defined by the shape of the centroid line of the beam after deformation. The deformed shape is a universal parameter of beam-like structures. It is correlated with the curvature of the cross-section; therefore, any unusual behavior that affects the curvature is reflected through the deformed shape. Excessive deformations cause user discomfort, damage to adjacent structural members, and may ultimately lead to issues in structural safety. However, direct long-term monitoring of the deformed shape in real-life settings is challenging, and an alternative is indirect determination of the deformed shape based on curvature monitoring. The challenge of the latter is an accurate evaluation of error in the deformed shape determination, which is directly correlated with the number of sensors needed to achieve the desired accuracy. The aim of this paper is to study the deformed shape evaluated by numerical double integration of the monitored curvature distribution along the beam, and create a method to predict the associated errors and suggest the number of sensors needed to achieve the desired accuracy. The error due to the accuracy in the curvature measurement is evaluated within the scope of this work. Additionally, the error due to the numerical integration is evaluated. This error depends on the load case (i.e., the shape of the curvature diagram), the magnitude of curvature, and the density of the sensor network. The method is tested on a laboratory specimen and a real structure. In a laboratory setting, the double integration is in excellent agreement with the beam theory solution which was within the predicted error limits of the numerical integration. Consistent results are also achieved on a real structure—Streicker Bridge on Princeton University campus.
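Numerical double integration of a monitored curvature distribution can be sketched as follows: integrate curvature once to obtain slope (up to a constant), integrate again to obtain deflection, then fix the constants with the support conditions w(0) = w(L) = 0. The sinusoidal test curvature in the verification below is an illustrative case, not data from the paper:

```python
from math import sin, pi

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y(x), starting at 0."""
    out = [0.0]
    for i in range(1, len(x)):
        out.append(out[-1] + 0.5 * (y[i] + y[i - 1]) * (x[i] - x[i - 1]))
    return out

def deflection_from_curvature(kappa, x):
    """Deflection w with w'' = kappa and w = 0 at both ends."""
    theta = cumtrapz(kappa, x)  # slope, integration constant C1 unknown
    w0 = cumtrapz(theta, x)     # deflection assuming C1 = 0
    # Enforce w(L) = 0 via w(x) = w0(x) + C1 * (x - x0):
    c1 = -w0[-1] / (x[-1] - x[0])
    return [w0[i] + c1 * (x[i] - x[0]) for i in range(len(x))]
```

For w(x) = sin(πx/L) the curvature is w'' = -(π/L)² sin(πx/L), so double integration should recover the unit midspan deflection; the residual error shrinks with the sensor (grid) spacing, which is the accuracy-versus-sensor-count trade-off the paper quantifies.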
Covariance Analysis of Orbit-determination Error Components for Lunar Probe
樊敏; 董光亮; 郝万宏; 王宏
2012-01-01
Based on measurement models of the tracking data, including ranging, range rate, interferometric delay and delay rate, the information array of the lunar probe's state vector is presented. Error equations and covariance matrices for the RTN components of the orbit-determination error are derived. The numerical relation between the RTN error components and the measurement accuracy is calculated for lunar orbit. According to the measurement accuracy and station/baseline distribution in the China lunar exploration program, the influence and level of the error factors are analyzed and calculated for the RTN components of the position and velocity errors for various lunar orbits. The orbit-determination results using measurement data of the Chang'e-1 and Chang'e-2 probes validate this analysis method, which can serve as a reference for computing the RTN error components of the initial orbit for powered descent in the second phase of the China lunar exploration program.
Determinants of Effective Tax Rate of Companies in Latin America
Camila Freitas Sant’Ana
2015-12-01
The objective of this study was to identify the determining factors of the effective tax rate (ETR) of companies in Latin America in the period 2009 to 2013. A descriptive study was conducted through documentary research, with a quantitative approach to the data. The sample comprised 500 companies: 45 from Argentina, 171 from Brazil, 108 from Chile, 38 from Colombia, 71 from Mexico and 67 from Peru. The data were collected from the Thompson Reuters database and analyzed by means of panel data regression using the STATA software, with the effective tax rate (ETR) as the dependent variable and size (TAM), capital intensity (INTCAP), inventory intensity (INTINV), leverage (ALAV) and return on assets (ROA) as independent variables. The results show that company size has a significant positive influence on the ETR of Colombian companies. Capital intensity (INTCAP) and inventory intensity (INTINV) were not significant in determining the ETR in the countries analyzed. Leverage reveals a positive influence on the ETR of Argentine companies, while for Colombian companies this influence was not significant, and it was negative for the other countries. Profitability had a negative influence for Mexican and Peruvian companies, and was not significant for the other countries. This denotes that there are differences in the determinants of the tax burden across Latin American countries, which encourages further studies
Econometric Analysis of Determinants of Real Effective Exchange Rate in Nigeria (1960-2015)
Ibrahim Waheed
2016-06-01
This study investigates the determinants of the real effective exchange rate in Nigeria for the period between 1960 and 2015, using the vector error correction mechanism to separate long-run from short-run fundamentals. The findings from the regression estimates revealed that terms of trade, openness of the economy, net capital inflow and total government expenditure were the major long-run determinants of the real effective exchange rate in the country, while variables such as broad money supply (M2), the nominal effective exchange rate, a structural adjustment program dummy, and June 12 crisis and change-to-civil-rule dummies were the major short-run determinants of the exchange rate in Nigeria between 1960 and 2015. The study concludes by recommending that, since the major terms-of-trade variable (the crude oil price) is outside government control, the effect of shocks due to crude oil price fluctuations can be minimized by shifting away from a mono-product economy and diversifying to increase productive capacity. Also, the change-to-civil-rule dummy used in the study revealed that the system has not been friendly to the country's real effective exchange rate; the system therefore needs to be reviewed and its negative activities rooted out to ensure appreciation of Nigeria's currency. Guided openness is also suggested to avert the danger that unguided trade liberalization may bring into the country.
Johanna I Westbrook
2012-01-01
BACKGROUND: Considerable investments are being made in commercial electronic prescribing (e-prescribing) systems in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error. METHODS AND RESULTS: We conducted a before-and-after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. We calculated prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, with those ≥3 categorised as serious) by hospital and study period; and rates and categories of postintervention "system-related" errors (where system functionality or design contributed to the error). Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (reductions of 66.1% [95% CI 53.9%-78.3%], 57.5% [33.8%-81.2%], and 60.5% [48.5%-72.4%], respectively). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23-7.28) to 2.12 (95% CI 1.71-2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30-3.93) to 1.46 (95% CI 1.20-1.73; p<0.0001). This
Assessment of error rates in acoustic monitoring with the R package monitoR
Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese
2016-01-01
Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR's automated detection process uses a 'score cutoff', which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cutoffs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were ... for song event detection.
Geng, Longwu; Jiang, Haifeng; Tong, Guangxiang; Xu, Wei
2016-05-01
Knowledge of oxygen consumption rates and asphyxiation points in fish is important to determine appropriate stocking and water quality management in aquaculture. The oxygen consumption rate and asphyxiation point in Chanodichthys mongolicus were detected under laboratory conditions using an improved respirometer chamber. The results revealed that more accurate estimates can be obtained by adjusting the volume of the respirometer chamber, which may avoid system errors caused by either repeatedly adjusting fish density or selecting different equipment specifications. The oxygen consumption rate and asphyxiation point of C. mongolicus increased with increasing water temperature and decreasing fish size. Changes in the C. mongolicus oxygen consumption rate were divided into three stages at water temperatures of 11-33°C: (1) a low temperature oxygen consumption rate stage when water temperature was 11-19°C, (2) the optimum temperature oxygen consumption rate stage when water temperature was 19-23°C, and (3) a high temperature oxygen consumption rate stage when water temperature was > 27°C. The temperature quotients (Q10) obtained suggested that C. mongolicus preferred a temperature range of 19-23°C. At 19°C, C. mongolicus exhibited higher oxygen consumption rates during the day when the maximum values were observed at 10:00 and 14:00 than at night when the minimum occurred at 02:00.
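The temperature quotient Q10 mentioned above expresses the factor by which a metabolic rate changes over a 10°C interval. A minimal sketch (the sample rates in the test are invented for illustration, not measurements from the study):

```python
def q10(rate1, temp1, rate2, temp2):
    """Temperature quotient: factor by which the rate changes per 10 deg C,
    from rates measured at two temperatures."""
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))
```

A Q10 near 1 over a temperature range indicates the rate is insensitive there, one way a preferred temperature range such as 19-23°C can be inferred.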
Lya Aklimawati
2013-12-01
Full Text Available High volatility in cocoa prices is a consequence of the imbalance between demand and supply in the commodity market. World economic expectations and market liberalization lead to instability in cocoa prices in international commerce. Prices moving erratically affect the returns of market players, particularly producers. The aims of this research were (1) to estimate an empirical cocoa price model responsive to market dynamics and (2) to analyze the short-term and long-term effects of price-determinant variables on cocoa prices. The research analyzed annual data from 1980 to 2011, based on secondary data. An error correction mechanism (ECM) approach was used to estimate the econometric model of cocoa prices. The estimation results indicated that cocoa prices were significantly affected by the IDR-USD exchange rate, world gross domestic product, world inflation, world cocoa production, world cocoa consumption, world cocoa stock, and Robusta prices, at significance levels varying from 1% to 10%. All of these variables have a long-run equilibrium relationship. In the long run, world gross domestic product, world cocoa consumption, and world cocoa stock were elastic (E > 1), while the other variables were inelastic (E < 1). The variables affecting cocoa prices in short-run equilibrium were the IDR-USD exchange rate, world gross domestic product, world inflation, world cocoa consumption, and world cocoa stock. The analysis showed that world gross domestic product, world cocoa consumption, and world cocoa stock were also elastic (E > 1) with respect to cocoa prices in the short term, whereas the response of cocoa prices to changes in the IDR-USD exchange rate and world inflation was inelastic. Key words: Price
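The ECM approach referred to above can be sketched with the classic two-step Engle-Granger procedure: estimate the long-run (cointegrating) regression, then regress the differenced series on the lagged disequilibrium term. This is a minimal sketch on synthetic data, not the paper's model or variables.

```python
import numpy as np

# Two-step Engle-Granger error correction model (ECM) on synthetic data.
rng = np.random.default_rng(0)
n = 200
x = np.cumsum(rng.normal(size=n))                    # I(1) driver (e.g. a log price index)
y = 2.0 + 0.8 * x + rng.normal(scale=0.5, size=n)    # series cointegrated with x

# Step 1: long-run regression y = a + b*x; residuals are the disequilibrium term.
X = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
ect = y - (a + b * x)                                # error correction term

# Step 2: short-run dynamics  dy_t = c + g*dx_t + lam*ect_{t-1} + e_t.
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(n - 1), dx, ect[:-1]])
c, g, lam = np.linalg.lstsq(Z, dy, rcond=None)[0]
print(round(b, 2), lam < 0)   # long-run coefficient near 0.8; lam < 0 pulls y back to equilibrium
```

A significantly negative adjustment coefficient (`lam`) is what justifies interpreting the long-run regression as an equilibrium relationship, as the abstract does.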
Kopacka, I; Hofrichter, J; Fuchs, K
2013-05-01
Sampling strategies to substantiate freedom from disease are important when it comes to the trade of animals and animal products. When considering imperfect tests and finite populations, sample size calculation can, however, be a challenging task. The generalized hypergeometric formula developed by Cameron and Baldock (1998a) offers a framework that can elegantly be extended to multi-stage sampling strategies, which are widely used to account for disease clustering at herd-level. The achieved alpha-error of such surveys, however, typically depends on the realization of the sample and can differ from the pre-calculated value. In this paper, we introduce a new formula to evaluate the exact alpha-error induced by a specific sample. We further give a numerically viable approximation formula and analyze its properties using a data example of Brucella melitensis in the Austrian sheep population.
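For the single-stage case, the alpha-error of a freedom-from-disease survey is the probability that every truly diseased animal in the sample tests negative. A minimal sketch of that exact calculation with the hypergeometric distribution and an imperfect test (illustrative parameters, not the paper's multi-stage formula):

```python
from math import comb

# Exact alpha-error for a single-stage survey: N animals, D truly diseased,
# sample of n, test sensitivity se. Sums over the possible number of diseased
# animals in the sample, each testing falsely negative with prob (1 - se).

def alpha_error(N, D, n, se):
    total = comb(N, n)
    return sum(comb(D, k) * comb(N - D, n - k) / total * (1.0 - se) ** k
               for k in range(min(D, n) + 1))

# Hypothetical herd: 2% design prevalence in 500 animals, sample 150, perfect test.
print(alpha_error(500, 10, 150, 1.0))   # roughly 0.03
```

As the abstract notes, the achieved alpha depends on the realized sample; in a multi-stage design the analogous sum runs over the sampled herds as well.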
Zheng, Quan; Han, Zhigang; Chen, Lei
2016-09-01
The spectroscopic phase shifting method was proposed to determine the misalignment error of a compound zero-order waveplate. The waveplate, which is composed of two separate multi-order quartz waveplates, was measured by a polarizer-waveplate-analyser setup with a spectrometer as the detector. The theoretical relationship between the misalignment error and the azimuth of the polarized light that emerged from the waveplate was studied by comparing two forms of the Jones matrix of the waveplate. Four spectra were obtained to determine the wavelength-dependent azimuth using a phase shifting algorithm when the waveplate was rotated to four detection angles. The misalignment error was ultimately solved from the wavelength-dependent azimuth by the Levenberg-Marquardt method. Experiments were conducted at six misalignment angles. The measured results of the misalignment angle agree well with their nominal values, indicating that the spectroscopic phase shifting method can be a reliable way to measure the misalignment error of a compound zero-order waveplate.
Net Assimilation Rate Determines the Growth Rates of 14 Species of Subtropical Forest Trees.
Xuefei Li
Full Text Available Growth rates are of fundamental importance for plants, as individual size affects myriad ecological processes. We determined the factors that generate variation in RGR among 14 species of trees and shrubs that are abundant in subtropical Chinese forests. We grew seedlings for two years at four light levels in a shade-house experiment. We monitored the growth of every juvenile plant every two weeks. After one and two years, we destructively harvested individuals and measured their functional traits and gas-exchange rates. After calculating individual biomass trajectories, we estimated relative growth rates using nonlinear growth functions. We decomposed the variance in log(RGR) to evaluate the relationships of RGR with its components: specific leaf area (SLA), net assimilation rate (NAR) and leaf mass ratio (LMR). We found that variation in NAR was the primary determinant of variation in RGR at all light levels, whereas SLA and LMR made smaller contributions. Furthermore, NAR was strongly and positively associated with area-based photosynthetic rate and leaf nitrogen content. Photosynthetic rate and leaf nitrogen concentration can, therefore, be good predictors of growth in woody species.
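The variance decomposition described above rests on the identity RGR = NAR × SLA × LMR, so var(log RGR) splits into variance and covariance terms of the log components. A sketch with synthetic species-level values (the component magnitudes are made up, not the study's data):

```python
import numpy as np

# Decompose var(log RGR) into contributions from log NAR, log SLA, log LMR.
rng = np.random.default_rng(1)
nar = rng.lognormal(mean=0.0, sigma=0.4, size=14)    # net assimilation rate
sla = rng.lognormal(mean=0.0, sigma=0.15, size=14)   # specific leaf area
lmr = rng.lognormal(mean=0.0, sigma=0.1, size=14)    # leaf mass ratio
log_rgr = np.log(nar) + np.log(sla) + np.log(lmr)    # log RGR is the sum of logs

cov = np.cov([np.log(nar), np.log(sla), np.log(lmr)])
contrib = cov.sum(axis=1) / cov.sum()   # each component's share (variance + covariances)
print(contrib.round(2), np.isclose(cov.sum(), log_rgr.var(ddof=1)))
```

The row sums of the covariance matrix allocate both the component's own variance and its covariances with the other components, which is the standard way such RGR decompositions are reported.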
Error correction and diversity analysis of population mixtures determined by NGS.
Wood, Graham R; Burroughs, Nigel J; Evans, David J; Ryabov, Eugene V
2014-01-01
The impetus for this work was the need to analyse nucleotide diversity in a viral mix taken from honeybees. The paper has two findings. First, a method for correction of next generation sequencing error in the distribution of nucleotides at a site is developed. Second, a package of methods for assessment of nucleotide diversity is assembled. The error correction method is statistically based and works at the level of the nucleotide distribution rather than the level of individual nucleotides. The method relies on an error model and a sample of known viral genotypes that is used for model calibration. A compendium of existing and new diversity analysis tools is also presented, allowing hypotheses about diversity and mean diversity to be tested and associated confidence intervals to be calculated. The methods are illustrated using honeybee viral samples. Software in both Excel and Matlab and a guide are available at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/, the Warwick University Systems Biology Centre software download site.
Topographic determination of corneal asphericity as a function of age, gender, and refractive error.
Yazdani, Negareh; Shahkarami, Leila; OstadiMoghaddam, Hadi; Ehsaei, Asieh
2017-08-01
The aim of the study is to evaluate corneal asphericity in three diameters of 5, 6, and 7 mm, and to assess the effect of age, refractive error, and gender on asphericity. The study included 500 healthy subjects with a mean ± SD age of 29.51 ± 11.53 years. All analyses were based on the right eyes of the patients. Topographic data were analyzed using the Oculus Keratograph 4. Mean ± SD corneal asphericity values of the study population at 5, 6, and 7 mm diameters were -0.21 ± 0.11, -0.24 ± 0.10, and -0.27 ± 0.11, respectively. The magnitude of anterior corneal asphericity tends to increase with measurement diameter, and asphericity showed no significant correlation with age, gender, or refractive error.
Analysis of Errors of Deep Space X-Band Range-Rate Measurement
樊敏; 王宏; 李海涛; 赵华
2013-01-01
X-band is the primary frequency band used by deep space TT&C (tracking, telemetry and command) systems. X-band range-rate measurement is more accurate than S-band, as validated in the X-band deep space TT&C experiments of the Chang'E-2 spacecraft, where the range-rate measurement precision was about 1 mm/s. For high-precision X-band range-rate measurement, the theoretical error introduced by the approximate Doppler calculation formula currently in use is analyzed; this error can reach 1 cm/s during the translunar and lunar-orbiting phases. Furthermore, the measurement residual error is analyzed against the precise post-fit ephemerides from the Chang'E-2 X-band TT&C experiment. The results show that, compared with the exact formula, the residuals of range-rate data computed with the approximate formula increase by 1 mm/s, which is comparable to the X-band measurement precision itself. Therefore, the approximate Doppler calculation formula is no longer applicable at X-band, and the exact formula should be used in future lunar and deep space exploration projects.
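The size of the approximation error can be illustrated with a simplified non-relativistic two-way Doppler model: the first-order formula recovers the range rate as v ≈ cΔf/(2f_t), while inverting the shift exactly gives v = cΔf/(2f_t − Δf). The carrier frequency and range rate below are assumed round numbers, not values from the paper.

```python
# Approximate vs exact range-rate recovery from a two-way Doppler shift
# (non-relativistic sketch; X-band uplink near 7.2 GHz assumed).
C = 299_792_458.0            # speed of light, m/s
F_T = 7.2e9                  # transmitted frequency, Hz (assumed)

def doppler_shift(v):
    """Two-way Doppler shift for range rate v (m/s, receding positive)."""
    return F_T * 2.0 * v / (C + v)

v_true = 1500.0              # ~translunar range-rate magnitude, m/s
df = doppler_shift(v_true)
v_approx = C * df / (2.0 * F_T)        # first-order approximation
v_exact = C * df / (2.0 * F_T - df)    # exact inversion of the shift
print((v_approx - v_true) * 1000, "mm/s approximation error")
print(v_exact - v_true)                # essentially zero
```

The approximation error scales as v^2/c, which at lunar-transfer range rates is of millimetre-to-centimetre per second order, consistent with the magnitudes quoted above.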
A rating system for determination of hazardous wastes.
Talinli, Ilhan; Yamantürk, Rana; Aydin, Egemen; Başakçilardan-Kabakçi, Sibel
2005-11-11
Although hazardous waste lists and their classification methodologies are nearly the same in most of the countries, there are some gaps and subjectiveness in determining the waste as hazardous waste. A rating system for the determination of waste as a hazardous waste is presented in this study which aims to overcome the problems resulted from the existing methodologies. Overall rating value (ORV) calculates and quantifies the waste as regular, non-regular or hazardous waste in an "hourglass" scale. "ORV" as a cumulative-linear formulation in proposed model consists of components such as ecological effects of the waste (Ee) in terms of four main hazard criteria: ignitability, reactivity, corrosivity and toxicity; combined potential risk (CPR) including carcinogenic effect, toxic, infectious and persistence characteristics; existing lists and their methodology (L) and decision factor (D) to separate regular and non-regular waste. Physical form (f) and quantity (Q) of the waste are considered as factors of these components. Seventeen waste samples from different sources are evaluated to demonstrate the simulation of the proposed model by using "hourglass" scale. The major benefit of the presented rating system is to ease the works of decision makers in managing the wastes.
Determining drug release rates of hydrophobic compounds from nanocarriers.
D'Addio, Suzanne M; Bukari, Abdallah A; Dawoud, Mohammed; Bunjes, Heike; Rinaldi, Carlos; Prud'homme, Robert K
2016-07-28
Obtaining meaningful drug release profiles for drug formulations is essential prior to in vivo testing and for ensuring consistent quality. The release kinetics of hydrophobic drugs from nanocarriers (NCs) are not well understood because the standard protocols for maintaining sink conditions and sampling are not valid, owing to mass transfer and solubility limitations. In this work, a new in vitro assay protocol based on 'lipid sinks' and magnetic separation produces release conditions that mimic the concentrations of lipid membranes and lipoproteins in vivo, facilitates separation, and thus allows determination of intrinsic release rates of drugs from NCs. The assay protocol is validated by (i) determining the magnetic separation efficiency, (ii) demonstrating that sink condition requirements are met, and (iii) accounting for drug by completing a mass balance. NCs of itraconazole and cyclosporine A (CsA) were prepared and the drug release profiles were determined. This release protocol has been used to compare the drug release from a polymer-stabilized NC of CsA to a solid drug nanoparticle of CsA alone. These data have led to the finding that stabilizing block copolymer layers have a retarding effect on drug release from NCs, reducing the rate of CsA release fourfold compared with the nanoparticle without a polymer coating. This article is part of the themed issue 'Soft interfacial materials: from fundamentals to formulation'.
Variation in human recombination rates and its genetic determinants.
Adi Fledel-Alon
Full Text Available BACKGROUND: Despite the fundamental role of crossing-over in the pairing and segregation of chromosomes during human meiosis, the rates and placements of events vary markedly among individuals. Characterizing this variation and identifying its determinants are essential steps in our understanding of the human recombination process and its evolution. STUDY DESIGN/RESULTS: Using three large sets of European-American pedigrees, we examined variation in five recombination phenotypes that capture distinct aspects of crossing-over patterns. We found that the mean recombination rate in males and females and the historical hotspot usage are significantly heritable and are uncorrelated with one another. We then conducted a genome-wide association study in order to identify loci that influence them. We replicated associations of RNF212 with the mean rate in males and in females as well as the association of Inversion 17q21.31 with the female mean rate. We also replicated the association of PRDM9 with historical hotspot usage, finding that it explains most of the genetic variance in this phenotype. In addition, we identified a set of new candidate regions for further validation. SIGNIFICANCE: These findings suggest that variation at broad and fine scales is largely separable and that, beyond three known loci, there is no evidence for common variation with large effects on recombination phenotypes.
Determination of human muscle protein fractional synthesis rate
Bornø, Andreas; Hulston, Carl J; van Hall, Gerrit
2014-01-01
In the present study, different MS methods for the determination of human muscle protein fractional synthesis rate (FSR) using [ring-(13)C6]phenylalanine as a tracer were evaluated. Because the turnover rate of human skeletal muscle is slow, only minute quantities of the stable isotopically […]-MS/MS) and GC-tandem MS (GC-MS/MS) have made these techniques an option for human muscle FSR measurements. Human muscle biopsies were freeze dried, cleaned, and hydrolyzed, and the amino acids derivatized using either N-acetyl-n-propyl, phenylisothiocyanate, or N […] .89 ± 0.01, P […] muscle FSR, (2) LC-MS/MS comes quite close and is a good alternative when tissue quantities are too small for GC-C-IRMS, and (3) if GC-MS/MS is to be used, then the HFBA derivative should be used instead […]
Donald W. Zimmerman
2004-01-01
Full Text Available It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
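The inflation described above is easy to reproduce by Monte Carlo: draw equal-sized samples from skewed populations with equal means but unequal variances, apply the pooled-variance Student t test, and count rejections. This is a rough illustration of the effect, not the study's exact design.

```python
import random, statistics, math

# Type I error of the pooled-variance t test: equal n, equal means,
# skewed (exponential) populations with a 1:4 standard-deviation ratio.
random.seed(42)

def student_t(x, y):
    """Two-sample pooled-variance (Student) t statistic."""
    n, m = len(x), len(y)
    sp2 = ((n - 1) * statistics.variance(x) + (m - 1) * statistics.variance(y)) / (n + m - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1 / n + 1 / m))

n, reps, crit = 20, 4000, 2.024          # two-sided 5% critical value, 38 df
rejections = 0
for _ in range(reps):
    x = [random.expovariate(1.0) for _ in range(n)]              # mean 1, sd 1
    y = [4.0 * random.expovariate(1.0) - 3.0 for _ in range(n)]  # mean 1, sd 4
    if abs(student_t(x, y)) > crit:
        rejections += 1
print(rejections / reps)   # noticeably above the nominal 0.05
```

The null hypothesis of equal means is true in every replicate, so the printed rejection rate is the empirical Type I error; its excess over 0.05 is the inflation the abstract reports.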
A low error reconstruction method for confocal holography to determine 3-dimensional properties
Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548, 800 Finnerty Road, Victoria, BC (Canada); Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548, 800 Finnerty Road, Victoria, BC (Canada)
2012-06-15
A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position; a collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope, which restricted the scan angle to the cone angle of the probe beam. Limited viewing-angle scanning from a single viewpoint window produced a challenge for tomographic 3D reconstruction: the reconstruction matrices were either singular or ill-conditioned, making reconstruction impossible or subject to significant error. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as 'wily'. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the probe beam.
E. V. Titovich
2016-01-01
Full Text Available To ensure the radiation protection of oncology patients, the functional characteristics of medical linear accelerators that affect the accuracy of dose delivery must be kept constant. For this purpose, quality control procedures are carried out, including calibration of the radiation output of the linac; the error in determining the dose reference value during this procedure must not exceed 2%. The aim was to develop a methodology for determining this error as a function of the characteristics of the radiation beam. Dosimetric measurements of the dose distributions of a Trilogy S/N 3567 linac were carried out, from which dose errors were obtained as functions of the dose rate, the accuracy of beam quality and output factor determination, the symmetry and uniformity of the radiation field, and the angular dependence of the linac radiation output. It was found that the error in output factor determination has the greatest impact (up to 5.26% for both photon energies). Dose errors caused by a changing dose rate during treatment differed between the two photon energies, reaching 1.6% for 6 MeV and 1.4% for 18 MeV. Dose errors caused by inaccuracies in beam quality determination also differed, reaching 1.1% for 18 MeV and -0.3% for 6 MeV. Errors caused by the remaining characteristics do not exceed 1%. Thus, the results of periodic quality control of the linear accelerator can be expressed in terms of dose and used for a comprehensive assessment of the accelerator's suitability for clinical irradiation of oncology patients, on the basis of the radiation output calibration.
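If the individual error components reported above are treated as independent, a combined uncertainty can be estimated by adding them in quadrature. That independence is an assumption of this sketch (the abstract reports the components separately); the numbers below are the quoted 6 MeV figures.

```python
from math import sqrt, fsum

# Quadrature combination of independent dose-error components (6 MeV beam).
components = {
    "output factors": 5.26,   # %
    "dose rate": 1.6,         # %
    "beam quality": 0.3,      # % (magnitude of the -0.3% figure)
    "other": 1.0,             # % (remaining characteristics, bounded at 1%)
}
combined = sqrt(fsum(v * v for v in components.values()))
print(round(combined, 2), "% combined uncertainty")   # ~5.6 %
```

The quadrature sum is dominated by the largest term, which matches the abstract's observation that output factor determination has the greatest impact.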
Smith, D. R.; Leslie, F. W.
1984-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
Cox, Christina B.; Coney, Thom A.
1999-01-01
The Advanced Communications Technology Satellite (ACTS) communications system operates at Ka band. ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of an analysis characterizing the improvement in VSAT performance provided by this protocol. The metric for performance is VSAT bit error rate (BER) availability. The acceptable availability defined by communication system design specifications is 99.5% for a BER of 5E-7 or better. VSAT BER availabilities with and without rain fade compensation are presented. A comparison shows the improvement in BER availability realized with rain fade compensation. Results are presented for an eight-month period and for 24 months spread over a three-year period. The two time periods represent two different configurations of the fade compensation protocol. Index Terms: adaptive coding, attenuation, propagation, rain, satellite communication, satellites.
Flatt, Andrew A; Esco, Michael R
2013-12-18
The purpose of this investigation was to cross-validate the ithlete™ heart rate variability smart phone application with an electrocardiograph for determining ultra-short-term root mean square of successive R-R intervals. The root mean square of successive R-R intervals was simultaneously determined via electrocardiograph and ithlete™ at rest in twenty five healthy participants. There were no significant differences between the electrocardiograph and ithlete™ derived root mean square of successive R-R interval values (p > 0.05) and the correlation was near perfect (r = 0.99, p < 0.001). In addition, the ithlete™ revealed a Standard Error of the Estimate of 1.47 and Bland Altman plot showed that the limits of agreement ranged from 2.57 below to 2.63 above the constant error of -0.03. In conclusion, the ithlete™ appeared to provide a suitably accurate measure of root mean square of successive R-R intervals when compared to the electrocardiograph measures obtained in the laboratory within the current sample of healthy adult participants. The current study lays groundwork for future research determining the efficacy of ithlete™ for reflecting athletic training status over a chronic conditioning period.
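The measure being cross-validated above, the root mean square of successive R-R interval differences (commonly abbreviated RMSSD), is straightforward to compute from a sequence of R-R intervals. The interval values here are made up for illustration.

```python
from math import sqrt

# RMSSD: root mean square of successive R-R interval differences.

def rmssd(rr_ms):
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 845, 790, 804, 830, 818]   # hypothetical R-R intervals, ms
print(round(rmssd(rr), 1))   # 32.0
```

Ultra-short-term protocols like the one studied apply this formula to a brief (e.g. one-minute) recording rather than the conventional five minutes.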
Santillan, Arturo Orozco; Jacobsen, Finn
2010-01-01
Sound intensity measurements make it possible to determine the sound power of a source in situ, even in the presence of other sources. However, the very fact that intensity-based sound power measurements can take place under widely different conditions makes it extremely difficult to evaluate the resulting measurement uncertainty. The purpose of this paper is to analyze the effect of the most common sources of error in sound power determination based on sound intensity measurements, in particular the influence of the scanning procedure used in approximating the surface integral of the intensity […]
Determination of VOC emission rates and compositions for offset printing.
Wadden, R A; Scheff, P A; Franke, J E; Conroy, L M; Keil, C B
1995-07-01
The release rates of volatile organic compounds (VOC) as fugitive emissions from offset printing are difficult to quantify, and the compositions are usually not known. Tests were conducted at three offset printing shops that varied in size and by process. In each case, the building shell served as the test "enclosure," and air flow and concentration measurements were made at each air entry and exit point. Emission rates and VOC composition were determined during production for (1) a small shop containing three sheetfed presses and two spirit duplicators (36,700 sheets, 47,240 envelopes and letterheads), (2) a medium-size industrial in-house shop with two webfed and three sheetfed presses, and one spirit duplicator (315,130 total sheets), and (3) one print room of a large commercial concern containing three webfed, heatset operations (1.16 × 10^6 ft) served by catalytic air pollution control devices. Each test consisted of 12 one-hour periods over two days. Air samples were collected simultaneously during each period at 7-14 specified locations within each space. The samples were analyzed by gas chromatography (GC) for total VOC and for 13-19 individual organics. Samples of solvents used at each shop were also analyzed by GC. Average VOC emission rates were 4.7-6.1 kg/day for the small sheetfed printing shop, 0.4-0.9 kg/day for the industrial shop, and 79-82 kg/day for the commercial print room. Emission compositions were similar and included benzene, toluene, xylenes, ethylbenzene, and hexane. Comparison of the emission rates with mass balance estimates based on solvent usage and composition were quite consistent. (ABSTRACT TRUNCATED AT 250 WORDS)
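The building-shell "enclosure" method described above reduces to a steady-state mass balance: the fugitive emission rate is the VOC mass flow leaving in the exhaust air minus the mass flow entering with the supply air. The flows and concentrations below are illustrative, not the paper's measurements.

```python
# Steady-state VOC mass balance over a building-shell enclosure.

def emission_rate(flows_out, flows_in):
    """Each list holds (airflow m3/h, VOC concentration mg/m3) pairs.
    Returns the fugitive emission rate in kg/day."""
    mg_per_h = (sum(q * c for q, c in flows_out)
                - sum(q * c for q, c in flows_in))
    return mg_per_h * 24 / 1e6           # mg/h -> kg/day

outs = [(5000, 2.1), (3000, 1.8)]        # hypothetical exhaust points
ins = [(8000, 0.3)]                      # hypothetical make-up air
print(round(emission_rate(outs, ins), 3))   # 0.324 kg/day
```

Measuring flow and concentration at every entry and exit point, as the study did, is what closes this balance without needing to meter solvent consumption directly.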
Hindasageri, V; Vedula, R P; Prabhu, S V
2013-02-01
Temperature measurement by thermocouples is prone to errors due to conduction and radiation losses and therefore has to be corrected for precise measurement. The temperature dependent emissivity of the thermocouple wires is measured by the use of thermal infrared camera. The measured emissivities are found to be 20%-40% lower than the theoretical values predicted from theory of electromagnetism. A transient technique is employed for finding the heat transfer coefficients for the lead wire and the bead of the thermocouple. This method does not require the data of thermal properties and velocity of the burnt gases. The heat transfer coefficients obtained from the present method have an average deviation of 20% from the available heat transfer correlations in literature for non-reacting convective flow over cylinders and spheres. The parametric study of thermocouple error using the numerical code confirmed the existence of a minimum wire length beyond which the conduction loss is a constant minimal. Temperature of premixed methane-air flames stabilised on 16 mm diameter tube burner is measured by three B-type thermocouples of wire diameters: 0.15 mm, 0.30 mm, and 0.60 mm. The measurements are made at three distances from the burner tip (thermocouple tip to burner tip/burner diameter = 2, 4, and 6) at an equivalence ratio of 1 for the tube Reynolds number varying from 1000 to 2200. These measured flame temperatures are corrected by the present numerical procedure, the multi-element method, and the extrapolation method. The flame temperatures estimated by the two-element method and extrapolation method deviate from numerical results within 2.5% and 4%, respectively.
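The radiation-loss part of the thermocouple correction described above follows from a steady-state energy balance on the bead: convective gain from the gas equals radiative loss to the cooler surroundings, so the gas is hotter than the bead reads. A minimal sketch with illustrative property values (not the paper's data), neglecting conduction along the wires:

```python
# Radiation-loss correction for a bead thermocouple in hot gas.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def gas_temperature(t_bead, t_wall, emissivity, h):
    """Corrected gas temperature (K): T_gas = T_bead + eps*sigma*(T_bead^4 - T_wall^4)/h."""
    return t_bead + emissivity * SIGMA * (t_bead**4 - t_wall**4) / h

t_tc = 1900.0      # bead reading, K (hypothetical flame measurement)
print(round(gas_temperature(t_tc, 600.0, 0.2, 350.0), 1))   # corrected T_gas, K
```

This is why both the measured (lower-than-theoretical) wire emissivity and the convective heat transfer coefficient matter: an overestimated emissivity or underestimated h inflates the correction.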
Claude D'Amours
2011-01-01
Full Text Available We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10 log10(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works where the BER is determined by simulation methods, and provides insight into why the different techniques provide improvement over conventional MIMO-CDMA systems.
Takeuchi, Naoki; Suzuki, Hideo; Yoshikawa, Nobuyuki
2017-05-01
Adiabatic quantum-flux-parametron (AQFP) is an energy-efficient superconductor logic. The advantage of AQFP is that the switching energy can be reduced by lowering operation frequencies or by increasing the quality factors of Josephson junctions, while keeping the energy barrier height much larger than thermal energy. In other words, both low energy dissipation and low bit error rates (BERs) can be achieved. In this paper, we report the first measurement results of the low BERs of AQFP logic. We used a superconductor voltage driver with a stack of dc superconducting-quantum-interference-devices to amplify the logic signals of AQFP gates into mV-range voltage signals for the BER measurement. Our measurement results showed 3.3 dB and 2.6 dB operation margins, in which BERs were less than 10-20, for 1 Gbps and 2 Gbps data rates, respectively. While the observed BERs were very low, the estimated switching energy for the 1-Gbps operation was only 2 zJ or 30kBT, where kB is the Boltzmann's constant and T is the temperature. Unlike conventional non-adiabatic logic, BERs are not directly associated with switching energy in AQFP.
Li, Jia Wen; Chen, Xi Mei; Pun, Sio Hang; Mak, Peng Un; Gao, Yue Ming; Vai, Mang I; Du, Min
2013-01-01
Bit error rate (BER), which indicates the reliability of a communication channel, is one of the most important figures of merit in all kinds of communication systems, including intra-body communication (IBC). In order to learn more about the IBC channel, this paper presents a new method of BER estimation for galvanic-type IBC using experimental eye-diagram and jitter characteristics. To lay the foundation for the methodology, the fundamental relationships between eye diagram, jitter and BER are first reviewed. Then experiments based on human lower-arm IBC are carried out using a quadrature phase shift keying (QPSK) modulation scheme and a 500 kHz carrier frequency. In the IBC experiments, the symbol rate ranges from 10 ksps to 100 ksps, with two transmitted power settings, 0 dBm and -5 dBm. Finally, the BER results were obtained by calculation from the experimental data through the relationships among eye diagram, jitter and BER. These results are then compared with theoretical values and show good agreement, especially when the SNR is between 6 dB and 11 dB. Additionally, these results demonstrate that modelling the noise of the galvanic-type IBC channel as additive white Gaussian noise (AWGN), as assumed in previous studies, is applicable.
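A classic link between an eye diagram and BER, offered here as background to the method above rather than as the paper's exact procedure, is the Q-factor relation for binary signalling with Gaussian noise: Q = (mu1 − mu0)/(sigma1 + sigma0) from the eye levels and noise spreads, then BER = 0.5·erfc(Q/√2).

```python
from math import erfc, sqrt

# Q-factor BER estimate from eye-diagram statistics (Gaussian-noise model).

def ber_from_eye(mu0, mu1, sigma0, sigma1):
    q = (mu1 - mu0) / (sigma1 + sigma0)   # eye opening over combined noise
    return 0.5 * erfc(q / sqrt(2.0))

# Levels and noise read off a hypothetical measured eye diagram:
print(ber_from_eye(mu0=0.1, mu1=0.9, sigma0=0.06, sigma1=0.07))
```

Jitter enters the same framework by closing the eye horizontally, which effectively reduces the usable Q at the sampling instant.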
James Osuru Mark
2011-01-01
Full Text Available The multicarrier code division multiple access (MC-CDMA) system has received considerable attention from researchers owing to its great potential for achieving high data rate transmission in wireless communications. Due to the detrimental effects of multipath fading, the performance of the system degrades. Similarly, non-orthogonality of the spreading codes can cause interference. This paper addresses the performance of a multicarrier code division multiple access system under the influence of a frequency-selective generalized η-µ fading channel and multiple access interference caused by other active users to the desired one. We apply the Gaussian approximation technique to analyse the performance of the system. The average bit error rate is derived and expressed in terms of Gauss hypergeometric functions. Maximal ratio combining diversity is utilized to alleviate the deleterious effect of multipath fading. We observed that the system performance improves when the parameter η increases or decreases under format 1 or format 2 conditions, respectively.
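The Gaussian-approximation idea can be sketched with the classic standard-Gaussian-approximation BER for a BPSK DS-CDMA link, where multiple-access interference from the other users is folded into the noise; the paper's η-µ fading expressions are far more elaborate, so the numbers here are only illustrative:

```python
import math

def q_func(x):
    # Gaussian Q-function via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_gaussian_approx(snr_linear, n_users, proc_gain):
    """Illustrative standard Gaussian approximation for a BPSK CDMA-style
    link: multiple-access interference from (n_users - 1) users is modelled
    as extra Gaussian noise of variance (n_users - 1) / (3 * proc_gain).
    This is the classic SGA form, not the paper's eta-mu fading result."""
    mai_var = (n_users - 1) / (3.0 * proc_gain)
    sinr = 1.0 / (mai_var + 1.0 / (2.0 * snr_linear))
    return q_func(math.sqrt(sinr))

# single user vs. 10 users at Eb/N0 = 10 (linear), processing gain 31
print(f"{ber_gaussian_approx(10, 1, 31):.1e}")
print(f"{ber_gaussian_approx(10, 10, 31):.1e}")
```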
Borot de Battisti, M; Denis de Senneville, B; Maenhout, M; Hautvast, G; Binnekamp, D; Lagendijk, J J W; van Vulpen, M; Moerland, M A
2016-03-01
The development of magnetic resonance (MR) guided high dose rate (HDR) brachytherapy for prostate cancer has gained increasing interest for delivering a high tumor dose safely in a single fraction. To support needle placement in the limited workspace inside the closed-bore MRI, a single-needle MR-compatible robot is currently under development at the University Medical Center Utrecht (UMCU). This robotic device taps the needle into the prostate in a divergent pattern from a single rotation point. With this setup, the irradiation dose is delivered by successive insertions of the needle. Although robot-assisted needle placement is expected to be more accurate than manual template-guided insertion, needle positioning errors may occur and are likely to modify the pre-planned dose distribution. In this paper, we propose a dose plan adaptation strategy for HDR prostate brachytherapy with feedback on the needle position: a dose plan is made at the beginning of the interventional procedure and updated after each needle insertion in order to compensate for possible needle positioning errors. The introduced procedure can be used with the single-needle MR-compatible robot developed at the UMCU. The proposed feedback strategy was tested by simulating complete HDR procedures with and without feedback on eight patients with different numbers of needle insertions (varying from 4 to 12). In the cases tested, the number of clinically acceptable plans obtained at the end of the procedure was larger with feedback than without. Furthermore, the computation time of the feedback between insertions was below 100 s, which makes the approach suitable for intra-operative use.
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness-of-fit measures to the original data set and in a cross-validation test. The results indicate the plausibility of using objective image metrics to predict expert performance.
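The multiple-regression approach described above can be sketched on synthetic data; the predictor names and coefficients below are invented placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the kinds of predictors described in the abstract:
# image contrast, fingerprint area, and a global-feature clarity score.
n = 200
X = rng.normal(size=(n, 3))
# Hypothetical ground truth: difficulty rises as contrast and area fall.
difficulty = 1.5 - 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] \
             + rng.normal(scale=0.3, size=n)

# Ordinary least squares via np.linalg.lstsq (intercept in the first column)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, difficulty, rcond=None)

pred = A @ coef
ss_res = np.sum((difficulty - pred) ** 2)
ss_tot = np.sum((difficulty - difficulty.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")  # fit quality on the synthetic data
```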
Determination of Rate of Hearing Changes After Spinal Anesthesia
F Omidi
2008-07-01
Full Text Available Introduction: Hearing loss after surgery is rarely reported. Reported prevalence rates vary widely, between 3% and 92%. Hearing loss is often subclinical and is not diagnosed without audiometry. The aim of this study was to determine the rate of hearing changes after spinal anesthesia in patients undergoing surgery. Methods: In this descriptive study, forty male patients scheduled for repair of inguinal hernia under spinal anesthesia were selected by a simple sampling method. Before surgery, audiometry was performed on both ears of each patient. Audiometry was repeated by an audiometry specialist on days one, five and fifteen, and two months after surgery. Results: Hearing loss was observed in 13 patients (32.5%). Hearing loss was in the low-frequency range in 12 patients (92%) and in the mid-frequency range in 1 patient (8%). Hearing loss was ipsilateral in 8 patients (61%) and bilateral in 5 patients (39%). Hearing loss resolved spontaneously in 9 patients (69%) by the 5th day and in 2 patients (15.4%) by the 15th day. Conclusion: The results of this study confirmed that hearing loss after spinal anesthesia is not a serious problem and can resolve spontaneously. It also appears that there is a relationship between hearing loss and headache.
Abdel-Latif A. Seoud
2010-01-01
Full Text Available Problem statement: For chemical reactions, determination of the rate constants is both difficult and time consuming. The aim of this research was to develop computer programs for determining the rate constants of the general form of any complex reaction at a given temperature. Such programs can be very helpful in the control of industrial processes as well as in the study of reaction mechanisms. Determination of accurate values of the rate constants helps in establishing the optimum conditions of reactor design, including pressure, temperature and other parameters of the chemical reaction. Approach: From the experimental concentration-time data, initial values of the rate constants were calculated. The experimental data are subject to several types of errors, including temperature variation, impurities in the reactants and human error. Simulation of a second-order consecutive irreversible reaction, the saponification of diethyl ester, is presented as an example of a complex reaction. The rate equations (a system of simultaneous differential equations) of the reaction were solved to obtain the analytical concentration-versus-time profiles. The simulation results were compared with experimental results at each measured point. All deviations between experimental and calculated values were squared and summed to form a new objective function. This function was fed into a minimizer routine that returned the optimal rate constants. Two optimization programs were developed, in FORTRAN and MATLAB, for accurately determining the rate constants of a reaction at a given temperature from experimental data. Results: The two proposed programs proved to be efficient, fast and accurate tools for determining the true rate constants of the reaction with less than 1% error. The MATLAB implementation used embedded subroutines for simultaneously solving the differential equations and minimizing the error function.
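The fitting strategy described (solve the rate equations, square and sum the deviations, minimize) can be sketched in Python with SciPy; a first-order consecutive reaction A → B → C is used here for brevity instead of the paper's second-order saponification, and all rate constants are made-up illustrative values:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Consecutive reaction A -> B -> C (first order in each step, for brevity;
# the paper treats a second-order saponification, but the machinery is the same).
def rhs(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

# Generate synthetic "experimental" concentration-time data with noise.
t_obs = np.linspace(0, 10, 25)
true_k = (0.6, 0.25)
sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0], t_eval=t_obs, args=true_k)
rng = np.random.default_rng(1)
data = sol.y + rng.normal(scale=0.005, size=sol.y.shape)

def sse(k):
    # Sum of squared deviations between model and data -- the minimised
    # error function described in the abstract.
    s = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0], t_eval=t_obs, args=tuple(k))
    return np.sum((s.y - data) ** 2)

fit = minimize(sse, x0=[0.3, 0.3], method="Nelder-Mead")
k1, k2 = fit.x
print(f"k1 = {k1:.3f}, k2 = {k2:.3f}")  # close to the true (0.6, 0.25)
```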
[Determination of Hard Rate of Alfalfa (Medicago sativa L.) Seeds with Near Infrared Spectroscopy].
Wang, Xin-xun; Chen, Ling-ling; Zhang, Yun-wei; Mao, Pei-sheng
2016-03-01
Alfalfa (Medicago sativa L.) is the most commonly grown forage crop in China due to its quality characteristics and high adaptability. However, 20%-80% of alfalfa seeds are hard seeds, which cannot easily be distinguished from non-hard seeds and which cause a loss of seed utilization value and plant production. This experiment used 121 samples of alfalfa seeds collected across different regions, harvest years and varieties. 31 samples were artificially blended to hard-seed rates ranging from 20% to 80% to establish a model for hard seed rate by near-infrared spectroscopy (NIRS) with partial least squares (PLS). The objective of this study was to establish a model and to estimate the efficiency of NIRS for determining the hard seed rate of alfalfa. The results showed that the correlation coefficient (R2cal) of the calibration model was 0.9816, the root mean square error of cross validation (RMSECV) was 5.32, and the ratio of prediction to deviation (RPD) was 3.58. The forecast model presented satisfactory precision. The proposed NIRS method is feasible for identification and classification of hard seeds in alfalfa and provides a theoretical basis for fast, nondestructive detection of hard seed rates.
Müller, Amanda
2015-01-01
This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers make a minimum of 206, 96 and 35 errors per 1000 words, respectively. The following section…
DETERMINANTS OF INTEREST RATE SPREAD IN BANKING INDUSTRY
Arezoo GHASEMI
2016-02-01
Full Text Available This study considers the factors affecting the spread rate and defines a suitable model of the spread rate in the banking industry. The spread rate is the difference between two related interest rates; in banking, it is the difference between the rate paid on liabilities (especially deposits) and the rate earned on assets (especially loans). The interest rate spread has always been one of the most important and significant economic issues in different countries of the world. In this study, factors affecting the spread rate are examined for an Iranian bank over the last 19 months. Variables such as the NPL ratio, the ratio of demand deposits to total deposits, non-interest income, the ratio of interest-earning assets to total assets, the capital adequacy ratio, ROA, inflation and the exchange rate are analyzed against the spread rate, and a model is defined for the bank according to prior studies and the economic conditions of Iran.
Stover, E.; Berger, G.; Wendel, M.; Petter, J.
2015-10-01
A method for non-contact 3D form testing of aspheric surfaces, including determination of decenter and wedge errors and lens thickness, is presented. The principle is based on the absolute measurement capability of multi-wavelength interferometry (MWLI). The approach produces high-density 3D shape information and geometric parameters at high accuracy in short measurement times. The system allows inspection of aspheres without restrictions in terms of spherical departure, as well as of segmented and discontinuous optics. The optics can be polished or ground and made of opaque or transparent materials.
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-02-10
Population divergence impacts the degree of population stratification in genome-wide association studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for single nucleotide polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations, but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
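The inflation of type-I error under uncorrected population stratification is easy to reproduce with a small simulation; the allele frequencies and the phenotype shift below are arbitrary illustrations, not the paper's simulation design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def type1_rate(freq_a, freq_b, n_tests=2000, n_per_pop=250):
    """Illustrative simulation: a null SNP whose allele frequency differs
    between two ancestral populations (a larger gap corresponds to a larger
    FST), and a phenotype whose mean differs between the populations but is
    independent of genotype.  Without any stratification correction the
    association test is confounded by ancestry."""
    hits = 0
    for _ in range(n_tests):
        g_a = rng.binomial(2, freq_a, n_per_pop)
        g_b = rng.binomial(2, freq_b, n_per_pop)
        g = np.concatenate([g_a, g_b])
        # phenotype shifted by population, not by genotype (null SNP)
        y = np.concatenate([rng.normal(0.0, 1, n_per_pop),
                            rng.normal(0.5, 1, n_per_pop)])
        _, p = stats.pearsonr(g, y)
        hits += p < 0.05
    return hits / n_tests

rate_null = type1_rate(0.5, 0.5)   # matched frequencies: ~0.05, as nominal
rate_strat = type1_rate(0.2, 0.6)  # divergent frequencies: well above 0.05
print(rate_null, rate_strat)
```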
ANALYSIS OF INFLUENCE OF EOP PREDICTION ERROR ON AUTONOMOUS ORBIT DETERMINATION
张卫星; 刘万科; 龚晓颖
2011-01-01
In autonomous orbit determination, Earth orientation parameters (EOP) uploaded by the ground station are needed to transform between the Conventional Terrestrial System and the Geocentric Celestial Reference System. However, when the satellite navigation system enters autonomous navigation mode, the ground station cannot upload the latest EOP, and the system can only use long-term EOP predictions. EOP prediction error will affect the ephemeris produced by the autonomous navigation system, and ultimately the positioning accuracy of users. The trends of EOP prediction errors, and their influence on the ephemeris generated by autonomous orbit determination and on user positioning accuracy, are discussed and analyzed. The results show that EOP prediction error has almost no influence on the radial orbit error or the satellite clock error in long-term (110-day) autonomous orbit determination. It mainly influences the in-plane errors (along-track and cross-track) and the user range error (URE), and these errors show a certain periodicity. Moreover, the resulting pseudorange positioning errors appear mainly in the north-south and east-west directions.
Zhu, Jin; Wang, Dayan; Xie, Wanqing
2015-02-20
Diversified wavefront deformation is an inevitable phenomenon in intersatellite optical communication systems, which will decrease system performance. In this paper, we investigate the description of wavefront deformation and its influence on the packet error rate (PER) of digital pulse interval modulation (DPIM). With the wavelet method, the diversified wavefront deformation can be described by wavelet parameters: coefficient, dilation, and shift factors, where the coefficient factor represents the depth, dilation factor represents the area, and shift factor is for location. Based on this, the relationship between PER and wavelet parameters is analyzed from a theoretical viewpoint. Numerical results illustrate the validity of theoretical analysis: PER increases with the depth and area and decreases if location gets farther from the center of the optical antenna. In addition to describing diversified deformation, the advantage of the wavelet method over Zernike polynomials in computational complexity is shown via numerical example. This work provides a feasible method for the description along with influence analysis of diversified wavefront deformation from a practical viewpoint and will be helpful for designing optical systems.
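The three wavelet parameters can be illustrated with a toy two-dimensional deformation; the Mexican-hat-like profile and all numeric values here are assumptions for illustration, not the paper's wavelet basis:

```python
import numpy as np

def wavefront_bump(x, y, coeff, dilation, shift_x, shift_y):
    """Toy 2-D wavelet-style deformation: 'coeff' sets the depth,
    'dilation' the area, and the shift factors the location, mirroring the
    three wavelet parameters named in the abstract.  A Gaussian-derived
    (Mexican-hat-like) profile is used purely for illustration."""
    r2 = ((x - shift_x) ** 2 + (y - shift_y) ** 2) / dilation ** 2
    return coeff * (1 - r2) * np.exp(-r2 / 2)

# Aperture grid; deeper/wider bumps perturb more of the pupil, and bumps
# farther from the aperture centre matter less for the received beam.
xx, yy = np.meshgrid(np.linspace(-1, 1, 65), np.linspace(-1, 1, 65))
w = wavefront_bump(xx, yy, coeff=0.2, dilation=0.3, shift_x=0.4, shift_y=0.0)
print(f"peak deformation: {w.max():.2f} waves")
```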
Bit error rate analysis of Wi-Fi and bluetooth under the interference of 2.45 GHz RFID
[Anonymous]
2007-01-01
IEEE 802.11b WLAN (Wi-Fi) and IEEE 802.15.1 WPAN (Bluetooth) are prevalent nowadays, and radio frequency identification (RFID) is an emerging technology with ever wider applications. 802.11b occupies the unlicensed industrial, scientific and medical (ISM) band (2.4-2.4835 GHz) and uses direct sequence spread spectrum (DSSS) to alleviate narrowband interference and fading. Bluetooth is also a user of the ISM band and adopts frequency hopping spread spectrum (FHSS) to avoid mutual interference. RFID can operate on multiple frequency bands, such as 135 kHz, 13.56 MHz and 2.45 GHz. When a 2.45 GHz RFID device, which uses FHSS, is collocated with 802.11b or Bluetooth, mutual interference is inevitable. Although DSSS and FHSS are applied to mitigate the interference, the performance degradation may be significant. Therefore, in this article, the impact of 2.45 GHz RFID on 802.11b and Bluetooth is investigated. The bit error rates (BERs) of 802.11b and Bluetooth are analyzed by establishing a mathematical model, and the simulation results are compared with the theoretical analysis to validate this model.
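A crude coexistence model simply folds the in-band RFID interference into the noise floor of a BPSK link; this ignores the hopping statistics and spreading gain that the paper models in detail, so it only illustrates the qualitative trend:

```python
import math

def q_func(x):
    # Gaussian Q-function via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk_with_interference(ebn0_db, int_to_noise_db):
    """Illustrative coexistence sketch: the in-band interference power is
    treated as extra Gaussian noise, so the effective Eb/(N0 + I0) shrinks.
    Real 802.11b/Bluetooth analyses model frequency-hop overlap and
    spreading gain in more detail."""
    ebn0 = 10 ** (ebn0_db / 10)
    inr = 10 ** (int_to_noise_db / 10)     # interference-to-noise ratio
    eff = ebn0 / (1 + inr)                 # interference raises the noise floor
    return q_func(math.sqrt(2 * eff))

print(f"{ber_bpsk_with_interference(10, -100):.1e}")  # essentially no interference
print(f"{ber_bpsk_with_interference(10, 3):.1e}")     # 3 dB INR degrades the BER
```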
Sulyman Ahmed Iyanda
2005-01-01
Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of the compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining (GSC) scheme that combines the best M branches out of the L available diversity resources (M ≤ L). In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
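The threshold-based combining rule can be checked with a Monte-Carlo sketch over Rayleigh fading (the Nakagami-m special case m = 1; the paper's closed forms cover general m). The fallback to the best branch when nothing clears the threshold is our assumption for the sketch:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

def ber_threshold_gsc(snr_db, n_branches, threshold, n_bits=200_000):
    """Monte-Carlo sketch of threshold-based generalized selection
    combining over Rayleigh fading.  Branches whose instantaneous SNR
    exceeds 'threshold' (linear) are MRC-combined; if none qualify, the
    strongest branch alone is used (an assumption for this sketch)."""
    snr = 10 ** (snr_db / 10)
    # branch power gains: exponential for Rayleigh fading
    g = rng.exponential(scale=snr, size=(n_bits, n_branches))
    mask = g >= threshold
    none = ~mask.any(axis=1)
    best = g.argmax(axis=1)
    mask[none, best[none]] = True          # fallback to the best branch
    combined = np.where(mask, g, 0).sum(axis=1)  # MRC of selected branches
    # BPSK conditional BER Q(sqrt(2*gamma)) = 0.5*erfc(sqrt(gamma)),
    # averaged over the fading realizations
    return np.mean(0.5 * erfc(np.sqrt(combined)))

print(f"{ber_threshold_gsc(10, 4, threshold=5):.1e}")
```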
Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C
2016-06-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle interplay between the exchange coupling of the soft and hard layers and the anisotropy, and reduces the switching field distribution by about 30% compared to single-phase media. The second effect is that, due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuations is significantly smaller for exchange spring media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and on the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a 3% distribution of K_hard values in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.
Matjaz Merc; Igor Drstvensek; Matjaz Vogrin; Tomaz Brajlih; Tomaz Friedrich; Gregor Recnik
2014-01-01
Objective: Free-hand pedicle screw placement has a high incidence of pedicle perforation, which can be reduced with fluoroscopy, navigation or an alternative rapid prototyping drill guide template. In our study the error rate of multi-level templates for pedicle screw placement in the lumbar and sacral regions was evaluated. Methods: A case series study was performed on 11 patients. Seventy-two screws were implanted using multi-level drill guide templates manufactured with selective laser sintering. According to the optimal screw direction defined preoperatively, an analysis of screw misplacement was performed. Displacement, deviation and screw length difference were measured. The learning curve was also estimated. Results: Twelve screws (17%) were placed more than 3.125 mm out of their optimal position in the centre of the pedicle. The tips of 16 screws (22%) were misplaced more than 6.25 mm from the predicted optimal position. According to our predefined goal, 19 screws (26%) were implanted inaccurately. In 10 cases the screw length was selected incorrectly: 1 screw (1%) was too long and 9 (13%) were too short. No clinical signs of neurovascular lesion were observed. The learning curve was not significant (P=0.129). Conclusion: In our study, the procedure of manufacturing and applying multi-level drill guide templates carried a 26% rate of screw misplacement. However, that rate does not coincide with the incidence of pedicle perforation and neurovascular injury. These facts, together with a comparison to comparable studies, suggest that multi-level templates are satisfactorily accurate and allow precise screw placement with a clinically irrelevant error factor. Therefore, templates could potentially represent a useful tool for routine pedicle screw placement.
Determining cardiac vagal threshold from short term heart rate complexity
Hamdan Rami Abou
2016-09-01
Full Text Available Evaluating individual aerobic exercise capacity is fundamental in sports and exercise medicine but is associated with organizational and instrumental effort. Here, we extract an index related to common performance markers, the aerobic and anaerobic thresholds, enabling the estimation of exercise capacity from a conventional sports watch supporting beat-wise heart rate tracking. The cardiac vagal threshold (CVT) was determined in 19 male subjects performing an incremental maximum exercise test. CVT varied around the anaerobic threshold (AnT) with a mean deviation of 7.9 ± 17.7 W. A high correspondence of the two thresholds was indicated by Bland-Altman plots with limits of agreement of −27.5 W and 43.4 W. Additionally, CVT was strongly correlated with AnT (rp = 0.86, p < 0.001) and reproduced this marker well (rc = 0.81). We conclude that the cardiac vagal threshold derived from the compression entropy time course can be useful to assess physical fitness in an uncomplicated way.
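The Bland-Altman analysis used above is easy to reproduce; the paired threshold values below are hypothetical, not the study's measurements:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two paired markers (e.g. CVT vs AnT):
    returns the mean difference (bias) and the 95% limits of agreement,
    bias +/- 1.96 * SD of the differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired threshold readings in watts (not the study's data).
cvt = [150, 180, 200, 165, 210, 175]
ant = [145, 190, 195, 160, 220, 170]
bias, lo, hi = bland_altman(cvt, ant)
print(f"bias {bias:.1f} W, LoA [{lo:.1f}, {hi:.1f}] W")
```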
Dating of sediments and determination of sedimentation rate. Proceedings
Illus, E. [ed.]
1998-08-01
The Second NKS (Nordic Nuclear Safety Research)/EKO-1 Seminar was held at the Finnish Centre for Radiation and Nuclear Safety (STUK) on April 2-3, 1997. The work of the NKS is based on 4-year programmes; the current programme having been planned for the years 1994-1997. The programme comprises 3 major fields, one of them being environmental effects (EKO). Under this umbrella there are 4 main projects. The EKO-1 project deals with marine radioecology, in particular bottom sediments and sediment processes. The programme of the second seminar consisted of 8 invited lecturers and 6 other scientific presentations. Dating of sediments and determination of sedimentation rate are important in all types of sedimentological study and model calculations of fluxes of substances in the aquatic environment. In many cases these tasks have been closely related to radioecological studies undertaken in marine and fresh water environments, because they are often based on measured depth profiles of certain natural or artificial radionuclides present in the sediments. During recent decades Pb-210 has proved to be very useful in dating of sediments, but some other radionuclides have also been successfully used, e.g. Pu-239,240, Am-241 and Cs-137. The difficulties existing and problems involved in dating of sediments, as well as solutions for resolving these problems are discussed in the presentations
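Pb-210 dating in its simplest constant-initial-concentration (CIC) form reduces to radioactive-decay arithmetic; the sketch below uses that minimal model (the seminar's methods, such as CRS modelling of full activity profiles, are more sophisticated), and the activity values are illustrative:

```python
import math

PB210_HALF_LIFE_Y = 22.3                   # Pb-210 half-life, years
LAMBDA = math.log(2) / PB210_HALF_LIFE_Y   # decay constant, 1/yr

def cic_age(activity_surface, activity_depth):
    """Constant-initial-concentration Pb-210 age: with unsupported activity
    decaying as A(z) = A0 * exp(-lambda * t), the sediment age at depth z is
    t = ln(A0 / A(z)) / lambda.  The simplest dating model; CRS and other
    models weigh the whole activity profile instead."""
    return math.log(activity_surface / activity_depth) / LAMBDA

# one half-life of decay corresponds to ~22.3 years of burial
print(f"{cic_age(100.0, 50.0):.1f} yr")
```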
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
Kory, Carol L.
2001-01-01
prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.
Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I
Although many speech errors can be generated at either a linguistic or a motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which their speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal-language (NL) speakers and computer simulations. Limited data exist for older NL users, and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical rate and at two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with a cost per error avoided of £79 (US$131). We aimed to estimate the cost effectiveness of the PINCER intervention by combining its effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from the literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches a 59% probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of the data available to inform the effect of avoiding errors.
Measuring Interest Rates as Determined by Thrift and Productivity
Woon Gyu Choi
2007-01-01
This paper investigates the behavior of real and nominal interest rates by combining consumption- and production-based models into a single general equilibrium framework. Based on the theoretical nonlinear relationships that link interest rates to both the marginal rates of substitution and transformation in a monetary production economy, our study develops an estimation and simulation procedure to predict historical series of interest rates. We find that the model predictions of interest rates…
Error analysis of real time and post-processed orbit determination of GFO using GPS tracking
Schreiner, William S.
1991-01-01
The goal of the Navy's GEOSAT Follow-On (GFO) mission is to map the topography of the world's oceans in both real time (operational) and post processed modes. Currently, the best candidate for supplying the required orbit accuracy is the Global Positioning System (GPS). The purpose of this fellowship was to determine the expected orbit accuracy for GFO in both the real time and post-processed modes when using GPS tracking. This report presents the work completed through the ending date of the fellowship.
The determination of standard metabolic rate in fishes.
Chabot, D; Steffensen, J F; Farrell, A P
2016-01-01
This review and data analysis outline how fish biologists should most reliably estimate the minimal amount of oxygen needed by a fish to support its aerobic metabolic rate (termed standard metabolic rate; SMR). By reviewing key literature, it explains the theory, terminology and challenges underlying SMR measurements in fishes, which are almost always made using respirometry (which measures oxygen uptake, ṀO2 ). Then, the practical difficulties of measuring SMR when activity of the fish is not quantitatively evaluated are comprehensively explored using 85 examples of ṀO2 data from different fishes and one crustacean, an analysis that goes well beyond any previous attempt. The main objective was to compare eight methods to estimate SMR. The methods were: average of the lowest 10 values (low10) and average of the 10% lowest ṀO2 values, after removing the five lowest ones as outliers (low10%), mean of the lowest normal distribution (MLND) and quantiles that assign from 10 to 30% of the data below SMR (q0·1 , q0·15 , q0·2 , q0·25 and q0·3 ). The eight methods yielded significantly different SMR estimates, as expected. While the differences were small when the variability was low amongst the ṀO2 values, they were important (>20%) for several cases. The degree of agreement between the methods was related to the c.v. of the observations that were classified into the lowest normal distribution, the c.v. MLND (C.V.MLND ). When this indicator was low (≤5·4), it was advantageous to use the MLND, otherwise, one of the q0·2 or q0·25 should be used. The second objective was to assess if the data recorded during the initial recovery period in the respirometer should be included or excluded, and the recommendation is to exclude them. The final objective was to determine the minimal duration of experiments aiming to estimate SMR. The results show that 12 h is insufficient but 24 h is adequate. A list of basic recommendations for practitioners who use respirometry
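Two of the SMR estimators compared above are simple to state; the sketch below implements the low10 and quantile (e.g. q0.2) estimators in their plain forms, as an illustration only (the published methods' outlier handling and the MLND fit are not reproduced):

```python
import numpy as np

def smr_low10(mo2):
    """Mean of the ten lowest oxygen-uptake (MO2) measurements."""
    return float(np.mean(np.sort(np.asarray(mo2, dtype=float))[:10]))

def smr_quantile(mo2, p=0.2):
    """p-quantile estimator of SMR; q0.2 assigns 20% of the data below SMR."""
    return float(np.quantile(np.asarray(mo2, dtype=float), p))
```

As the abstract notes, the estimators diverge most when the spread of the low MO2 values is large, which is why the choice between MLND and a quantile is conditioned on the c.v. of the lowest normal distribution.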
Macarena Cubillos Mesías
To quantify interfraction patient setup errors for radiotherapy based on cone-beam computed tomography and suggest safety margins accordingly. Positioning vectors of pre-treatment cone-beam computed tomography for different treatment sites were collected (n = 9504). For each patient group the total average and standard deviation were calculated, and the overall mean, systematic and random errors as well as safety margins were determined. The systematic (and random) errors in the superior-inferior, left-right and anterior-posterior directions were: for prostate, 2.5 (3.0), 2.6 (3.9) and 2.9 (3.9) mm; for prostate bed, 1.7 (2.0), 2.2 (3.6) and 2.6 (3.1) mm; for cervix, 2.8 (3.4), 2.3 (4.6) and 3.2 (3.9) mm; for rectum, 1.6 (3.1), 2.1 (2.9) and 2.5 (3.8) mm; for anal, 1.7 (3.7), 2.1 (5.1) and 2.5 (4.8) mm; for head and neck, 1.9 (2.3), 1.4 (2.0) and 1.7 (2.2) mm; for brain, 1.0 (1.5), 1.1 (1.4) and 1.0 (1.1) mm; and for mediastinum, 3.3 (4.6), 2.6 (3.7) and 3.5 (4.0) mm. The CTV-to-PTV margins had the smallest values for brain (3.6, 3.7 and 3.3 mm) and the largest for mediastinum (11.5, 9.1 and 11.6 mm). For pelvic treatments the means (and standard deviations) were 7.3 (1.6), 8.5 (0.8) and 9.6 (0.8) mm. Systematic and random setup errors were smaller than 5 mm. The largest errors were found for organs with higher motion probability. The suggested safety margins were comparable to published values from previous, often smaller, studies.
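The abstract does not state which margin recipe produced the CTV-to-PTV values. A common population-based choice is the van Herk formula, M = 2.5Σ + 0.7σ, where Σ is the systematic and σ the random setup error; the sketch below assumes that recipe rather than reproducing the study's actual calculation:

```python
def ctv_to_ptv_margin(systematic_mm, random_mm):
    """van Herk-style CTV-to-PTV margin (mm): M = 2.5*Sigma + 0.7*sigma.
    Assumed recipe -- the study may have used a different one."""
    return 2.5 * systematic_mm + 0.7 * random_mm

# Brain, superior-inferior direction from the abstract: 1.0 (1.5) mm
# gives 2.5*1.0 + 0.7*1.5 = 3.55 mm, close to the reported 3.6 mm margin.
```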
Transfer of Technology in Determining Lowest Achievable Emission Rate (LAER)
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Determination of Lowest Achievable Emission Rate for Coors Container Corporation
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo
Aver, Erik; Skillman, Evan D
2010-01-01
Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H~II regions. The helium abundance is sensitive to several physical parameters associated with the H~II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He~I emission lines. We demonstrate that introducing the electron temperature derived from the [O~III] emission lines as a prior, in a very conservative manner, produces...
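MCMC replaces grid searches or flux-perturbation Monte Carlo with a random walk whose stationary distribution is the posterior. A generic random-walk Metropolis sampler (a textbook sketch, not the authors' implementation) looks like:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler for a 1-D posterior.
    log_post: unnormalized log-posterior; step: proposal std deviation."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)          # symmetric proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)
    return samples

# e.g. sampling a standard normal posterior: log_post = lambda x: -0.5 * x * x
```

In the paper's setting the state would be the vector of H II region parameters and the log-posterior would include the [O III] temperature prior; the chain's exploration is what exposes degeneracies such as the false optical-depth minimum.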
No margin for error: determinants of achievement among disadvantaged children of immigrants
Alejandro Portes
2014-11-01
After reviewing the existing literature on second-generation adaptation, the paper presents evidence of the process of segmented assimilation on the basis of the latest data from the Children of Immigrants Longitudinal Study (CILS). This evidence serves as a backdrop for the analysis of determinants of educational and occupational achievement among second-generation youths who grow up under conditions of severe disadvantage. Based on interviews with a sample of fifty CILS respondents and their families, the analysis identifies four key causal mechanisms that are common to these «success stories» and that offer the basis for theoretical refinements on how the process of second-generation adaptation actually unfolds, and for policies to address the needs and aspirations of the most disadvantaged members of this population.
Burgess, Ralph; Yang, Ziheng
2008-09-01
Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species.
Knox, C. E.
1978-01-01
Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
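The report combines a position-estimate error and a flight technical error into a total navigation error per axis. For independent, zero-mean error terms the standard combination is root-sum-square, assumed in this sketch (the report's exact combination rule is not stated in the abstract):

```python
import math

def total_navigation_error(position_estimate_error, flight_technical_error):
    """Root-sum-square combination of two independent per-axis error terms.
    Assumption: the terms are independent and zero-mean."""
    return math.hypot(position_estimate_error, flight_technical_error)
```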
Estimated Interest Rate Rules: Do they Determine Determinacy Properties?
Jensen, Henrik
2011-01-01
I demonstrate that econometric estimations of nominal interest rate rules may tell little, if anything, about an economy's determinacy properties. In particular, correct inference about the interest-rate response to inflation provides no information about determinacy. Instead, it could reveal...
The Determinants of Real Exchange Rate Volatility in Nigeria
Rahel
of the economy, government expenditures, interest rate movements as well as the lagged ... econometrics, together with the increasing availability of high quality data, ... market, Former U.S. Federal Reserve Board Chairman Alan Greenspan once ... Juthathip (2009) results for developing Asia showed that real exchange rate.
Rate-gyro-integral constraint for ambiguity resolution in GNSS attitude determination applications.
Zhu, Jiancheng; Li, Tao; Wang, Jinling; Hu, Xiaoping; Wu, Meiping
2013-06-21
In the field of Global Navigation Satellite System (GNSS) attitude determination, the constraints usually play a critical role in resolving the unknown ambiguities quickly and correctly. Many constraints such as the baseline length, the geometry of multi-baselines and the horizontal attitude angles have been used extensively to improve the performance of ambiguity resolution. In the GNSS/Inertial Navigation System (INS) integrated attitude determination systems using low grade Inertial Measurement Unit (IMU), the initial heading parameters of the vehicle are usually worked out by the GNSS subsystem instead of by the IMU sensors independently. However, when a rotation occurs, the angle at which vehicle has turned within a short time span can be measured accurately by the IMU. This measurement will be treated as a constraint, namely the rate-gyro-integral constraint, which can aid the GNSS ambiguity resolution. We will use this constraint to filter the candidates in the ambiguity search stage. The ambiguity search space shrinks significantly with this constraint imposed during the rotation, thus it is helpful to speeding up the initialization of attitude parameters under dynamic circumstances. This paper will only study the applications of this new constraint to land vehicles. The impacts of measurement errors on the effect of this new constraint will be assessed for different grades of IMU and current average precision level of GNSS receivers. Simulations and experiments in urban areas have demonstrated the validity and efficacy of the new constraint in aiding GNSS attitude determinations.
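The constraint can be pictured as a filter over the ambiguity search space: during a turn, only candidates whose implied heading change matches the gyro-integrated angle survive. The sketch below is hypothetical in its names and structure (the paper's search algorithm is more involved); the tolerance would come from the gyro and GNSS error budgets discussed above:

```python
def filter_candidates(headings_before, headings_after, gyro_delta_deg, tol_deg):
    """Keep ambiguity candidates whose heading change over the turn matches
    the rate-gyro-integrated angle. headings_*: dict candidate_id -> heading
    in degrees before/after the rotation. Illustrative sketch only."""
    kept = []
    for cid in headings_after:
        if cid not in headings_before:
            continue
        # wrap the heading difference into (-180, 180]
        delta = (headings_after[cid] - headings_before[cid] + 180.0) % 360.0 - 180.0
        if abs(delta - gyro_delta_deg) <= tol_deg:
            kept.append(cid)
    return kept
```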
Determination of sedimentation rates and absorption coefficient of ...
DR. MIKE HORSFALL
showed a decrease in the density of the carbonates and also correlated with the decrease in the mass (and ... Many chemical reactions form separable solid phase ... The precipitates were ... Zn2+ had the highest sedimentation rate constants,.
Determinants of Commercial Banks' Interest Rate Spread in Namibia ...
bank, commercial banks, development financial institutions and the non-banking sector which consists of .... its interest rate spread in order to shield itself against the risk. This suggests that the ..... PSG Wealth Management (Namibia). (2013).
Genetic determination of mortality rate in Danish dairy cows
Maia, Rafael Pimentel; Ask, Birgitte; Madsen, Per
2014-01-01
introduction of genetic material from other populations. The correlations between the sire components for death rate and slaughter rate were negative and small for the 3 populations, suggesting the existence of specific genetic mechanisms for each culling reason and common concurrent genetic mechanisms....... In the Holstein population the effects of the changes in the level of heterozygosity, breed composition and the increasing genetic trend act in the same direction increasing the death rate in the recent years. In the Jersey population, the effects of the level of heterozygosity and the breed proportion were small......, and only the increasing genetic trend can be pointed as a genetic cause to the observed increase in the mortality rate. In the Red Danish population neither the time-development pattern of the genetic trend nor the changes in the level of heterozygosity and breed composition could be causing the observed...
What determines the rate of growth and technological change?
ROMER, Paul M.
1989-01-01
There is substantial research about cross section and time series correlations between economic growth and various economic, social, demographic and political variables. After analyzing these correlations, the paper makes the following conclusions. Exogenous increases do not seem to cause increases in the rate of technological change, but instead seem to be associated with lower rates of return to capital. Increased openness to international trade speeds up growth and technological change as ...
Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo
2016-01-01
We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied to other quantum information processing tasks.
Ruutiainen, Alexander T; Durand, Daniel J; Scanlon, Mary H; Itri, Jason N
2013-03-01
To determine if the rate of major discrepancies between resident preliminary reports and faculty final reports increases during the final hours of consecutive 12-hour overnight call shifts. Institutional review board exemption status was obtained for this study. All overnight radiology reports interpreted by residents on-call between January 2010 and June 2010 were reviewed by board-certified faculty and categorized as major discrepancies if they contained a change in interpretation with the potential to impact patient management or outcome. Initial determination of a major discrepancy was at the discretion of individual faculty radiologists based on this general definition. Studies categorized as major discrepancies were secondarily reviewed by the residency program director (M.H.S.) to ensure consistent application of the major discrepancy designation. Multiple variables associated with each report were collected and analyzed, including the time of preliminary interpretation, time into shift study was interpreted, volume of studies interpreted during each shift, day of the week, patient location (inpatient or emergency department), block of shift (2-hour blocks for 12-hour shifts), imaging modality, patient age and gender, resident identification, and faculty identification. Univariate risk factor analysis was performed to determine the optimal data format of each variable (ie, continuous versus categorical). A multivariate logistic regression model was then constructed to account for confounding between variables and identify independent risk factors for major discrepancies. We analyzed 8062 preliminary resident reports with 79 major discrepancies (1.0%). There was a statistically significant increase in major discrepancy rate during the final 2 hours of consecutive 12-hour call shifts. Multivariate analysis confirmed that interpretation during the last 2 hours of 12-hour call shifts (odds ratio (OR) 1.94, 95% confidence interval (CI) 1.18-3.21), cross
A new determination of the solar rotation rate
Sheeley, N. R., Jr.; Wang, Y.-M.; Nash, A. G.
1992-01-01
We use 'stackplot' displays to compare observations of the photospheric magnetic field during sunspot cycle 21 with simulations based on the flux-transport model. Adopting nominal rates of diffusion, differential rotation, and meridional flow, we obtain slanted patterns similar to those of the observed field, even when the sources of flux are assigned random longitudes in the model. At low latitudes, the slopes of the nearly vertical patterns of simulated field are sensitive to the rotation rate used in the calculation, and insensitive to the rates of diffusion and flow during much of the sunspot cycle. Good agreement between the observed and simulated patterns requires a synodic equatorial rotation period of 26.75 +/- 0.05 days.
Satellite Photometric Error Determination
2015-10-18
of nearly specular reflections from most solar panels. Our primary purpose in presenting these two plots is to demonstrate the usefulness of...than a transformation for stars because the spectral energy distribution of satellites can change with phase angle and is subject to specular
Sharmila Vaz
The Social Skills Rating System (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the use of multiple informants (e.g. teacher and parent reports, not just student reports), as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective minimum clinically important difference (MCID).
Effectiveness of respiratory rates in determining clinical deterioration
Mølgaard, Rikke Rishøj; Larsen, Palle; Håkonsen, Sasa Jul
2016-01-01
Review question/objective: The objective of this systematic review is to identify, appraise and synthesize the best available evidence on the effectiveness of manually measuring respiratory rates for 60 s or less in detecting clinical deterioration of inpatients. More specifically, the review...
Bit error rate analysis of X-ray communication system
王律强; 苏桐; 赵宝升; 盛立志; 刘永安; 刘舵
2015-01-01
X-ray communication, first proposed by Keith Gendreau in 2007, has the potential to compete with conventional communication methods, such as microwave and laser communication, in space environments. As a result, a great deal of time and effort has been devoted in recent years to turning the initial idea into reality. Our X-ray communication demonstration system, based on a grid-controlled X-ray source and a microchannel plate detector, can deliver both audio and video information through a 6-meter vacuum tunnel. The question is how to evaluate this space X-ray demonstration system experimentally. Our method is to design a dedicated board to measure the relationship between bit error rate and emitting power at various communication distances, and to compare the data with calculated and simulated results so as to assess the underlying theoretical model. The concept of using X-rays as signal carriers is confirmed by our first-generation demonstration system, which uses a grid-controlled emission source as the transmitter and a photon-counting detector at the receiver, an arrangement we regard as an important direction for future deep-space X-ray communication. As the key specification of any communication system, the bit-error-rate level must be established first; a theoretical analysis using a Poisson noise model has also been carried out to support this novel communication concept. Previous experimental results indicated that the X-ray audio demonstration system achieves a 10^−4 bit-error-rate level at a 25 kbps communication rate. The system bit error rate under on-off keying (OOK) modulation was calculated and measured, and agrees well with the theoretical calculation. Another point to be taken into consideration is the emitting energy, which is the main restriction of the current X-ray communication system. The designed
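For photon-counting OOK, a textbook Poisson model (assumed here; the paper's exact noise model may differ) attributes errors to "on" bits that yield zero detected photons when dark counts are negligible:

```python
import math

def ook_poisson_ber(mean_photons_per_bit):
    """Idealized photon-counting OOK bit error rate: with equiprobable bits
    and negligible dark counts, an error occurs only when an 'on' bit
    yields zero detected photons. Textbook model, assumed not taken from
    the paper."""
    return 0.5 * math.exp(-mean_photons_per_bit)

# Reaching the 1e-4 BER level cited in the abstract requires
# 0.5*exp(-n) <= 1e-4, i.e. n >= ln(5000), roughly 8.5 photons per bit.
```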
Errors associated with outpatient computerized prescribing systems
Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G
2011-01-01
Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428
Munshi Mahbubur Rahman
2015-02-01
An analytical approach is presented to evaluate the bit error rate (BER) performance of a power line (PL) communication system considering the combined influence of impulsive noise and background PL Gaussian noise. The Middleton Class-A noise model is used to evaluate the effect of impulsive noise. The analysis derives expressions for the signal-to-noise ratio and BER for orthogonal frequency division multiplexing (OFDM) with binary phase shift keying modulation and coherent demodulation of the OFDM sub-channels. The results are evaluated numerically using the multipath transfer function model of the PL with a non-flat power spectral density of PL background noise over a bandwidth of 0.3–100 MHz. The results are plotted for several system and noise parameters, and the power penalty due to impulsive noise is determined at a BER of 10^−6. The computed results show that the system suffers a significant power penalty because of impulsive noise, which is higher at higher channel bandwidth and can be reduced to some extent by increasing the number of OFDM subcarriers. The analytical results conform well with previously reported simulation results.
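A common closed form for coherent BPSK in Middleton Class-A noise treats the channel as a Poisson mixture of Gaussian states. The sketch below assumes the standard parameterization (impulsive index A, Gaussian-to-impulsive power ratio Γ) rather than the paper's exact expressions, and omits the OFDM sub-channel averaging and the PL transfer function:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber_class_a(snr_linear, A=0.1, Gamma=0.1, terms=50):
    """BER of coherent BPSK in Middleton Class-A noise (assumed standard
    form): state m occurs with Poisson weight and carries noise power
    scaled by (m/A + Gamma)/(1 + Gamma)."""
    ber = 0.0
    for m in range(terms):
        p_m = math.exp(-A) * A ** m / math.factorial(m)
        scale = (m / A + Gamma) / (1.0 + Gamma)
        ber += p_m * q_func(math.sqrt(2.0 * snr_linear / scale))
    return ber
```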
A comparison of determinants of infant mortality rate (IMR) between countries with high and low IMR.
Megawangi, R; Barnett, J B
1993-06-01
Weighted least squares regressions and pooled regression models were used to examine the determinants of infant mortality in developing countries. Data were obtained from UNICEF's "State of the World's Children, 1987" for 87 countries, with data on gross national product, percentage of literate females, percentage of low birth weight infants, daily caloric supply per capita as a percentage of the daily requirement, percentage of population with access to drinking water, total fertility rate, and the population-to-nurses ratio. Data were unavailable on breastfeeding practices and government expenditures on health. Weighted procedures were used because of heteroscedasticity problems: the total fertility rate was associated with the variance in the error term. The results of pooled data showed that the female literacy rate had the strongest impact on infant mortality, followed by access to clean water and the number of people per nursing person. The impact of female literacy remained strong in high infant mortality countries when controls for gross national product were included. Puzzling findings were the negative sign of low birth weight and the insignificant effect of the total fertility rate. One suggestion is that low birth weight may already be expressed in the level of education and the availability of health programs, while fertility's lack of wide variation may explain its insignificant effect. Findings showed that infant mortality was 22.19% higher in countries with gross national product under $500. In low infant mortality countries, none of the environmental variables significantly explained infant mortality. Low birth weight increased its impact on infant mortality among these countries but was still not significant. The findings suggested that infant mortality was most affected by low birth weight and the population per nurse in more affluent countries. Environmental factors were more important in explaining high levels of infant mortality in less
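Weighted least squares down-weights observations whose error variance is large; since here the variance was tied to the total fertility rate, weights would be chosen inversely to that variance. A minimal normal-equations sketch (the study's actual weighting scheme is not specified in the abstract):

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: argmin_b sum_i w_i * (y_i - x_i'b)^2,
    solved via the weighted normal equations."""
    W = np.diag(np.asarray(w, dtype=float))
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

For example, with an exact linear relation y = 2 + 3x the weights do not matter and `wls` recovers the coefficients (2, 3).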
Donnellan, Andrea; Hager, Bradford H.; Larsen, Shawn
1988-01-01
Comparison of angles from historical triangulation observations dating as far back as 1932 with Global Positioning System (GPS) measurements taken in 1987 indicates that rapid convergence may be taking place on decade timescales in the central and eastern part of the Ventura basin, an east-west trending trough bounded by thrust faults. Changes in angles over this time were analyzed using Prescott's modified Frank's method and in terms of a model which assumes that the regions to the north and south of the basin are rigid blocks undergoing relative motion. For the two block model, inversion of the observed angle changes over the last 28 years for the relative motion vector leads to north-south convergence across the basin of 30 + or - 5 mm/yr, with a left lateral component of 10 + or - 1 mm/yr in the Fillmore-Santa Paula area in the central part of the basin. The modified Frank's method yields strain rates of approximately 2 microrad/yr in both the east and central parts of the basin for measurements spanning the 1971 San Fernando earthquake. Assuming no east-west strain yields north-south compression of approximately 3.5 + or - .2 cm/yr. Comparison of triangulation data prior to the earthquake shows no strain outside the margin of error. The convergence rates determined by geodetic techniques are consistent with geologic observations in the area. Such large geodetic deformation rates, with no apparent near-surface creep on the major thrust, can be understood if these faults become subhorizontal at relatively shallow depths and if the subhorizontal portions of the faults are creeping. An alternative explanation of the large displacement rates might be that the pumping of oil in the vicinity of the benchmarks caused large horizontal motions, although it is unlikely that meter scale horizontal motions are due to oil withdrawal. These and other hypotheses are evaluated to better constrain the tectonics of this active region.
METHOD AND APPARATUS FOR DETERMINING AMALGAM DECOMPOSITION RATE
Johnson, R.W.; Wright, C.C.
1962-04-24
A method and apparatus for measuring the rate at which an amalgam decomposes in contact with aqueous solutions are described. The amalgam and an aqueous hydroxide solution are disposed in an electrolytic cell. The amalgam is used as the cathode of the cell, and an electrode and anode are disposed in the aqueous solution. A variable source of plating potential is connected across the cell. The difference in voltage between the amalgam cathode and a calibrated source of reference potential is used to control the variable source to null the difference in voltage and at the same time to maintain the concentration of the amalgam at some predetermined constant value. The value of the current required to maintain this concentration constant is indicative of the decomposition rate of the amalgam. (AEC)
Validity of portfolio assessment: which qualities determine ratings?
Driessen, Erik W; Overeem, Karlijn; van Tartwijk, Jan; van der Vleuten, Cees P M; Muijtjens, Arno M M
2006-09-01
The portfolio is becoming increasingly accepted as a valuable tool for learning and assessment. The validity of portfolio assessment, however, may suffer from bias due to irrelevant qualities, such as lay-out and writing style. We examined the possible effects of such qualities in a portfolio programme aimed at stimulating Year 1 medical students to reflect on their professional and personal development. In later curricular years, this portfolio is also used to judge clinical competence. We developed an instrument, the Portfolio Analysis Scoring Inventory, to examine the impact of form and content aspects on portfolio assessment. The Inventory consists of 15 items derived from interviews with experienced mentors, the literature, and the criteria for reflective competence used in the regular portfolio assessment procedure. Forty portfolios, selected from 231 portfolios for which ratings from the regular assessment procedure were available, were rated by 2 researchers, independently, using the Inventory. Regression analysis was used to estimate the correlation between the ratings from the regular assessment and those resulting from the Inventory items. Inter-rater agreement ranged from 0.46 to 0.87. The strongest predictor of the variance in the regular ratings was 'quality of reflection' (R 0.80; R2 66%). No further items accounted for a significant proportion of variance. Irrelevant items, such as writing style and lay-out, had negligible effects. The absence of an impact of irrelevant criteria appears to support the validity of the portfolio assessment procedure. Further studies should examine the portfolio's validity for the assessment of clinical competence.
Determination of reactivity rates of silicate particle-size fractions
Angélica Cristina Fernandes Deus; Leonardo Theodoro Büll; Juliano Corulli Corrêa; Roberto Lyra Villas Boas
2014-01-01
The efficiency of sources used for soil acidity correction depends on the reactivity rate (RR) and neutralization power (NP), indicated by effective calcium carbonate (ECC). Few studies establish the relative efficiency of reactivity (RER) for silicate particle-size fractions; therefore, the RER values established for lime are used. This study aimed to evaluate the reactivity of silicate materials as affected by particle size throughout incubation periods in comparison to lime, and to calculate the RER for silicat...
Auctioning Process Innovations when Losers' Bids Determine Royalty Rates
Fan, Cuihong; Jun, Byoung Heon; Elmar G. Wolfstetter
2009-01-01
We consider a licensing mechanism for process innovations that combines a license auction with royalty contracts for those who lose the auction. Firms' bids are dual signals of their cost reductions: the winning bid signals the winner's own cost reduction to rival oligopolists, whereas the losing bid influences the beliefs of the innovator, who uses that information to set the royalty rate. We derive conditions for the existence of a separating equilibrium and explain why a sufficiently high reserve price is e...
Riané de Bruyn
2013-03-01
Evidence in favor of the monetary model of exchange rate determination for the South African Rand is, at best, mixed. A co-integrating relationship between the nominal exchange rate and monetary fundamentals forms the basis of the monetary model. With the econometric literature suggesting that the span of the data, not the frequency, determines the power of co-integration tests, and with studies on South Africa primarily using short-span data from the post-Bretton Woods era, we decided to test the long-run monetary model of exchange rate determination for the South African Rand relative to the US Dollar using annual data from 1910 to 2010. The results provide some support for the monetary model in that long-run co-integration is found between the nominal exchange rate and the output and money supply deviations. However, the theoretical restrictions required by the monetary model are rejected. A vector error-correction model identifies both the nominal exchange rate and the monetary fundamentals as the channel for the adjustment process of deviations from the long-run equilibrium exchange rate. A subsequent comparison of nominal exchange rate forecasts based on the monetary model with those of the random walk model suggests that the forecasting performance of the monetary model is superior.
Music structure determines heart rate variability of singers
Vickhoff, Björn; Malmgren, Helge; Åström, Rickard; Nyberg, Gunnar; Ekström, Seth-Reino; Engwall, Mathias; Snygg, Johan; Nilsson, Michael; Jörnsten, Rebecka
2013-01-01
Choir singing is known to promote wellbeing. One reason for this may be that singing demands slower than normal respiration, which may in turn affect heart activity. Coupling of heart rate variability (HRV) to respiration is called respiratory sinus arrhythmia (RSA). This coupling has a subjective as well as a biologically soothing effect, and it is beneficial for cardiovascular function. RSA is more marked during slow-paced breathing and at lower respiration rates (0.1 Hz and below). In this study, we investigate how singing, which is a form of guided breathing, affects HRV and RSA. The study comprises a group of healthy 18-year-olds of mixed gender. The subjects are asked to: (1) hum a single tone and breathe whenever they need to; (2) sing a hymn with free, unguided breathing; and (3) sing a slow mantra and breathe solely between phrases. Heart rate (HR) is measured continuously during the study. The study design makes it possible to compare the above three levels of song structure. In a separate case study, we examine five individuals performing singing tasks (1-3). We collect data with more advanced equipment, simultaneously recording HR, respiration, skin conductance and finger temperature. We show how song structure, respiration and HR are connected. Unison singing of regular song structures makes the hearts of the singers accelerate and decelerate simultaneously. Implications concerning the effect on wellbeing and health are discussed, as well as the question of how this inner entrainment may affect perception and behavior. PMID:23847555
Determining the Success Rate of a Modified Underlay Myringoplasty Technique
AH Faramarzi
2012-12-01
Background & aim: Chronic otitis media surgery is the most common procedure in the field of otology in developing countries. Subtotal and total tympanic membrane perforation with an inadequate anterior remnant is associated with a higher rate of graft failure. This study aimed to evaluate the anatomical and functional outcomes of a modified underlay myringoplasty technique. Methods: In the present prospective clinical study, 45 patients with subtotal or total tympanic membrane perforation and an inadequate anterior remnant underwent tympanoplasty (± mastoidectomy). The anterior tip of the temporalis fascia was secured in a mucosal pocket on the lateral wall of the Eustachian tube orifice. Data on graft take rate, preoperative and postoperative hearing status and intraoperative findings were analyzed. The anatomical and functional findings of this procedure were analyzed by paired t-test. Results: A graft success rate of 91.1%, without lateralization, blunting, atelectasis or epithelial pearls, was achieved in this study. About 24% of patients had an air-bone gap within 25 dB before intervention, which increased to 71% postoperatively (P < 0.001). Conclusion: It seems that the current technique could be a convenient and suitable method for cases with subtotal or total tympanic membrane perforation and an inadequate anterior remnant. Key words: Tympanic membrane, Perforation, Tympanoplasty, Eustachian tube
Determination of dose rates from natural radionuclides in dental materials
Veronese, I. [Dipartimento di Fisica, Universita degli Studi di Milano, Milan (Italy) and INFN, Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Milan (Italy)]. E-mail: ivan.veronese@unimi.it; Guzzi, G. [AIRMEB - Italian Association for Metal and Biocompatibility Research, Milan (Italy); Giussani, A. [Dipartimento di Fisica, Universita degli Studi di Milano, Milan (Italy); INFN, Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Milan (Italy); Cantone, M.C. [Dipartimento di Fisica, Universita degli Studi di Milano, Milan (Italy); INFN, Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Milan (Italy); Ripamonti, D. [Dipartimento di Fisica, Universita degli Studi di Milano, Milan (Italy)
2006-07-01
Different types of materials used for dental prosthetic restoration, including feldspathic ceramics, glass ceramics, zirconia-based ceramics, alumina-based ceramics, and resin-based materials, were investigated with regard to their content of natural radionuclides by means of thermoluminescence beta dosimetry and gamma spectrometry. The gross beta dose rate from feldspathic and glass ceramics was about ten times higher than the background measurement, whereas resin-based materials generated a negligible beta dose rate, similarly to natural tooth samples. The specific activity of uranium and thorium was significantly below the levels found in the period when the addition of uranium to dental porcelain materials was still permitted. The high beta dose levels observed in feldspathic porcelains and glass ceramics are thus mainly ascribable to ⁴⁰K, naturally present in these specimens. Although the measured values are below the recommended limits, the results indicate that patients with prostheses are subject to higher dose levels than other members of the population. Alumina- and zirconia-based ceramics might be a promising alternative, as they generally have lower beta dose rates than the conventional porcelain materials. However, the dosimetry results, which imply the presence of inhomogeneously distributed clusters of radionuclides in the sample matrix, and the still unsuitable structural properties call for further optimization of these materials.
Seluianov, V N; Kalinin, E M; Pak, G D; Maevskaia, V I; Konrad, A H
2011-01-01
The aim of this work was to develop methods for determining the anaerobic threshold from the ventilation rate and cardio-interval variability during tests with stepwise load increases on a cycle ergometer and treadmill. In the first phase, a method for determining the anaerobic threshold from lung ventilation was developed. 49 highly skilled skiers took part in the experiment. They performed a treadmill ski-walking test with sticks, with the slope increasing gradually from 0 to 25 degrees by one degree every minute. In the second phase, a method was developed for determining the anaerobic threshold from the dynamics of cardio-interval variability during the test. The study included 86 athletes of different sports specialties who pedalled on a "Monarch" cycle ergometer. Initial output was 25 W, and power was increased by 25 W every 2 min at a steady pace of 75 rev/min. Pulmonary ventilation and oxygen and carbon dioxide content were measured using a COSMED K4 gas analyzer. Arterial blood was sampled from the ear lobe or finger, and blood lactate concentration was determined using an "Akusport" instrument. RR-interval registration was performed using a Polar s810i heart rate monitor. As a result, it was shown that the graphical method for determining the onset of the ventilatory anaerobic threshold (VAnP) coincides with a blood lactate accumulation of 3.8 +/- 0.1 mmol/l when testing on the treadmill and 4.1 +/- 0.6 mmol/l on the cycle ergometer. A connection was found between oxygen consumption at VAnP and the dispersion of cardio intervals (SD1), yielding the regression equation: VO2AnT = 0.35 + 0.01·SD1(W) + 0.0016·SD1(HR) + 0.106·SD1(ms), l/min (R = 0.98, error of the evaluation function 0.26 l/min, p < 0.001), where W is power (W), HR is heart rate (beats/min), and SD1 is the cardio-interval dispersion (ms) at the moment the threshold is registered.
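The reported regression can be read back as a simple function. A minimal sketch, using the coefficients as published in the abstract; how the SD1 terms are scaled against power and heart rate is an assumption here, since the abstract does not spell it out:

```python
# Sketch of the published regression for oxygen consumption at the ventilatory
# anaerobic threshold. Coefficients are as reported in the abstract; the exact
# scaling of the SD1 terms is an assumption for illustration.
def vo2_anaerobic_threshold(sd1_w, sd1_hr, sd1_ms):
    """Return the estimated VO2 at the anaerobic threshold, in l/min."""
    return 0.35 + 0.01 * sd1_w + 0.0016 * sd1_hr + 0.106 * sd1_ms
```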
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
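The decision-error logic described above can be illustrated with a small Monte Carlo sketch. The slope, intercept and scatter of the log-linear total P - chlorophyll a relationship below are invented for illustration; only the 20 µg/L chlorophyll a threshold comes from the text:

```python
import random

# Hypothetical illustration of estimating rates analogous to Type I (falsely
# declaring impairment) and Type II (falsely declaring attainment) errors for a
# candidate total P criterion. The coefficients a, b and scatter sigma of the
# assumed log-linear TP-chlorophyll link are invented for this sketch.
def decision_error_rates(tp_criterion_ugl, a=-0.4, b=0.9, sigma=0.2,
                         n=20000, seed=1):
    rng = random.Random(seed)
    false_impair = false_attain = impaired = attaining = 0
    for _ in range(n):
        log_tp = rng.uniform(0.5, 2.5)                    # lake TP (log10 ug/L)
        log_chl = a + b * log_tp + rng.gauss(0.0, sigma)  # assumed causal link
        truly_impaired = 10.0 ** log_chl > 20.0           # 20 ug/L chl-a threshold
        flagged = 10.0 ** log_tp > tp_criterion_ugl       # decision via TP criterion
        if truly_impaired:
            impaired += 1
            false_attain += int(not flagged)   # analogous to a Type II error
        else:
            attaining += 1
            false_impair += int(flagged)       # analogous to a Type I error
    return false_impair / attaining, false_attain / impaired
```

A stricter (lower) TP criterion trades Type II errors for Type I errors; scanning candidate criteria over this trade-off is the essence of the balancing step the abstract describes.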
Guangfeng Zhang; Qiong Zhang; Muhammad Tariq Majeed
2013-01-01
Using two measures of private information and high-frequency transaction data from the leading interdealer electronic broking system Reuters D2000-2, we examine the association between exchange rate returns and contemporaneous order flow, and the predictive power of lagged order flow for future exchange rate returns. Our empirical analysis demonstrates that at high frequencies (5, 10, 15, 20, 25, and 30 min) there exists a strong positive association between exchange rate returns and contempo...
Determination of the optimal exchange rate via control of the domestic interest rate in Nigeria
Virtue Ekhosuehi; Sunday Ogbonmwan
2014-01-01
We consider an economic scenario where the government seeks to achieve a favourable balance-of-payments over a fixed planning horizon through exchange rate policy and control of the domestic interest rate. We view the dynamics of such an economy as a bounded optimal control problem where the exchange rate is the state variable and the domestic interest rate is the control variable. The idea of balance-of-payments is used as a theoretical underpinning to specify the objective function. By assu...
[Error in the Microbiology laboratory]
Paolo Lanzafame
2006-03-01
Error management plays one of the most important roles in facility process improvement efforts: by detecting and reducing errors, quality and patient care improve. Error records were analysed over a period of 6 months, and another record was used to study potential bias in the registrations. The percentage of errors detected was 0.17% (normalised, 1720 ppm), and errors in the pre-analytical phase accounted for the largest part. Most errors were generated by the peripheral centres, which submit microbiology tests only occasionally and are not well acquainted with the specific procedures for collecting and storing biological samples. Errors in the management of laboratory supplies were reported too. The conclusion is that improving operator training, particularly concerning sample collection and storage, is very important, and that an effective system of error detection should be employed to determine the causes so that the best corrective action can be applied.
Singh, T.S.C.; Mazumdar, P.S.; Gartia, R.K. (Manipur Univ., Canchipur (India). Dept. of Physics)
1990-05-14
An attempt has been made to determine precisely the errors involved in the determination of the activation energy (E) of a thermoluminescence (TL) peak using the different variants of the various-heating-rates method. It has been found that, for all practical purposes, two of the methods can be considered independent of the order of kinetics of the TL process. Finally, the applicability of our findings has been tested experimentally on a non-first-order TL peak. The results suggest that the theoretical errors are in all cases smaller than the experimental ones, and hence these methods can safely be used for all types of TL peaks irrespective of their order of kinetics. (author).
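For first-order kinetics, the various-heating-rates family of methods rests on the peak condition βE/(kT_m²) = s·exp(−E/kT_m), so a plot of ln(T_m²/β) against 1/(kT_m) is a straight line of slope E. A minimal sketch with synthetic data (not values from the paper):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Various-heating-rates analysis for a first-order TL peak: the least-squares
# slope of ln(Tm^2/beta) versus 1/(k*Tm) gives the activation energy E in eV.
def activation_energy(betas_k_per_s, peak_temps_k):
    xs = [1.0 / (K_B * t) for t in peak_temps_k]
    ys = [math.log(t * t / b) for b, t in zip(betas_k_per_s, peak_temps_k)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx
```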
The Determination of the Star Formation Rate in Galaxies
Barbaro, G
1997-01-01
A spectrophotometric model able to compute the integrated spectrum of a galaxy, including the contributions both of the stellar populations and of the ionized interstellar gas of the HII regions powered by young hot stars, has been used to study several spectral features and photometric quantities in order to derive calibrations of the star formation history of late-type galaxies. Attention has been paid to analyzing the emission of the Balmer lines and the [OII]$\lambda$3727 line to test their suitability for providing estimates of the present star formation rate in galaxies. Other features, like D$_{4000}$ and the equivalent width of the H$_{\delta}$ line, influenced by the presence of intermediate-age stars, have been considered. Several ways of estimating the star formation rates in normal galaxies are discussed and some considerations concerning the applicability of the models are presented. Criteria have also been studied for ascertaining the presence of a burst, current or recently ended. Bursts usually h...
Torsi, G; Reschiglian, P; Locatelli, C; Melucci, D
1997-01-01
We describe a new method, and the relevant instrumentation necessary for its implementation, for the analysis of metals associated with particulate matter in air. The procedure can be divided into two steps: in the first, the sample is accumulated by electrostatic precipitation in a device whose centre is a graphite tube; in the second, the graphite tube itself is used as the atomization device for determining the metals present in the sample by the electrothermal atomic absorption technique. The method is simple, fast, accurate, and inexpensive. Moreover, if the experimental conditions are well chosen, there is no need for calibration, which is very convenient for samples such as particulate matter in air. The elements that can be determined with the present apparatus are Hg, Cd, Tl, Ag, Mg, and Mn. These elements are highly or moderately volatile, because the materials used cannot sustain very high temperatures for long periods. The experiments are confined to air, but other gases in which a corona discharge is possible would give the same results. With the proposed method, it was possible to show that the official method for Pb determination in the urban environment of Bologna presents a negative systematic error of about 25%.
Fertilization rate and its determinants in intracytoplasmic sperm injection
Jawed, Shireen; Rehman, Rehana; Ali, Mohammad Ashfaq; Abdullah, Umme Hani; Gul, Hina
2016-01-01
Objective: To identify predictors of the fertilization rate in patients with unexplained infertility after intracytoplasmic sperm injection (ICSI). Methods: A retrospective analysis of 282 females enrolled in a quasi-experimental design for ICSI at the "Islamabad Clinic Serving Infertile Couples" was carried out from July 2013 to June 2014. Females with unexplained infertility were included, whereas well-defined male and female causes of infertility were excluded. The fertilization rate (FR) was calculated as the percentage transformation of micro-injected oocytes into two pronuclei. A categorical FR variable defined at a 50% cut-off grouped the females: Group I with FR ≤50% and Group II with FR >50%. The groups were compared in terms of demographic variables, baseline hormones and oocyte parameters. Univariate logistic regression was executed to obtain odds ratios with 95% confidence intervals to quantify the association of predictors such as age, duration of infertility, oocyte parameters, hormones (estradiol, progesterone, follicle stimulating hormone (FSH), luteinizing hormone, prolactin) and the cytokine interleukin-1β (IL-1β) with the FR. Results: In our study, out of 282 females, 19 (6.73%) were in Group I and 263 (93.26%) comprised Group II. Females with a high FR (Group II) had low progesterone and FSH (p=0.04 and p=0.02, respectively). Mature oocytes (OR: 0.35; 95% CI: 1–2.56) and IL-1β in the follicular phase (OR: 1.04; 95% CI: 0.000–1.20) were significant positive predictors of FR, while peak progesterone and FSH had a significant negative effect on it. Conclusion: Fertilization of oocytes in females with unexplained infertility depended on the maturity of the oocytes and optimal amounts of IL-1β released by developing follicles in the follicular phase of stimulation cycles of ICSI. PMID:27022334
Ohteru, Shoko; Kishine, Keiji
The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames sequentially to a destination; IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput becomes under error-free conditions. However, large data frames can reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and the error rate, the appropriate ranges of the burst transmission parameters can be narrowed down, and the buffer size needed for temporarily storing transmit or received data can be estimated. In this paper, we present a method featuring a simple algorithm for estimating the effective throughput from the burst transmission parameters and the error rate. The calculated throughput values agree well with the ones measured for actual wireless boards running an original MAC protocol based on IEEE 802.11. We also calculate throughput values for larger values of the burst transmission parameters, outside the assignable range of the wireless boards, and find the appropriate values of the burst transmission parameters.
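The kind of estimate the abstract describes can be sketched with a toy model. The PHY rate and per-burst overhead below are assumptions, not the paper's 802.11e figures, and retransmissions are ignored:

```python
# Hypothetical throughput model in the spirit of the Burst ACK scheme above.
# The 54 Mbit/s rate and 300 us per-burst overhead are assumed values;
# bit errors are taken as independent and retransmissions are ignored.
def effective_throughput(frame_bytes, n_burst, bit_error_rate,
                         rate_bps=54e6, overhead_s=300e-6):
    """Expected delivered bits per second for a burst of n_burst frames
    acknowledged by a single Burst ACK."""
    p_frame_ok = (1.0 - bit_error_rate) ** (frame_bytes * 8)
    payload_bits = frame_bytes * 8 * n_burst * p_frame_ok  # expected good bits
    airtime_s = n_burst * frame_bytes * 8 / rate_bps + overhead_s
    return payload_bits / airtime_s
```

Even this toy model reproduces the trade-off the paper studies: larger frames win when the channel is clean but lose when the bit error rate is high.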
What determines the sensitivity of the real exchange rate in Colombia to a terms of trade shock?
Parra-Alvarez, Juan Carlos; Mahadeva, Lavan
2012-01-01
We show that the sensitivity of the real exchange rate to terms of trade shocks is greater the lower the elasticity of final and derived demand between domestic and imported items. We develop a novel Kalman filter-based method to estimate these key parameters for Colombia, taking account of preference shifts, technological relative price trends and errors in sectoral data. We find that the elasticity of the input of the distribution sector in transforming imports from domestic consumption reliably indicates complementarity, implying that rigidities in this sector matter in determining...
Mass loss rate determinations of southern OB stars
Benaglia, P; Koribalski, B S
2001-01-01
A sample of OB stars (eleven Of, one O and one B supergiant) has been surveyed with the Australia Telescope Compact Array at 4.8 and 8.64 GHz with a resolution of 2'' -- 4''. Five stars were detected; three of them have negative spectral indices, consistent with non-thermal emission, and two have positive indices. The thermal radiation from HD 150135 and HD 163181 can be explained as coming from an optically thick ionized stellar wind. The non-thermal radiation from CD-47 4551, HD 124314 and HD 150136 possibly comes from strong shocks in the wind itself and/or in the wind colliding region if the stars have a massive early-type companion. The percentage of non-thermal emitters among detected O stars has increased up to ~50%. The Of star HD 124314 clearly shows flux density variations. Mass loss rates (or upper limits) were derived for all the observed stars and the results compared with non-radio measurements and theoretical predictions.
Determination of reactivity rates of silicate particle-size fractions
Angélica Cristina Fernandes Deus
2014-04-01
The efficiency of sources used for soil acidity correction depends on the reactivity rate (RR) and neutralization power (NP), indicated by effective calcium carbonate (ECC). Few studies establish the relative efficiency of reactivity (RER) for silicate particle-size fractions; therefore, the RER values established for lime are used. This study aimed to evaluate the reactivity of silicate materials as affected by particle size throughout incubation periods in comparison to lime, and to calculate the RER for silicate particle-size fractions. Six correction sources were evaluated: three slags of distinct origins, dolomitic and calcitic lime separated into four particle-size fractions (2, 0.84, 0.30 and <0.30 mm sieves), and wollastonite as an additional treatment. The treatments were applied to three soils with different texture classes. The dose of neutralizing material (calcium and magnesium oxides) was applied in equal quantities, the only variation being the particle size of the material. After a 90-day incubation period, the RER was calculated for each particle-size fraction, as well as the RR and ECC of each source. The neutralization of soil acidity by the same particle-size fraction of different sources showed distinct solubility and distinct reactions between silicates and lime. The RER for the slags were higher than the limits established by Brazilian legislation, indicating that the method used for limes should not be used for the slags studied here.
Breuze, G.; Fanet, H.; Serre, J. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d`Electronique et d`Instrumentation Nucleaire; Colas, D.; Garnero, E.; Hamet, T. [Electricite de France (EDF), 77 - Ecuelles (France)
1993-12-31
Fiber-optic data transmission from numerous multiplexed sensors is potentially attractive for nuclear plant applications. Multimode silica fiber behaviour during steady-state gamma-ray exposure is studied in a joint programme between LETI CE/SACLAY and EDF Renardieres: transmitted optical power and bit error rate have been measured on a 100 m optical fiber.
Kikuchi, Kazuro
2012-02-27
We develop a systematic method for characterizing semiconductor-laser phase noise using a low-speed offline digital coherent receiver. The field spectrum, the FM-noise spectrum, and the phase-error variance measured with such a receiver can completely describe the phase-noise characteristics of lasers under test. The sampling rate of the digital coherent receiver should be much higher than the phase-fluctuation speed; however, 1 GS/s is large enough for most single-mode semiconductor lasers. In addition to such phase-noise characterization, by interpolating data taken at 1.25 GS/s to form a data stream at 10 GS/s, we can predict the bit-error rate (BER) performance of multi-level modulated optical signals at 10 Gsymbol/s. The BER degradation due to the phase noise is well explained by the result of the phase-noise measurements.
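As one concrete piece of such a characterization, the linewidth of a laser with white FM noise can be recovered from sampled phase increments, whose variance grows as 2πΔν·τ for a Wiener phase walk. This is a generic textbook sketch, not the paper's procedure:

```python
import math

# Generic sketch: for a laser with Lorentzian linewidth dv (white FM noise),
# successive phase increments sampled at fs have variance 2*pi*dv/fs, so the
# linewidth follows directly from the sample variance of the increments.
def estimate_linewidth_hz(phase_samples_rad, sample_rate_hz):
    diffs = [b - a for a, b in zip(phase_samples_rad, phase_samples_rad[1:])]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return var * sample_rate_hz / (2.0 * math.pi)
```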
Determination of erosion rates with cosmogenic 26Al
Strack, E.; Heisinger, B.; Dockhorn, B.; Hartmann, F. J.; Korschinek, G.; Nolte, E.; Morteani, G.; Petitjean, C.; Neumaier, S.
1994-06-01
A preliminary depth profile of the long-lived cosmogenic radioisotope 26Al(5+) in quartz samples from pre-drill cores of the continental deep drill core (KTB) at Egerer Waldhaeusl, Poppenreuth and Puellersreuth was measured with accelerator mass spectrometry (AMS). These drill cores are situated in the Upper Palatinate (Oberpfalz, Germany), in a region with very low erosion. The cosmogenic production of 26Al in quartz was calculated. The calculation is based on two reactions: spallation reactions on silicon in the first few meters, and the capture reaction of slow negative muons, 28Si(μ⁻, ν_μxn)26Al, which is the dominant process below a few meters. The branching ratio of the μ⁻ capture reaction in silicon to 26Al(5+) was determined by irradiating a quartz sample with slow negative muons at PSI in Villigen (Switzerland) and measuring the produced 26Al by AMS. Calculations of 26Al depth profiles taking erosion into account were performed. The agreement between the depth profile calculated with no erosion and the measured preliminary profile is very satisfactory.
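The link between a measured concentration and erosion can be illustrated with the textbook steady-state relation for a surface sample, N = P0/(λ + ρε/Λ). This is a simplified spallation-only sketch (the paper's muon-capture contribution is omitted), with illustrative constants:

```python
# Textbook steady-state erosion relation, not the paper's full calculation.
# Constants are typical illustrative values for 26Al in quartz.
DECAY_26AL = 9.67e-7   # 26Al decay constant, 1/yr (half-life ~ 0.72 Myr)
ATT_LENGTH = 160.0     # spallation attenuation length, g/cm^2
DENSITY = 2.65         # rock density, g/cm^3

def erosion_rate_cm_per_yr(conc_atoms_per_g, prod_atoms_per_g_yr):
    """Invert N = P0 / (lambda + rho*eps/Lambda) for the erosion rate eps."""
    return (prod_atoms_per_g_yr / conc_atoms_per_g - DECAY_26AL) * ATT_LENGTH / DENSITY
```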
[Survey in hospitals. Nursing errors, error culture and error management].
Habermann, Monika; Cramer, Henning
2010-09-01
Knowledge about errors is important for designing safe nursing practice and its framework. This article presents the results of a survey on this topic, including data from a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as the most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. A large proportion of nurses reported having suffered from mental distress after error events. Nurses' perception, which focusses on medication errors, seems to be influenced by current discussions that are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortages.
Rate of meristem maturation determines inflorescence architecture in tomato
Park, Soon Ju; Jiang, Ke; Schatz, Michael C.; Lippman, Zachary B.
2012-01-01
Flower production and crop yields are highly influenced by the architectures of inflorescences. In the compound inflorescences of tomato and related nightshades (Solanaceae), new lateral inflorescence branches develop on the flanks of older branches that have terminated in flowers through a program of plant growth known as “sympodial.” Variability in the number and organization of sympodial branches produces a remarkable array of inflorescence architectures, but little is known about the mechanisms underlying sympodial growth and branching diversity. One hypothesis is that the rate of termination modulates branching. By performing deep sequencing of transcriptomes, we have captured gene expression dynamics from individual shoot meristems in tomato as they gradually transition from a vegetative state to a terminal flower. Surprisingly, we find thousands of age-dependent expression changes, even when there is little change in meristem morphology. From these data, we reveal that meristem maturation is an extremely gradual process defined molecularly by a “meristem maturation clock.” Using hundreds of stage-enriched marker genes that compose this clock, we show that extreme branching, conditioned by loss of expression of the COMPOUND INFLORESCENCE gene, is driven by delaying the maturation of both apical and lateral meristems. In contrast, we find that wild tomato species display a delayed maturation only in apical meristems, which leads to modest branching. Our systems genetics approach reveals that the program for inflorescence branching is initiated surprisingly early during meristem maturation and that evolutionary diversity in inflorescence architecture is modulated by heterochronic shifts in the acquisition of floral fate. PMID:22203998
Prószyński, Witold; Kwaśniak, Mieczysław
2016-12-01
The paper presents the results of investigating the effect of increasing observation correlations on the detectability and identifiability of a single gross error, the sensitivity of the outlier test, and the response-based measures of internal reliability of networks. To reduce the practically incomputable number of possible test options that arises when all the non-diagonal elements of the correlation matrix are treated as variables, its simplest representation was used: a matrix with all non-diagonal elements of equal value, termed uniform correlation. By raising the common correlation value incrementally, a sequence of matrix configurations was obtained corresponding to increasing levels of observation correlation. For each of the measures characterizing the above-mentioned features of network reliability, the effect is presented in diagram form as a function of the increasing level of observation correlations. The influence of observation correlations on the sensitivity of the w-test for correlated observations (Förstner 1983, Teunissen 2006) is investigated in comparison with the original Baarda w-test designed for uncorrelated observations, to determine the character of the sensitivity degradation expected of the latter when used for correlated observations. The correlation effects obtained for the different reliability measures exhibit mutual consistency to a satisfactory extent. As a by-product of the analyses, a simple formula valid for any arbitrary correlation matrix is proposed for transforming the Baarda w-test statistics into the w-test statistics for correlated observations.
System and method for determining an ammonia generation rate in a three-way catalyst
Sun, Min; Perry, Kevin L; Kim, Chang H
2014-12-30
A system according to the principles of the present disclosure includes a rate determination module, a storage level determination module, and an air/fuel ratio control module. The rate determination module determines an ammonia generation rate in a three-way catalyst based on a reaction efficiency and a reactant level. The storage level determination module determines an ammonia storage level in a selective catalytic reduction (SCR) catalyst positioned downstream from the three-way catalyst based on the ammonia generation rate. The air/fuel ratio control module controls an air/fuel ratio of an engine based on the ammonia storage level.
Munshi Mahbubur Rahman; Satya Prasad Majumder
2015-01-01
An analytical approach is presented to evaluate the bit error rate (BER) performance of a power line (PL) communication system under the combined influence of impulsive noise and background PL Gaussian noise. The Middleton Class-A model is used to characterize the impulsive noise. The analysis derives expressions for the signal-to-noise ratio and BER for orthogonal frequency division multiplexing (OFDM) with binary phase shift keying modulation with...
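The Middleton Class-A BER evaluation for BPSK reduces to a Poisson-weighted Gaussian mixture, which can be sketched as below. The parameter names (impulsive index `A`, background-to-impulsive power ratio `gamma`) and the variance-scaling convention are conventional assumptions of this sketch; the paper's OFDM and channel details are omitted.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber_class_a(snr, A, gamma, terms=50):
    """BER of BPSK in Middleton Class-A noise (sketch).
    snr   : Eb over total noise variance (linear scale)
    A     : impulsive index
    gamma : background-to-impulsive noise power ratio
    Conditioned on the Poisson state m, the noise variance scales by
    (m/A + gamma)/(1 + gamma), so the BER is a weighted sum of Q terms."""
    ber = 0.0
    for m in range(terms):
        p_m = math.exp(-A) * A**m / math.factorial(m)
        var_scale = (m / A + gamma) / (1.0 + gamma)
        ber += p_m * q_func(math.sqrt(2.0 * snr / var_scale))
    return ber

# Example: SNR of 6 dB (about 4.0 linear), strongly impulsive channel
ber = bpsk_ber_class_a(4.0, 0.1, 0.1)
```

As `gamma` grows the impulsive component vanishes and the expression collapses to the usual Gaussian-noise BPSK result Q(√(2·SNR)).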
Creswick, Nerida; Westbrook, Johanna Irene
2015-09-01
To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by the density of connections; the proportion of reciprocal relationships, by reciprocity; the number of colleagues to whom each person provided advice, by in-degree; and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B showed higher density and reciprocation and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81/admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks. Strategies to improve the advice-giving networks between senior
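The density and reciprocity measures used in this study have simple definitions over a directed adjacency matrix, sketched below on a toy network (the three-person example is illustrative, not ward data).

```python
def density(adj):
    """Directed network density: observed ties / possible ties."""
    n = len(adj)
    ties = sum(adj[i][j] for i in range(n) for j in range(n) if i != j)
    return ties / (n * (n - 1))

def reciprocity(adj):
    """Arc reciprocity: share of ties whose reverse tie also exists."""
    n = len(adj)
    ties = reciprocated = 0
    for i in range(n):
        for j in range(n):
            if i != j and adj[i][j]:
                ties += 1
                reciprocated += adj[j][i]
    return reciprocated / ties if ties else 0.0

# Toy advice network: 0 and 1 advise each other, 1 also advises 2
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 0, 0]]
print(density(adj))      # 3 ties out of 6 possible -> 0.5
print(reciprocity(adj))  # 2 of the 3 ties are reciprocated
```

In-degree (the study's measure of advice provision) is just the column sum of the same matrix.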
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and further L1 orbital missions are being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce different error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for the two orbits may differ. For this spacecraft tracking and control simulation problem, experimental design methods can also be used to determine the most significant uncertainties. That is, these methods can identify the error sources in the tracking and control problem that most affect the control cost (the output); they also produce an equation giving the approximate functional relationship between the error inputs and the output.
Determination of Obvious Clerical Errors in Patent Documents
陈优
2014-01-01
Clerical errors are a common problem in patent examination. Errors in patent application documents are often caused by the carelessness of applicants. Applicants typically claim that such errors are obvious clerical errors, so that they should be allowed to correct them by proper amendment or to clarify them by reasonable explanation. However, not all errors caused by writing mistakes can be determined to be obvious clerical errors. Through the analysis of a typical case, this paper discusses which writing errors caused by an applicant's carelessness should be determined to be obvious clerical errors in patent examining practice.
Marr, Greg C.
2003-01-01
Differencing multiple, simultaneous Tracking and Data Relay Satellite System (TDRSS) one-way Doppler passes can yield metric tracking data usable for orbit determination for (low-cost) spacecraft which do not have TDRSS transponders or local oscillators stable enough to allow the one-way TDRSS Doppler tracking data to be used for early mission orbit determination. Orbit determination error analysis results are provided for low Earth orbiting spacecraft for various early mission tracking scenarios.
Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments
Soury, Hamza
2013-07-01
This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results.
Bustamante, Dulce M; Lord, Cynthia C
2010-06-01
Infection rate is an estimate of the prevalence of arbovirus infection in a mosquito population. It is assumed that when infection rate increases, the risk of arbovirus transmission to humans and animals also increases. We examined some of the factors that can invalidate this assumption. First, we used a model to illustrate how the proportion of mosquitoes capable of virus transmission, or infectious, is not a constant fraction of the number of infected mosquitoes. Thus, infection rate is not always a straightforward indicator of risk. Second, we used a model that simulated the process of mosquito sampling, pooling, and virus testing and found that mosquito infection rates commonly underestimate the prevalence of arbovirus infection in a mosquito population. Infection rate should always be used in conjunction with other surveillance indicators (mosquito population size, age structure, weather) and historical baseline data when assessing the risk of arbovirus transmission.
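The pooled-testing estimators behind this abstract are compact enough to sketch. The minimum infection rate (MIR) assumes one infected mosquito per positive pool, while the maximum-likelihood estimate inverts the pool-positivity probability; the pool counts below are illustrative numbers, not data from the study.

```python
def mir(positive_pools, pools, pool_size):
    """Minimum infection rate: assumes at most one infected mosquito
    per positive pool (reported here per mosquito; often per 1000)."""
    return positive_pools / (pools * pool_size)

def mle_infection_rate(positive_pools, pools, pool_size):
    """Maximum-likelihood estimate for equal-sized pools: inverts
    P(pool positive) = 1 - (1 - p)^m for pool size m."""
    frac_positive = positive_pools / pools
    return 1.0 - (1.0 - frac_positive) ** (1.0 / pool_size)

# 5 positive pools out of 100 pools of 50 mosquitoes (illustrative)
print(mir(5, 100, 50))                 # 0.001
print(mle_infection_rate(5, 100, 50))  # slightly above the MIR
```

The MLE always sits at or above the MIR, consistent with the abstract's point that pooled infection rates tend to underestimate true prevalence.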
78 FR 78275 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2014
2013-12-26
... Federal Railroad Administration 49 CFR Part 219 Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2014 AGENCY: Federal Railroad Administration (FRA), DOT. ACTION: Notice of determination... therefore determined that the minimum annual random drug testing rate for the period January 1, 2014...
23 CFR 1240.13 - Determination of national average seat belt use rate.
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Determination of national average seat belt use rate... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF SEAT BELTS-ALLOCATIONS BASED ON SEAT BELT USE RATES Determination of Allocations § 1240.13 Determination of national...
Croft, Stephen [Oak Ridge National Laboratory (ORNL), One Bethel Valley Road, Oak Ridge, TN (United States); Burr, Tom [International Atomic Energy Agency (IAEA), Vienna (Austria); Favalli, Andrea [Los Alamos National Laboratory (LANL), MS E540, Los Alamos, NM 87545 (United States); Nicholson, Andrew [Oak Ridge National Laboratory (ORNL), One Bethel Valley Road, Oak Ridge, TN (United States)
2016-03-01
The declared linear density of {sup 238}U and {sup 235}U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of {sup 235}U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
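The nonlinear-versus-linearized fitting comparison in this abstract can be illustrated with a toy calibration. The two-parameter Padé form, the synthetic densities, and the noise level below are assumptions of this sketch, not the UNCL calibration data; the linearized fit uses the fact that 1/D is linear in 1/ρ.

```python
import numpy as np
from scipy.optimize import curve_fit

def pade(rho, a, b):
    """Two-parameter Pade form (hypothetical parameterization):
    coincidence rate D = a*rho / (1 + b*rho)."""
    return a * rho / (1.0 + b * rho)

# Synthetic calibration data (illustrative, not UNCL measurements)
rng = np.random.default_rng(1)
rho = np.linspace(5.0, 50.0, 10)       # U-235 linear density points
true_a, true_b = 2.0, 0.02
D = pade(rho, true_a, true_b) * (1.0 + rng.normal(0.0, 0.01, rho.size))

# Nonlinear fit on the raw data
(a_nl, b_nl), _ = curve_fit(pade, rho, D, p0=[1.0, 0.01])

# Linearizing transform: 1/D = (1/a)*(1/rho) + b/a is a straight line,
# but it also transforms and reweights the measurement errors
slope, intercept = np.polyfit(1.0 / rho, 1.0 / D, 1)
a_lin, b_lin = 1.0 / slope, intercept / slope
```

The paper's point is visible in this setup: the transform changes the error structure, which is why the authors prefer fitting the nonlinear model directly.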
Totok R. Biyanto
2016-06-01
A Safety Instrumented Function (SIF) is implemented on a system to prevent hazards in the process industry. Most SIF implementations in the process industry work in low-demand conditions. Safety evaluation of a SIF working in low demand can be carried out with a quantitative method: a simplified exponential equation derived from a MacLaurin series, here called the simplified equation. Using the simplified equation in high-demand conditions generates a higher Safety Integrity Level (SIL) and hence a higher safety cost, so the demand rate limit separating low and high demand would normally have to be determined to prevent this. The result of this research is a first-order equation that corrects the SIL error arising from the use of the simplified equation, without reference to the demand rate limit for low and high demand. The equation is applied to SIL determination for a SIF with 1oo1 voting. The new equation from this research is λ = 0.9428 λMC + 1.062E−04 H/P, with a 5% average error, where λMC is the value of λ from the simplified equation, the hazardous event frequency H is the probabilistic frequency of the hazard event, and P is the Probability of Failure on Demand (PFD) of the Independent Protection Layers (IPLs). The equation generated from this research can correct the SIL of a SIF for various H and P, so the SIL design problem can be solved and an appropriate SIL provided.
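The correction equation quoted in the abstract is directly computable; the helper below simply evaluates it (the numeric example is illustrative).

```python
def corrected_lambda(lambda_mc, H, P):
    """First-order SIL correction from the abstract:
    lambda = 0.9428 * lambda_MC + 1.062e-4 * H / P
    lambda_mc : value from the simplified (MacLaurin) equation
    H         : hazardous event frequency
    P         : PFD of the independent protection layers"""
    return 0.9428 * lambda_mc + 1.062e-4 * H / P

# Illustrative inputs: lambda_MC = 1e-3, H = 0.1 per year, P = 0.01
lam = corrected_lambda(1e-3, 0.1, 0.01)
```

The corrected λ would then be mapped to a SIL band in the usual IEC 61508 manner.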
Error estimation in plant growth analysis
Andrzej Gregorczyk
2014-01-01
A scheme is presented for calculating the errors of dry matter values that arise during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are given for the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the obtained error estimates is carried out, demonstrating the value of the joint application of statistical methods and error calculus in plant growth analysis.
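The growth characteristics named in the abstract have closed forms for the logistic curve, sketched below; the parameterization W(t) = K/(1 + exp(b − kt)) is one common convention, assumed here rather than taken from the paper.

```python
import math

def logistic(t, K, b, k):
    """Logistic growth curve W(t) = K / (1 + exp(b - k*t))."""
    return K / (1.0 + math.exp(b - k * t))

def growth_rate(t, K, b, k):
    """GR = dW/dt = k * W * (1 - W/K) for the logistic curve."""
    w = logistic(t, K, b, k)
    return k * w * (1.0 - w / K)

def relative_growth_rate(t, K, b, k):
    """RGR = (1/W) * dW/dt = k * (1 - W/K)."""
    return growth_rate(t, K, b, k) / logistic(t, K, b, k)
```

Given the fitted parameters' standard errors, first-order error calculus (propagating through these formulae) yields the absolute errors of GR and RGR that the paper tabulates.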
Smith, C. J.; Kim, B.; Zhang, Y.; Ng, T. N.; Beck, V.; Ganguli, A.; Saha, B.; Daniel, G.; Lee, J.; Whiting, G.; Meyyappan, M.; Schwartz, D. E.
2015-12-01
We will present our progress on the development of a wireless sensor network that will determine the source and rate of detected methane leaks. The targeted leak detection threshold is 2 g/min with a rate estimation error of 20% and localization error of 1 m within an outdoor area of 100 m2. The network itself is composed of low-cost, high-performance sensor nodes based on printed nanomaterials with expected sensitivity below 1 ppmv methane. High sensitivity to methane is achieved by modifying high surface-area-to-volume-ratio single-walled carbon nanotubes (SWNTs) with materials that adsorb methane molecules. Because the modified SWNTs are not perfectly selective to methane, the sensor nodes contain arrays of variously-modified SWNTs to build diversity of response towards gases with adsorption affinity. Methane selectivity is achieved through advanced pattern-matching algorithms of the array's ensemble response. The system is low power and designed to operate for a year on a single small battery. The SWNT sensing elements consume only microwatts. The largest power consumer is the wireless communication, which provides robust, real-time measurement data. Methane leak localization and rate estimation will be performed by machine-learning algorithms built with the aid of computational fluid dynamics simulations of gas plume formation. This sensor system can be broadly applied at gas wells, distribution systems, refineries, and other downstream facilities. It also can be utilized for industrial and residential safety applications, and adapted to other gases and gas combinations.
Zinbarg, Richard E; Suzuki, Satoru; Uliaszek, Amanda A; Lewis, Alison R
2010-05-01
Miller and Chapman (2001) argued that 1 major class of misuse of analysis of covariance (ANCOVA) or its multiple regression counterpart, analysis of partial variance (APV), arises from attempts to use an ANCOVA/APV to answer a research question that is not meaningful in the 1st place. Unfortunately, there is another misuse of ANCOVAs/APVs that arises frequently in psychopathology studies even when addressing consensually meaningful research questions. This misuse arises from inflated Type I error rates in ANCOVA/APV inferential tests of the unique association of the independent variable with the dependent variable when the covariate and independent variables are correlated and measured with error. Alternatives to conventional ANCOVAs/APVs are discussed, as are steps that can be taken to minimize the impact of this bias on drawing valid inferences when conventional ANCOVAs/APVs are used.
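The Type I error inflation described in this abstract is easy to reproduce by simulation: when the covariate is measured with error and correlates with the independent variable, the partial test of the IV rejects far more often than its nominal 5% even though the DV depends only on the true covariate. The effect sizes, sample size, and reliability values below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def rejection_rate(reliability, reps=2000, n=200):
    """Monte Carlo Type I error rate of the IV's partial regression test
    when the covariate is measured with error (reliability < 1) and the
    IV correlates with the covariate. The DV depends only on the true
    covariate, so every rejection is a false positive."""
    rejections = 0
    err_sd = np.sqrt((1.0 - reliability) / reliability)
    for _ in range(reps):
        c = rng.normal(size=n)                              # true covariate
        iv = 0.7 * c + np.sqrt(0.51) * rng.normal(size=n)   # r(IV, C) = 0.7
        dv = c + rng.normal(size=n)                         # DV depends only on C
        c_obs = c + err_sd * rng.normal(size=n)             # fallible covariate
        X = np.column_stack([np.ones(n), c_obs, iv])
        beta, *_ = np.linalg.lstsq(X, dv, rcond=None)
        resid = dv - X @ beta
        s2 = resid @ resid / (n - 3)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
        rejections += abs(beta[2] / se) > 1.96              # nominal 5% test
    return rejections / reps
```

With a perfectly reliable covariate the rate sits near the nominal 0.05; with reliability around 0.6 it inflates far above nominal, which is exactly the misuse the authors warn about.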
47 CFR 65.700 - Determining the maximum allowable rate of return.
2010-10-01
... CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Maximum Allowable Rates of Return § 65.700 Determining the maximum allowable rate of return. (a) The maximum allowable rate of return for any exchange carrier's earnings on any access service category shall...
Beall, Jeffrey; Kafadar, Karen
2004-01-01
Typographical errors in bibliographic records can cause retrieval problems in online catalogs. This study examined one hundred typographical errors in records in the OCLC WorldCat database. The local catalogs of five libraries holding the items described by the bibliographic records with typographical errors were searched to determine whether each library had corrected the errors. The study found that only 35.8 percent of the errors had been corrected. Knowledge of copy cataloging error rates...
Dick, Josef
2010-01-01
We study numerical approximations of integrals $\int_{[0,1]^s} f(\mathbf{x}) \,\mathrm{d}\mathbf{x}$ by averaging the function at some sampling points. Monte Carlo (MC) sampling yields a convergence of the root mean square error (RMSE) of order $N^{-1/2}$ (where $N$ is the number of samples). Quasi-Monte Carlo (QMC) sampling on the other hand achieves a convergence of order $N^{-1+\varepsilon}$, for any $\varepsilon > 0$. Randomized QMC (RQMC), a combination of MC and QMC, achieves a RMSE of order $N^{-3/2+\varepsilon}$. A combination of RQMC with local antithetic sampling achieves a convergence of the RMSE of order $N^{-3/2-1/s+\varepsilon}$ (where $s \ge 1$ is the dimension). QMC, RQMC and RQMC with local antithetic sampling require that the integrand has some smoothness (for instance, bounded variation). Stronger smoothness assumptions on the integrand do not improve the convergence of the above algorithms further. This paper introduces a new RQMC algorithm, for which we prove that it achieves a convergence of the RMS...
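The MC-versus-QMC gap described in the abstract can be demonstrated with a basic Halton-type construction; the integrand, sample count, and seed below are illustrative choices, and the van der Corput sequence is one standard low-discrepancy point set (not the paper's new algorithm).

```python
import numpy as np

def van_der_corput(n, base):
    """First n points of the radical-inverse sequence in the given base
    (coordinates of Halton points)."""
    pts = np.empty(n)
    for i in range(n):
        f, q, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            q += f * (k % base)
            k //= base
        pts[i] = q
    return pts

def f(x, y):
    # Smooth test integrand; exact integral over [0,1]^2 is 2*sin(1) - sin(2)
    return np.sin(x + y)

N = 4096
exact = 2.0 * np.sin(1.0) - np.sin(2.0)

# Plain Monte Carlo: RMSE of order N^{-1/2}
rng = np.random.default_rng(7)
xy = rng.random((N, 2))
mc_est = f(xy[:, 0], xy[:, 1]).mean()

# Halton-type QMC in coprime bases: error of order N^{-1+eps}
qmc_est = f(van_der_corput(N, 2), van_der_corput(N, 3)).mean()
```

For a smooth two-dimensional integrand at this sample size, the QMC estimate is typically one to two orders of magnitude closer to the exact value than plain MC.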
Petrova, Natalia; Kocoulin, Valerii; Nefediev, Yurii
2016-07-01
At Kazan University, computer simulation is carried out for observation of lunar physical libration in planned projects that will install measuring equipment on the lunar surface. One such project is the ILOM project (Japan), in which an optical telescope with a CCD will be placed at a lunar pole. As a result, the selenographic coordinates (x and y) of a star will be determined with an accuracy of 1 ms of arc. On the basis of the analytical theory of physical libration we developed a technique for solving the inverse problem of libration, and we have already shown, for example, that an error of about ɛ seconds in the determination of selenographic coordinates does not lead to errors larger than 1.414ɛ in the libration angles ρ and Iσ. Libration in longitude is not determined from observations of the polar star (Petrova et al., 2012). The accuracy of the libration in the inverse problem depends on the accuracy of the star coordinates α and δ taken from star catalogs. Checking this influence is the task of the present study. For the simulation we developed software that selects the stars falling in the field of view of the lunar telescope during the observation period. Equatorial coordinates of stars were taken from several fundamental catalogs: UCAC2-BSS, Hipparcos, Tycho, FK6 (parts I, III) and the Astronomical Almanac. An analysis of these catalogs from the point of view of the accuracy of the star coordinates represented in them was performed by Nefediev et al., 2013. The largest errors, 20-70 ms of arc, are found in the catalogs UCAC2 and Tycho; the others have errors of about a millisecond of arc. We simulated the observations with the mentioned errors and obtained the following results. 1. An error Δδ in the declination of the star causes an error of the same order in the libration parameters ρ and Iσ, while the sensitivity of the libration to errors Δα is ten times smaller. Fortunately, due to statistics (30 to 70, depending on
What determines the sensitivity of the real exchange rate in Colombia to a terms of trade shock?
Parra-Alvarez, Juan Carlos; Mahadeva, Lavan
2012-01-01
We show that the sensitivity of the real exchange rate to terms of trade shocks is greater the lower the elasticity of final and derived demand between domestic and imported items. We develop a novel Kalman filter-based method to estimate these key parameters for Colombia, taking account of preference shifts, technological relative price trends and errors in sectoral data. We find that the elasticity of the input of the distribution sector in transforming imports from domestic consumption reliably indicates complementarity, implying that rigidities in this sector matter in determining...
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14
Studies of inclusive direct-{gamma} production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between {radical}s = 50 and 500 GeV. Also, rates were computed for direct-{gamma} + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
Borot, Maxence; Denis de Senneville, B; Maenhout, M; Hautvast, G; Binnekamp, D; Lagendijk, J J W; van Vulpen, M; Moerland, M A
2016-01-01
The development of magnetic resonance (MR) guided high dose rate (HDR) brachytherapy for prostate cancer has gained increasing interest for delivering a high tumor dose safely in a single fraction. To support needle placement in the limited workspace inside the closed-bore MRI, a single-needle MR-co
EXCHANGE RATE DETERMINATION IN PAKISTAN: EVIDENCE BASED ON PURCHASING POWER PARITY THEORY
Khan, Muhammad Arshad; Qayyum, Abdul
2007-01-01
This paper presents the empirical evidence on purchasing power parity (PPP) for Pak-rupee vis-à-vis US-dollar exchange rate using Johansen (1988) and Johansen and Juselius (1990) multivariate cointegration and bound testing approach to cointegration (Pesaran et al., 2001) over the period 1982Q2-2005Q4. We find a considerable support for the existence of long-run PPP. Furthermore, the results of error-correction suggest that nominal exchange rate plays an important role in eliminating deviatio...
Cable Modems' Transmitted RF: A Study of SNR, Error Rates, Transmit Levels, and Trouble Call Metrics
Tebbetts, Jo A.
2013-01-01
Hypotheses were developed and tested to measure the cable modems operational metrics response to a reconfiguration of the cable modems' transmitted RF applied to the CMTS. The purpose of this experiment was to compare two groups on the use of non-federal RF spectrum to determine if configuring the cable modems' transmitted RF from 25.2…
Chadwick, Liam
2012-03-12
Health Care Failure Modes and Effects Analysis (HFMEA®) is an established tool for risk assessment in health care, but a number of deficiencies have been identified in the method. A new method, the Systems and Error Analysis Bundle for Health Care (SEABH), was developed to address these deficiencies. SEABH has been applied to a number of medical processes as part of its validation and testing. One of these, Low Dose Rate (LDR) prostate brachytherapy, is reported in this paper. The case study supported the validity of SEABH with respect to its capacity to address the weaknesses of HFMEA®.
Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso
2010-01-01
We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature biased intensity modulation (IM), in terms of bit-error-rate (BER) and optical signal-to-noise-ratio (OSNR). In both links, self-heterodyne receivers perform down-conversion of the radio frequency (RF) subcarrier signal. A theoretical model including noise analysis is constructed to calculate the Q factor and estimate the BER performance. Furthermore, we experimentally validate our predictions from the theoretical modeling.
Zöhrer, Evelyn; Fischler, Björn; D'Antiga, Lorenzo; Debray, Dominique; Dezsofi, Antal; Haas, Dorothea; Hadzic, Nedim; Jacquemin, Emmanuel; Lamireau, Thierry; Maggiore, Giuseppe; McKiernan, Pat J; Calvo, Pier Luigi; Verkade, Henkjan J; Hierro, Loreto; McLin, Valerie; Baumann, Ulrich; Gonzales, Emmanuel
2017-01-01
OBJECTIVE: Inborn errors of primary bile acid (BA) synthesis are genetic cholestatic disorders leading to accumulation of atypical BA with deficiency of normal BA. Unless treated with primary BA, chronic liver disease usually progresses to cirrhosis and liver failure before adulthood. We sought to d
Arthur W Pightling
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: (i) depth of sequencing coverage, (ii) choice of reference-guided short-read sequence assembler, (iii) choice of reference genome, and (iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers
23 CFR Appendix D to Part 1240 - Determination of National Average Seat Belt Use Rate
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Determination of National Average Seat Belt Use Rate D... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GUIDELINES SAFETY INCENTIVE GRANTS FOR USE OF SEAT BELTS-ALLOCATIONS BASED ON SEAT BELT USE RATES Pt. 1240, App. D Appendix D to Part 1240—Determination of National...
7 CFR 1610.10 - Determination of interest rate on Bank loans.
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Determination of interest rate on Bank loans. 1610.10..., DEPARTMENT OF AGRICULTURE LOAN POLICIES § 1610.10 Determination of interest rate on Bank loans. (a) All loan fund advances made on or after December 22, 1987 under Bank loans approved on or after October 1,...
26 CFR 1.430(h)(2)-1 - Interest rates used to determine present value.
2010-04-01
... 26 Internal Revenue 5 2010-04-01 2010-04-01 false Interest rates used to determine present value... rates used to determine present value. (a) In general—(1) Overview. This section provides rules relating... present value of the benefits that are included in the target normal cost and the funding target for...
An exchange rate determination model for central banks' interventions in financial markets
林浚清; 黄祖辉; 战明华
2002-01-01
We establish an exchange rate determination model for central banks' interventions in financial markets. The model shows that central banks can adjust the exchange rate by several policy instruments and that different instruments may have different effects on exchange rate determination. It specifies potential policy instruments for central banks as well as their policy effects. Based on these effects, feasible matches of policy instruments in contingent intervention are put forth.
Soury, Hamza
2017-03-14
This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have non-line of sight (NLOS) interfering link. Consequently, we study the interferer limited problem that appears between NLOS HD users-pair that are scheduled on the same FD channel. The distribution of the interference is first characterized via its distribution function, which is derived in closed form. Then, a comprehensive performance assessment for the proposed pairing scheme is provided by assuming Extended Generalized-
Anastasio, D. J.; Kodama, K. P.; Hinnov, L.; Pares, J. M.
2015-12-01
High-resolution rock magnetic cyclostratigraphy in growth strata is used to reconstruct unsteady fault-related folding rates at the regional Pico del Aguila anticline. Published and new magnetobiostratigraphy was used to determine absolute time and to calibrate a cyclostratigraphy based on anhysteretic remanent magnetization (ARM) intensity variations. The ARM data series was tuned to the orbital eccentricity model to remove the effects of sedimentation rate changes between the late Lutetian-middle Priabonian chron boundaries (C19-C15) during syntectonic deposition. Sediment accumulation rates increased up section to 1.15 m/kyr during delta progradation, with large oscillations in sedimentation rate in phase with eccentricity cyclicity. The ARM data served as a proxy for climate change by recording Milankovitch cyclicity in the detrital magnetite concentration of deposits resulting from runoff variability in the wedge-top basin. Incremental tilting rates were calculated between selected growth horizons over 5 myr of fold growth. Limb tilting began rapidly in the Eocene and then decreased more slowly at variable rates, punctuated by periods of tectonic quiescence. Calculated folding rates varied between 0˚ and 100˚/myr and averaged 14˚/myr over 100 kyr time increments. Accuracy and precision in the rate calculations include spatial errors associated with outcrop reconstruction and down-plunge projection (<10 m and 10⁴ yr), bedding attitude (a few degrees), absolute chron ages (10⁵ yr), sample spacing (10³ yr), sample size (10² yr), and orbital tuning (10⁴ yr). The absolute age resolution on deformation is a few 100 kyr, while the uncertainty in the relative time between growth horizons is smaller, estimated at ~20 kyr. Variation in folding rates of the Pico del Aguila anticline is attributed to unsteady thrusting in the fold's core.
The normal range and determinants of the intrinsic heart rate in man.
Opthof, T
2000-01-01
Jose and Collison published a study on the normal range and the determinants of the intrinsic heart rate in man in Cardiovascular Research in 1970 [Jose AD, Collison D. The normal range and determinants of the intrinsic heart rate in man. Cardiovasc Res 1970;4:160-167]. The intrinsic heart rate is the heart rate under complete pharmacological blockade. They showed that (i) the resting heart rate is lower than the intrinsic heart rate and that (ii) the intrinsic heart rate declines with age. They also established that the variability in intrinsic heart rate between individuals of the same age is of the same order as the effect of ageing at the population level. This update discusses the relevance of these data, with emphasis on sinus node function and autonomic balance. The paper of Jose and Collison has been cited more than 200 times; the frequency of citation started to increase more than 10 years after publication.
Determination of growth rates as an input of the stock discount valuation models
Momčilović Mirela
2013-01-01
When determining the value of stocks with different stock discount valuation models, one of the important inputs is the expected growth rate of dividends, earnings, cash flows, and other relevant parameters of the company. The growth rate can be determined in three basic ways: by extrapolation of historical data, from the professional assessment of analysts who follow the company's business, and from the company's fundamental indicators. The aim of this paper is to depict the theoretical basis and practical application of the stated methods for growth rate determination and to indicate their advantages and deficiencies.
Methane combustion kinetic rate constants determination: an ill-posed inverse problem analysis
Bárbara D. L. Ferreira
2013-01-01
Methane combustion was studied using the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science because it notably reduces computational effort. In the inversion procedure studied here, rate constants are obtained from [CO] concentration data. However, when inherent experimental errors in chemical concentrations are considered, an ill-conditioned inverse problem must be solved, for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen for its numerical stability and robustness. The proposed methodology was compared against the Simplex and Levenberg-Marquardt methods, the ones most commonly used for such optimization problems.
Harju, Jarkko; Vehkaoja, Antti; Lindroos, Ville; Kumpulainen, Pekka; Liuhanen, Sasu; Yli-Hankala, Arvi; Oksala, Niku
2016-10-17
Alterations in arterial blood oxygen saturation, heart rate (HR), and respiratory rate (RR) are strongly associated with intra-hospital cardiac arrests and resuscitations. A wireless, easy-to-use, and comfortable method for monitoring these important clinical signs would be highly useful. We investigated whether the Nellcor™ OxiMask MAX-FAST forehead sensor could provide data for vital sign measurements when located at the distal forearm instead of its intended location at the forehead, to provide improved comfort and easier placement. In a prospective setting, we recruited 30 patients undergoing surgery requiring postoperative care. At the postoperative care unit, patients were monitored for two hours with a standard patient monitor and with a study device equipped with a Nellcor™ forehead SpO2 sensor. The readings were electronically recorded and compared in post hoc analysis using Bland-Altman plots, Spearman's correlation, and root-mean-square error (RMSE). The Bland-Altman plot showed that saturation (SpO2) differed by a mean of -0.2 percentage points (SD, 4.6), with a patient-weighted Spearman's correlation (r) of 0.142 and an RMSE of 4.2 points. For HR measurements, the mean difference was 0.6 bpm (SD, 2.5), r = 0.997, and RMSE = 1.8. For RR, the mean difference was -0.5 breaths/min (SD, 4.1), r = 0.586, and RMSE = 4.0. The SpO2 readings showed a low mean difference but also a low correlation and high RMSE, indicating that the Nellcor™ saturation sensor cannot reliably assess oxygen saturation at the forearm when compared to finger PPG measurements.
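The agreement statistics used in the study above (Bland-Altman bias, SD of the differences, RMSE) are easy to reproduce; a minimal sketch, with invented paired SpO2 readings rather than the study's data:

```python
import math

def agreement(reference, test):
    """Bland-Altman bias (mean difference), sample SD of the
    differences, and root-mean-square error of paired readings."""
    diffs = [t - r for r, t in zip(reference, test)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, sd, rmse

# Hypothetical paired SpO2 readings: finger reference vs forearm sensor
ref = [97, 96, 98, 95, 99, 96]
fore = [96, 97, 97, 96, 98, 95]
bias, sd, rmse = agreement(ref, fore)
```

Note that bias and RMSE answer different questions: a sensor can have near-zero bias (errors cancel on average) while still being unreliable for any individual reading, which is exactly the SpO2 pattern reported above.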
Evaluation of drug administration errors in a teaching hospital.
Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre
2012-03-12
Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing, and administration. We aimed to determine the incidence, type, and clinical importance of drug administration errors and to identify risk factors. Prospective study based on a disguised observation technique in four wards of a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type, and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong-time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong-time errors, ten occurring simultaneously with another type of error, resulting in an error rate without wrong-time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, drug classification (ATC), and the number of patients under the nurse's care. Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions.
Ana Carolina Souza-Oliveira
Ventilator-associated pneumonia (VAP) is the most prevalent nosocomial infection in intensive care units and is associated with high mortality rates (14-70%). Aim: This study evaluated factors influencing mortality of patients with VAP, including bacterial resistance, prescription errors, and de-escalation of antibiotic therapy. Methods: This retrospective study included 120 cases of VAP admitted to the adult intensive care unit of the Federal University of Uberlândia. The chi-square test was used to compare qualitative variables, Student's t-test for quantitative variables, and multiple logistic regression analysis to identify independent predictors of mortality. Findings: De-escalation of antibiotic therapy and resistant bacteria did not influence mortality. Mortality was 4 times and 3 times higher, respectively, in patients who received an inappropriate antibiotic loading dose and in patients whose antibiotic dose was not adjusted for renal function. Multiple logistic regression analysis revealed that incorrect adjustment for renal function was the only independent factor associated with increased mortality. Conclusion: Prescription errors influenced mortality of patients with VAP, underscoring the challenge of proper VAP treatment, which requires continuous reevaluation to ensure that the clinical response to therapy meets expectations.
Method and device for determining the knock rating of motor fuels
Kjuregyan, S.K.; Kazarian, S.A.; Dovlatov, I.A.
1996-03-14
The invention relates to the oil refining and petrochemical industry and concerns, in particular, techniques for determining the knock rating of motor fuels. The proposed method of determining the knock rating of motor fuels involves thermostatic control of a reaction vessel of constant volume, feeding the fuel-air mixture into the vessel, atomization of the mixture under excess pressure, and ignition of the mixture. The knock rating is indicated by the intensity of the knocking, which in turn is determined from the magnitude of the signal from a knock sensor in the reaction vessel. The proposed device for determining the knock rating of motor fuels comprises a reaction vessel with inlet and outlet valves, provided with means of thermostatic control. A spark plug is provided in the reaction vessel, and a knock sensor is arranged opposite the spark plug. (author)
There has been an increasing use of both solid metal and microfabricated iridium electrodes as substrates for various types of electroanalysis. However, investigations to determine heterogeneous electron transfer rate constants on iridium, especially at an electron beam evapor...
Trenk, Lisa; Kuhl, Juliane; Aurich, Jörg; Aurich, Christine; Nagel, Christina
2015-11-01
In this study, fetomaternal electrocardiograms were recorded once weekly in cattle during the last 14 weeks of gestation. From the recorded beat-to-beat (RR) intervals, heart rate and the heart rate variability (HRV) variables standard deviation of the RR intervals (SDRR) and root mean square of successive RR differences (RMSSD) were calculated. To differentiate between effects of lactation and gestation, pregnant lactating (PL) cows (n = 7) and pregnant nonlactating (PNL) heifers (n = 8) were included. We hypothesized that lactation is associated with stress detectable by HRV analysis. We also followed the hypothesis that heart rate and HRV are influenced by growth and maturation of the fetus toward term. Maternal heart rate changed over time in both groups, and in PL cows it decreased with drying-off. During the last 5 weeks of gestation, maternal heart rate increased in both groups but was lower in PL cows than in PNL heifers. Maternal HRV did not change over time, but SDRR was significantly higher in PL cows than in PNL heifers, and significant group × time interactions existed. On the basis of HRV, undisturbed pregnancies are thus not a stressor for the dam in cattle. Fetal heart rate decreased from week 14 to week 1 before birth, with no difference between groups. Gestational age thus determines heart rate in the bovine fetus. The HRV variables SDRR and RMSSD increased toward the end of gestation in fetuses carried by cows but not in those carried by heifers. The increase in HRV indicates maturation of fetal cardiac regulation, which may be overrun by high sympathoadrenal activity in fetuses carried by heifers, as suggested by their low HRV.
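The two HRV variables named above are simple functions of the RR interval series. A minimal sketch, with illustrative RR values rather than the study's recordings:

```python
import math
import statistics

def hrv(rr_ms):
    """SDRR: sample standard deviation of RR intervals (ms).
    RMSSD: root mean square of successive RR differences (ms)."""
    sdrr = statistics.stdev(rr_ms)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    heart_rate = 60000.0 / statistics.mean(rr_ms)  # beats per minute
    return sdrr, rmssd, heart_rate

sdrr, rmssd, hr = hrv([800, 810, 790, 805, 795])
```

SDRR reflects overall variability of the series, while RMSSD weights beat-to-beat changes, which is why the two can diverge between groups as in the fetal results above.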
Determinants of the ZAR/USD exchange rate and policy implications: A simultaneous-equation model
Yu Hsing
2016-12-01
This paper examines the determinants of the South African rand/US dollar (ZAR/USD) exchange rate based on demand and supply analysis. Applying the EGARCH method, the paper finds that the ZAR/USD exchange rate is positively associated with the South African government bond yield, US real GDP, the US stock price, and the South African inflation rate, and negatively influenced by the 10-year US government bond yield, South African real GDP, the South African stock price, and the US inflation rate. The adoption of a free-floating exchange rate regime has reduced the value of the rand vs. the US dollar.
Mihaela SIMIONESCU
2016-03-01
Inflation rate determinants for the USA have been analyzed in this study starting with 2008, when the American economy was already in crisis. As a novelty, this research uses Bayesian econometric methods to identify the determinants of the monthly inflation rate in the USA. Stochastic Search Variable Selection (SSVS) has been applied with a subjective acceptance probability of 0.3. The results are also validated by economic theory. The monthly inflation rate was influenced during 2008-2015 by the unemployment rate, the exchange rate, crude oil prices, the trade-weighted U.S. Dollar Index, and the M2 Money Stock. The study might be continued by considering other potential determinants of the inflation rate.
B. Verheggen
2006-01-01
Classical nucleation theory is unable to explain the ubiquity of nucleation events observed in the atmosphere. This shows a need for an empirical determination of the nucleation rate. Here we present a novel inverse modeling procedure to determine particle nucleation and growth rates based on consecutive measurements of the aerosol size distribution. The particle growth rate is determined by regression analysis of the measured change in the aerosol size distribution over time, taking into account the effects of processes such as coagulation, deposition, and/or dilution. This allows the growth rate to be determined with a higher time resolution than can be deduced from inspecting contour plots ('banana plots'). Knowing the growth rate as a function of time enables evaluation of the time of nucleation of measured particles of a certain size. The nucleation rate is then obtained by integrating the particle losses from the time of measurement back to the time of nucleation. The regression analysis can also be used to determine or verify the optimum value of other parameters of interest, such as the wall loss or coagulation rate constants. As an example, the method is applied to smog chamber measurements. This program offers a powerful interpretive tool to study empirical aerosol population dynamics in general, and nucleation and growth in particular.
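A heavily simplified sketch of the regression idea: fit the change of a mode diameter over time by ordinary least squares to obtain an apparent growth rate. The paper's procedure additionally corrects for coagulation, deposition, and dilution; the data here are invented:

```python
def growth_rate(times_h, diam_nm):
    """Ordinary least-squares slope of mode diameter vs time:
    the apparent particle growth rate in nm/h (loss processes
    are neglected in this toy version)."""
    n = len(times_h)
    tm = sum(times_h) / n
    dm = sum(diam_nm) / n
    num = sum((t - tm) * (d - dm) for t, d in zip(times_h, diam_nm))
    den = sum((t - tm) ** 2 for t in times_h)
    return num / den

t = [0.0, 0.5, 1.0, 1.5, 2.0]          # hours since nucleation burst
d = [3.0, 5.5, 8.0, 10.5, 13.0]        # hypothetical mode diameters (nm)
gr = growth_rate(t, d)                 # nm/h
```

Given such a growth rate history, the nucleation time of a particle measured at a given size follows by integrating the growth backward in time, as described above.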
A Characterization of Prediction Errors
Meek, Christopher
2016-01-01
Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove it.
49 CFR 219.602 - FRA Administrator's determination of random drug testing rate.
2010-10-01
... Random Alcohol and Drug Testing Programs § 219.602 FRA Administrator's determination of random drug... percentage rate for random drug testing must be 50 percent of covered employees. (b) The FRA Administrator's decision to increase or decrease the minimum annual percentage rate for random drug testing is based on the...
75 FR 79308 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2011
2010-12-20
... Federal Railroad Administration 49 CFR Part 219 Alcohol and Drug Testing: Determination of Minimum Random... rail industry random testing positive rates were .037 percent for drugs and .014 percent for alcohol. Because the industry-wide random drug testing positive rate has remained below 1.0 percent for the last...
2013-11-18
... 2014 Railroad Experience Rating Proclamations, Monthly Compensation Base and Other Determinations... experience-based employer contribution rates for the following year. The RRB is further required by section 8... under the Act cannot be considered subsidiary remuneration if the employee's base year compensation...
5 CFR 536.206 - Determining an employee's rate of basic pay under grade retention.
2010-01-01
... employee's rate of basic pay under grade retention. (a) General. (1) When an employee becomes entitled to... conversion), the employee would be eligible for pay retention under subpart C of this part to the same extent... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Determining an employee's rate of basic...
40 CFR 75.36 - Missing data procedures for heat input rate determinations.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Missing data procedures for heat input... (CONTINUED) AIR PROGRAMS (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.36 Missing data procedures for heat input rate determinations. (a) When hourly heat input rate...
Jongberg, Sisse; Lund, Marianne Nissen; Pattison, David I.
2016-01-01
. This approach allows determination of apparent rate constants for the oxidation of proteins by haem proteins of relevance to food oxidation and should be applicable to other systems. A similar approach has provided approximate apparent rate constants for the reduction of MbFe(IV)=O by catechin and green tea...
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error count or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte ... 32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND Flash memory that requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera, and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with a conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte, or 2 KByte is used, and 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
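The monitoring-driven codeword escalation can be sketched as a lookup policy: pick the smallest (cheapest) codeword whose correction strength still covers the observed raw BER. The threshold values below are invented placeholders, not the paper's measured limits:

```python
# Candidate ECC codeword sizes in bytes, weakest (cheapest) first.
CODEWORDS = [512, 1024, 2048, 4096, 8192, 16384, 32768]

def pick_codeword(raw_ber, thresholds):
    """Return the smallest codeword assumed able to handle raw_ber.
    thresholds[i] is the (hypothetical) max raw BER codeword i covers."""
    for size, limit in zip(CODEWORDS, thresholds):
        if raw_ber <= limit:
            return size
    return CODEWORDS[-1]  # end of life: strongest ECC available

# Illustrative, monotonically increasing BER limits (invented numbers).
th = [1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2]
cw_young = pick_codeword(5e-5, th)   # early-life drive, moderate BER
cw_old = pick_codeword(1e-1, th)     # worn-out drive, beyond all limits
```

This captures the "best-effort" behavior described above: short codewords (low decode power) dominate most of the drive's life, and the 32 KByte codeword is reserved for the end of life.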
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semi-analytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak-to-moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as the Kolmogorov microscale, the relative strengths of temperature and salinity fluctuations, the rate of dissipation of the mean squared temperature, and the rate of dissipation of turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
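The final step above (averaging a conditional BER over a log-normal intensity density whose variance is the scintillation index) can be sketched numerically. The conditional-BER convention and the parameter values are illustrative assumptions; the paper's oceanic-spectrum expressions are not reproduced here:

```python
import math

def ber_lognormal(snr, si2, n=2001, xmax=8.0):
    """Average BER of an intensity-modulated link under log-normal
    fading. si2 is the scintillation index (variance of normalized
    intensity); the log-intensity is Gaussian with <I> = 1.
    Conditional BER taken as 0.5*erfc(snr*I / (2*sqrt(2))), one
    common convention (an assumption, not the paper's exact form)."""
    s2 = math.log(1.0 + si2)      # variance of ln(I)
    mu = -0.5 * s2                # ensures mean intensity of 1
    s = math.sqrt(s2)
    h = 2.0 * xmax / (n - 1)
    total = 0.0
    for i in range(n):            # integrate over standard normal x
        x = -xmax + i * h
        intensity = math.exp(mu + s * x)
        weight = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        cond_ber = 0.5 * math.erfc(snr * intensity / (2.0 * math.sqrt(2.0)))
        total += weight * cond_ber * h
    return total

b_clear = ber_lognormal(4.0, 1e-9)   # negligible scintillation
b_turb = ber_lognormal(4.0, 0.5)     # moderate scintillation
```

Because the conditional BER is convex in intensity, averaging over fading always raises the mean BER relative to the fade-free value, which is why stronger scintillation degrades the link.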
Backtracking and error correction in DNA transcription
Voliotis, Margaritis; Cohen, Netta; Molina-Paris, Carmen; Liverpool, Tanniemola
2008-03-01
Genetic information is encoded in the nucleotide sequence of the DNA. This sequence contains the instruction code of the cell, determining protein structure and function, and hence cell function and fate. The viability and endurance of organisms crucially depend on the fidelity with which genetic information is transcribed/translated (during mRNA and protein production) and replicated (during DNA replication). However, thermodynamics introduces significant fluctuations which would incur massive error rates if efficient proofreading mechanisms were not in place. Here, we examine a putative mechanism for error correction during DNA transcription, which relies on backtracking of the RNA polymerase (RNAP). We develop an error correction model that incorporates RNAP translocation, backtracking pauses, and mRNA cleavage. We calculate the error rate as a function of the relevant rates (translocation, cleavage, backtracking, and polymerization) and show that its theoretical limit is equivalent to that accomplished by a multiple-step kinetic proofreading mechanism.
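A toy single-round version of such a scheme illustrates how cleavage biased toward misincorporations lowers the error rate. The rates are invented, and the paper's model couples translocation, backtracking, and cleavage far more completely:

```python
def error_rate(k_r, k_w, k_pol, k_cl_r, k_cl_w):
    """Toy one-round proofreading model (not the paper's full kinetics):
    a wrong base is incorporated with branching ratio k_w/(k_r + k_w),
    then either survives (continued polymerization at k_pol) or is
    removed (cleavage at k_cl); cleavage is faster for wrong bases."""
    f_r = k_pol / (k_pol + k_cl_r)   # correct base survives cleavage
    f_w = k_pol / (k_pol + k_cl_w)   # wrong base survives cleavage
    return (k_w * f_w) / (k_r * f_r + k_w * f_w)

# Without cleavage the error rate is just the incorporation bias.
base = error_rate(1000.0, 1.0, 10.0, 0.0, 0.0)     # = 1/1001
# With cleavage strongly biased toward errors, fidelity improves.
proof = error_rate(1000.0, 1.0, 10.0, 0.1, 100.0)
```

Chaining several such discrimination rounds multiplies the improvement, which is the essence of the multiple-step kinetic proofreading limit mentioned above.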
77 FR 75896 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2013
2012-12-26
... Federal Railroad Administration 49 CFR Part 219 Alcohol and Drug Testing: Determination of Minimum Random.... According to data from FRA's Management Information System, the rail industry's random drug testing positive... (Administrator) has therefore determined that the minimum annual random drug testing rate for the period January...
76 FR 80781 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2012
2011-12-27
... Federal Railroad Administration 49 CFR Part 219 RIN 2130-AA81 Alcohol and Drug Testing: Determination of... random drug testing ] positive rate has remained below 1.0 percent for the last two years. The Federal Railroad Administrator (Administrator) has therefore determined that the minimum annual random drug testing...
75 FR 1547 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2010
2010-01-12
... Federal Railroad Administration 49 CFR Part 219 RIN 2130-AA81 Alcohol and Drug Testing: Determination of... percent for drugs and 0.15 percent for alcohol. Because the industry-wide random drug testing positive... (Administrator) has determined that the minimum annual random drug testing rate for the period January 1, 2010...
Tacken, M.; Braspenning, J.; Spreeuwenberg, P.; Hoogen, H. van den; Essen, G. van; Bakker, D. de; Grol, R.
2002-01-01
BACKGROUND: World-wide each year 30-55% of the target population is vaccinated against influenza. Determinants of successful vaccination programs are not clear. This study was aimed at identifying practice- and patient-related factors that determine differences in vaccination rates. METHODS: Data on
Vrieze, Scott I; Grove, William M
2008-06-01
The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. Validity (area under the receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected. The present analysis yielded an asymptotically unbiased estimate of AUC = .58. The present article also examined the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for AUC values higher than have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis (User Report 2004-02). Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct-fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone. The authors discuss the legal implications of their findings for procedural and substantive due process in
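The optimism-bootstrap idea (redo the development step on each resample, score the original sample with the resample's scale, and subtract the average optimism from the apparent AUC) can be sketched with an invented toy "scale development" step. None of this reproduces the MnSOST-R items or data:

```python
import random

def auc(scores, labels):
    """Rank-based AUC: probability a positive case outranks a negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def develop(items, labels):
    """Toy item selection: keep items whose mean is higher in the
    positive class; the scale score is the sum of kept items."""
    kept = []
    for j in range(len(items[0])):
        col = [row[j] for row in items]
        m1 = [v for v, y in zip(col, labels) if y == 1]
        m0 = [v for v, y in zip(col, labels) if y == 0]
        if sum(m1) / len(m1) > sum(m0) / len(m0):
            kept.append(j)
    return kept

def score(items, kept):
    return [sum(row[j] for j in kept) for row in items]

def corrected_auc(items, labels, n_boot=100, seed=0):
    """Apparent AUC minus the average bootstrap optimism."""
    rng = random.Random(seed)
    apparent = auc(score(items, develop(items, labels)), labels)
    n, optimism = len(labels), 0.0
    for _ in range(n_boot):
        while True:  # resample until both classes are present
            idx = [rng.randrange(n) for _ in range(n)]
            bl = [labels[i] for i in idx]
            if 0 < sum(bl) < n:
                break
        bi = [items[i] for i in idx]
        kept = develop(bi, bl)
        if not kept:  # degenerate scale on this resample: skip it
            continue
        optimism += auc(score(bi, kept), bl) - auc(score(items, kept), labels)
    return apparent - optimism / n_boot

# Pure-noise items and outcome: apparent AUC exceeds 0.5 only through
# item selection; the correction pulls the estimate back toward chance.
rng = random.Random(42)
items = [[rng.random() for _ in range(5)] for _ in range(60)]
labels = [i % 2 for i in range(60)]
ca = corrected_auc(items, labels)
```

The shrinkage from an apparent .77 to an unbiased .58 reported above is exactly this effect: item selection on the development sample capitalizes on chance, and the bootstrap estimates how much.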
Nowak, Paweł Mateusz; Woźniakiewicz, Michał; Kościelniak, Paweł
2015-12-01
It is commonly accepted that modern CE instruments equipped with an efficient cooling system enable accurate determination of electrophoretic and electroosmotic mobilities. It is also often assumed that the velocity of migration in a given buffer is constant throughout the capillary length. It is simultaneously neglected that the noncooled parts of the capillary produce extensive Joule heating, leading to an axial electric field distortion, which contributes to a difference between the effective and nominal electric field potentials and between velocities in the cooled and noncooled parts of the capillary. This simplification introduces systematic errors which, however, have so far not been investigated experimentally, and no method has been proposed for their elimination. We show a simple and fast method allowing estimation and elimination of these errors, based on a combination of long-end and short-end injections. We use it to study the effects caused by variation of temperature, electric field, capillary length, and pH.
孙宇; 李纯莲; 钟经华
2016-01-01
Braille error tolerance rate includes two aspects: the scheme error tolerance rate, corresponding to the Braille scheme, and the spelling error tolerance rate, corresponding to readers. In order to reasonably evaluate the spelling efficiency of the Chinese Braille scheme and further improve it, this paper presents the concept of scheme error tolerance rate and performs a statistical analysis of it. The results show that the error tolerance rate is objectively necessary and controllable, and indicate that a Braille scheme with a greater error tolerance rate will be easier to use and popularize. Finally, the paper gives an optimization function for the scheme error tolerance rate, which is helpful for improving the current Braille scheme. It also discusses the influence of readers' psychological factors on the Braille error tolerance rate when reading, and reveals the relations of mutual influence, mutual promotion, and mutual compensation between the scheme error tolerance rate of the Braille scheme and the spelling error tolerance rate of Braille readers.
A numerical method for determining the strain rate intensity factor under plane strain conditions
Alexandrov, S.; Kuo, C.-Y.; Jeng, Y.-R.
2016-07-01
Using the classical model of rigid perfectly plastic solids, the strain rate intensity factor has been previously introduced as the coefficient of the leading singular term in a series expansion of the equivalent strain rate in the vicinity of maximum friction surfaces. Since then, many strain rate intensity factors have been determined by means of analytical and semi-analytical solutions. However, no attempt has been made to develop a numerical method for calculating the strain rate intensity factor. This paper presents such a method for planar flow. The method is based on the theory of characteristics. First, the strain rate intensity factor is derived in characteristic coordinates. Then, a standard numerical slip-line technique is supplemented with a procedure to calculate the strain rate intensity factor. The distribution of the strain rate intensity factor along the friction surface in compression of a layer between two parallel plates is determined. A high accuracy of this numerical solution for the strain rate intensity factor is confirmed by comparison with an analytic solution. It is shown that the distribution of the strain rate intensity factor is in general discontinuous.
[Determination of plasma protein binding rate of arctiin and arctigenin with ultrafiltration].
Han, Xue-Ying; Wang, Wei; Tan, Ri-Qiu; Dou, De-Qiang
2013-02-01
To determine the plasma protein binding rates of arctiin and arctigenin, ultrafiltration combined with HPLC was employed, using rat plasma and healthy human plasma. The plasma protein binding rates of arctiin with rat plasma at concentrations of 64.29, 32.14, and 16.07 mg·L(-1) were (71.2 +/- 2.0)%, (73.4 +/- 0.61)%, and (78.2 +/- 1.9)%, respectively; the binding rates of arctiin with healthy human plasma at the same concentrations were (64.8 +/- 3.1)%, (64.5 +/- 2.5)%, and (77.5 +/- 1.7)%, respectively. The plasma protein binding rates of arctigenin with rat plasma at concentrations of 77.42, 38.71, and 19.36 mg·L(-1) were (96.7 +/- 0.41)%, (96.8 +/- 1.6)%, and (97.3 +/- 0.46)%, respectively; the binding rates of arctigenin with healthy human plasma at the same concentrations were (94.7 +/- 3.1)%, (96.8 +/- 1.6)%, and (97.9 +/- 1.3)%, respectively. The binding rate of arctiin with rat plasma protein was moderate, and slightly higher than its binding rate with healthy human plasma protein. The plasma protein binding rates of arctigenin with both rat plasma and healthy human plasma are very high.
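The ultrafiltration calculation itself is a one-liner: the bound fraction is the drug not recovered free in the protein-free ultrafiltrate. The concentrations below are illustrative, not the paper's raw measurements:

```python
def binding_rate(c_total, c_free):
    """Plasma protein binding (%) from an ultrafiltration experiment:
    the fraction of total drug not found free in the ultrafiltrate."""
    return 100.0 * (c_total - c_free) / c_total

# e.g. 64.29 mg/L total drug with a hypothetical 18.5 mg/L free
pb = binding_rate(64.29, 18.5)  # roughly 71%
```

In practice one also corrects for nonspecific adsorption of drug to the ultrafiltration membrane, which otherwise inflates the apparent binding.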
Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants
Lee, Seung Min; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Kim, Jong Hyun [KEPCO, Ulsan (Korea, Republic of)
2014-08-15
Since automation was introduced in various industrial fields, it has been known to provide positive effects, such as greater efficiency and fewer human errors, as well as a negative effect known as out-of-the-loop (OOTL). Thus, before introducing automation into the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the proposed cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduction in human cognitive load, and the level of ostracism was suggested to express the difficulty of obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by experiment, and the automation rate is estimated by the suggested estimation method. This approach is expected to yield an automation proportion that avoids the OOTL problem while achieving maximum efficacy.
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 The purpose of this test method is to define a general procedure for determining an unknown thermal-neutron fluence rate by neutron activation techniques. It is not practicable to describe completely a technique applicable to the large number of experimental situations that require the measurement of a thermal-neutron fluence rate. Therefore, this method is presented so that the user may adapt the fundamental procedures of the following techniques to a particular situation. 1.1.1 Radiometric counting technique using pure cobalt, pure gold, pure indium, cobalt-aluminum alloy, gold-aluminum alloy, or indium-aluminum alloy. 1.1.2 Standard comparison technique using pure gold or gold-aluminum alloy, and 1.1.3 Secondary standard comparison techniques using pure indium, indium-aluminum alloy, pure dysprosium, or dysprosium-aluminum alloy. 1.2 The techniques presented are limited to measurements at room temperatures. However, special problems when making thermal-neutron fluence rate measurements in high-...
Differentiated Bit Error Rate Estimation for Wireless Networks
张招亮; 陈海明; 黄庭培; 崔莉
2014-01-01
In wireless networks, bit error rate (BER) estimation underpins many upper-layer protocols and has a significant impact on data transmission performance; it has therefore become an important research topic. However, existing BER-estimation codes do not account for the BER distribution of real networks, resulting in large estimation errors. Based on measurements of the BER distribution in 802.11 wireless networks, this paper proposes Differentiated Error Estimation (DEE), a method that improves BER estimation accuracy through differentiation. Its main idea is to insert into each packet multiple levels of error-estimation bits with different estimation capabilities, distributed randomly and uniformly, and then to estimate the BER from the theoretical relationship between BER and the probability of parity-check errors. In addition, DEE exploits the non-uniform BER distribution to optimize the capability of each level of estimation bits, improving accuracy for the BER values that occur most often and thereby reducing the average estimation error. DEE was evaluated on a testbed of seven nodes. Experimental results show that, compared with the recent error estimation code (EEC), DEE reduces the average estimation error by about 44%, and by about 68% when the error-estimation redundancy is low. DEE also exhibits a smaller estimation bias than EEC.
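The parity-based estimation step rests on a textbook relationship: for i.i.d. bit errors with probability p, a parity check over n bits fails with probability q = (1 − (1 − 2p)^n)/2, which can be inverted to recover p from the observed failure fraction. A minimal sketch of that relationship (DEE's multi-level bit placement is not reproduced here):

```python
def parity_error_prob(p, n):
    """Probability that a parity check over n i.i.d. bits fails,
    given per-bit error probability p: q = (1 - (1-2p)^n) / 2."""
    return (1.0 - (1.0 - 2.0 * p) ** n) / 2.0

def ber_from_parity(q, n):
    """Invert the relation above to estimate the BER from the
    observed fraction q of failed parity checks."""
    return (1.0 - (1.0 - 2.0 * q) ** (1.0 / n)) / 2.0

# Round trip: a BER of 1% over 32-bit parity groups
p = 0.01
q = parity_error_prob(p, 32)
estimate = ber_from_parity(q, 32)
```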
Kontkanen, Jenni; Olenius, Tinja; Lehtipalo, Katrianne; Vehkamäki, Hanna; Kulmala, Markku
2015-04-01
The probability of freshly formed particles to survive to climatically relevant sizes is determined by the competition between the coagulation loss rate and the particle growth rate. Therefore, various methods have been developed to deduce the growth rates from measured particle size distributions. Recently, the growth rates of sub-3nm clusters have been determined based on the appearance times of different cluster sizes. However, it is not clear to what extent these growth rates are consistent with the growth rates corresponding to molecular fluxes between clusters. In this work, we simulated the time evolution of a population of sub-3 nm molecular clusters and compared the growth rates determined (1) from the cluster appearance times and (2) from the collision-evaporation fluxes between different cluster sizes. We performed a number of simulations by varying the ambient conditions and the properties of the model substance. In the first simulation set, the Gibbs free energy of the formation of the clusters was assumed to have a single maximum and no minima, corresponding to a monotonically increasing stability as a function of cluster size. The saturation vapor pressure was selected so that the growth proceeded solely via monomer additions. The growth rates were determined separately for each cluster. However, to see the effect of finite size resolution, we also performed simulations where the clusters were grouped into size bins, for which we determined the growth rates. In the second simulation set, the saturation vapor pressure was lowered so that the collisions of small clusters significantly contributed to the growth. As the growth rate of a single cluster is ambiguous in this case, the growth rates were determined only for different size bins. We performed simulations using a similar free energy profile as in other simulations but we also used a free energy profile containing a local minimum, corresponding to small stable clusters. Our simulations show that
Ibrahim A.Z. Qatawneh
2005-01-01
Digital communications systems use multitone channel (MC) transmission techniques with differentially encoded and differentially coherent demodulation. Today there are two principal MC applications: the high-speed digital subscriber loop and the broadcasting of digital audio and video signals. This study compares multicarrier transmission with OQPSK and offset 16-QAM for high-bit-rate wireless applications. The bit error rate (BER) performance of MC transmission with offset quadrature amplitude modulation (offset 16-QAM) and offset quadrature phase shift keying (OQPSK) with a guard interval in a fading environment is evaluated via Monte Carlo simulation. BER results are presented for offset 16-QAM using a guard interval to counter multipath delay in frequency-selective Rayleigh fading channels and in two-path fading channels in the presence of additive white Gaussian noise (AWGN). BER results are also presented for MC with differentially encoded offset 16-QAM and MC with differentially encoded OQPSK using a guard interval over a frequency-flat Rician channel in the presence of AWGN. The performance of the multitone systems is also compared with that of equivalent differentially encoded offset 16-QAM and OQPSK, with and without a guard interval, in the same fading environment.
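As a baseline for the Monte Carlo methodology described above, here is a minimal BER simulation of Gray-coded QPSK in AWGN; over a pure AWGN channel OQPSK has the same BER, so the simulated value can be checked against the closed-form expression. This is a sketch only: the study's fading channels, guard intervals, and 16-QAM cases are not modeled.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

def qpsk_ber_monte_carlo(ebn0_db, n_bits=200_000):
    """Monte Carlo BER of Gray-coded QPSK in AWGN. Each quadrature
    arm is an independent antipodal (BPSK-like) decision."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, size=n_bits)
    symbols = 1.0 - 2.0 * bits                     # 0 -> +1, 1 -> -1
    noise = rng.normal(scale=sqrt(1.0 / (2.0 * ebn0)), size=n_bits)
    decisions = (symbols + noise) < 0.0            # True means bit 1
    return float(np.mean(decisions != bits))

ebn0_db = 6.0
theory = 0.5 * erfc(sqrt(10 ** (ebn0_db / 10)))    # Q(sqrt(2 Eb/N0))
sim = qpsk_ber_monte_carlo(ebn0_db)
```

At 6 dB Eb/N0 both values come out near 2.4e-3, confirming the simulator before any fading model is added.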
Luey, J.; Li, S.W.
1993-04-01
Testing was initiated in March 1991 and completed in November 1992 to determine the rate at which asphalt is biodegraded by microorganisms native to the Hanford Site soils. The asphalt tested (AR-6000, US Oil, Tacoma, Washington) is to be used in the construction of a diffusion barrier for the Hanford grout vaults. Experiments to determine asphalt biodegradation rates were conducted using three separate test sets. These test sets were initiated in March 1991, January 1992, and June 1992 and ran for periods of 6 months, 11 months, and 6 months, respectively. The experimental method used was one originally developed by Bartha and Pramer (1965), and further refined by Bowerman et al. (1985), that determined the asphalt biodegradation rate through the measurement of carbon dioxide evolved.
Laboratory-Scale Melter for Determination of Melting Rate of Waste Glass Feeds
Kim, Dong-Sang; Schweiger, Michael J.; Buchmiller, William C.; Matyas, Josef
2012-01-09
The purpose of this study was to develop the laboratory-scale melter (LSM) as a quick and inexpensive method to determine the processing rate of various waste glass slurry feeds. The LSM uses a 3- or 4-in.-diameter fused-quartz crucible with feed and off-gas ports on top. This setup allows the cold cap that forms above the molten glass to be directly monitored so that a steady-state melting rate of the waste glass feeds can be obtained. Melting rate data from extensive scaled-melter tests with Hanford Site high-level wastes, performed for the Hanford Tank Waste Treatment and Immobilization Plant, were compiled. A preliminary empirical model expressing the melting rate as a function of bubbling rate and glass yield was developed from the compiled database. The two waste glass feeds with the most melter-run data were selected for detailed evaluation, model development, and the LSM tests, so that melting rates obtained from the LSM could be compared with those from scaled-melter tests. The present results suggest that the LSM setup can be used to determine glass production rates when developing new glass compositions or feed makeups designed to increase the processing rate of slurry feeds.
Jamison, David Kay
2016-04-12
A charge/discharge input is for respectively supplying charge to, or drawing charge from, an electrochemical cell. A transition modifying circuit is coupled between the charge/discharge input and a terminal of the electrochemical cell and includes at least one of an inductive constituent, a capacitive constituent and a resistive constituent selected to generate an adjusted transition rate on the terminal sufficient to reduce degradation of a charge capacity characteristic of the electrochemical cell. A method determines characteristics of the transition modifying circuit. A degradation characteristic of the electrochemical cell is analyzed relative to a transition rate of the charge/discharge input applied to the electrochemical cell. An adjusted transition rate is determined for a signal to be applied to the electrochemical cell that will reduce the degradation characteristic. At least one of an inductance, a capacitance, and a resistance is selected for the transition modifying circuit to achieve the adjusted transition rate.
Study of Rock's Erosion Rate Based on the Determination of 36Cl by AMS
WANG Yue; WU Shao-yong; GUAN Yong-jing; LIU Cun-fu; WU Wei-ming; JIANG Shan
2003-01-01
Determining the erosion rate of geological formations is important for both science and the economy, and the techniques involved can be extended to studies of crustal uplift and dating. Since accelerator mass spectrometry (AMS) made the measurement of cosmogenic nuclides a reality, cosmogenic nuclides have become increasingly important in geoscience. Accordingly, the determination of rock erosion rates with 36Cl by AMS was carried out.
On-Line Determination of Cutting Tool Wear Rate in Turning by Measuring Cutting Temperature
Murat KIYAK
1999-01-01
The improvement of adaptive control and computer-aided manufacturing requires the tool wear rate to be sensitively determined during machining. Researchers working on this topic have developed direct and indirect methods; determining the tool wear rate during machining by measuring the cutting temperature is an indirect method. In this study, two measuring techniques were used: the work-tool thermocouple technique and a method using a thermocouple embedded in the tool. Both were tested by measuring tool wear while turning mild steel with hard-metal inserts, and the results were compared with each other from different points of view.
The efficacy of ergometry determined heart rates for flatwater kayak training.
van Someren, K A; Oliver, J E
2002-01-01
The aim of this study was to investigate the use of incremental ergometry determined heart rate training intensities for the control of kayak ergometer and open water kayak training. Eight well-trained male kayakers completed a maximal incremental exercise test on an air-braked kayak ergometer for the determination of LT(1) (the power output at which blood lactate concentration increased by > or = 1 mmol x L(-1)), the associated heart rate (HR-LT(1)), VO(2)peak, maximal heart rate and maximal aerobic power. Subjects then performed 20 min trials of kayak ergometry (E), open water kayaking in a single kayak (K1) and open water kayaking in a four-seat kayak (K4) at HR-LT(1). During the three trials, heart rate was continuously measured, and blood lactate concentration, rating of perceived exertion (RPE) and stroke rate were determined every 5 min. In all trials, exercise at HR-LT(1) resulted in stable blood lactate concentrations and a stable RPE. Comparison of the three trials demonstrated that the only difference was for RPE, which was lower in K4 than in E (p < 0.05). These findings support the use of ergometry determined heart rates for the control of kayak ergometer and open water kayak training in both single and team boats.
Song, Dean; Liu, Huijuan; Qiang, Zhimin; Qu, Jiuhui
2014-05-15
Free chlorine is extensively used for water and wastewater disinfection nowadays. However, it still remains a big challenge to determine the rate constants of rapid chlorination reactions although competition kinetics and stopped-flow spectrophotometric (SFS) methods have been employed individually to investigate fast reaction kinetics. In this work, we proposed an SFS competition kinetics method to determine the rapid chlorination rate constants by using a common colorimetric reagent, N,N-diethyl-p-phenylenediamine (DPD), as a reference probe. A kinetic equation was first derived to estimate the reaction rate constant of DPD towards chlorine under a given pH and temperature condition. Then, on that basis, an SFS competition kinetics method was proposed to determine directly the chlorination rate constants of several representative compounds including tetracycline, ammonia, and four α-amino acids. Although Cl2O is more reactive than HOCl, its contribution to the overall chlorination kinetics of the test compounds could be neglected in this study. Finally, the developed method was validated through comparing the experimentally measured chlorination rate constants of the selected compounds with those obtained or calculated from literature and analyzing with Taft's correlation as well. This study demonstrates that the SFS competition kinetics method can measure the chlorination rate constants of a test compound rapidly and accurately.
Linear and Non-Linear Associations of Gonorrhea Diagnosis Rates with Social Determinants of Health
Hazel D. Dean
2012-09-01
Identifying how social determinants of health (SDH) influence the burden of disease in communities and populations is critically important for targeting public health interventions and moving toward health equity. A holistic approach to disease prevention involves understanding the combined effects of individual, social, health system, and environmental determinants on geographic area-based disease burden. Using 2006-2008 gonorrhea surveillance data from the National Notifiable Sexually Transmitted Disease Surveillance system and SDH variables from the American Community Survey, we calculated the diagnosis rate for each geographic area and analyzed the associations between those rates and the SDH and demographic variables. The estimated product moment correlation (PMC) between gonorrhea rate and SDH variables ranged from 0.11 to 0.83. The proportions of the population that were black, of minority race/ethnicity, and unmarried were each strongly correlated with gonorrhea diagnosis rates, while population density, the female proportion, and the proportion below the poverty level were moderately correlated with the diagnosis rate. To better understand the relationships among SDH, demographic variables, and gonorrhea diagnosis rates, more geographic area-based estimates of additional variables are required. With the availability of more SDH variables and methods that distinguish linear from non-linear associations, geographic area-based analysis of disease incidence and SDH can add value to public health prevention and control programs.
Development of a General Method for Determining Leak Rates from Limiting Enclosures
Zografos, A. I.; Blackwell, C. C.; Harper, Lynn D. (Technical Monitor)
1994-01-01
This paper discusses the development of a general method for the determination of very low leak rates from limiting enclosures. There are many methods that can be used to detect and repair leaks from enclosures. Many methods have also been proposed that allow the estimation of actual leak rates, usually expressed as enclosure volume turnover. The proposed method combines measurements of the state variables (pressure, temperature, and volume) as well as the change in the concentration of a tracer gas to estimate the leak rate. The method was applied to the containment enclosure of the Engineering Development Unit of the CELSS Test Facility, currently undergoing testing at the NASA Ames Research Center.
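The tracer-gas part of such a method can be sketched as a first-order decay: in a well-mixed enclosure at constant pressure and temperature, the tracer concentration follows C(t) = C0·exp(−λt), where λ is the leak rate in enclosure volume turnovers per unit time. The actual method also folds in the state-variable measurements; the numbers below are illustrative only.

```python
from math import log

def leak_rate_turnover(c0, ct, hours):
    """Leak rate in enclosure volume turnovers per hour, assuming a
    well-mixed enclosure at constant pressure and temperature where
    the tracer decays as C(t) = C0 * exp(-lambda * t)."""
    return log(c0 / ct) / hours

# Illustrative: tracer falls from 1000 ppm to 990 ppm over 48 h
lam = leak_rate_turnover(1000.0, 990.0, 48.0)
```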
Motivation for error-tolerant communication
Halbach, Till
2002-01-01
The transmission of large data streams over error-prone channels, as in multimedia applications, is inherently linked to long transmission delays if automatic repeat request (ARQ) schemes are used. As this article shows, the delay can reasonably be traded against residual bit errors if a short transmission time has the highest priority. The dependency of the delay on two important factors, packet length and channel bit error rate, is determined to be non-linear and strictly monotonically increasing. Furthermore, the transmission behavior and properties of a plain binary symmetric channel, and of one with an additional repeat request technique, are simulated and compared to previous research. The simulations finally lead to a redefinition of the formula for estimating the residual bit error rate of a non-transparent channel.
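The non-linear dependency of delay on packet length and channel BER can be illustrated with an idealized ARQ model: a packet of L bits crosses a binary symmetric channel intact with probability (1 − p)^L, so the expected transmission count, and hence the delay, grows as 1/(1 − p)^L. A sketch under these simplifying assumptions (not the article's simulation setup):

```python
def expected_transmissions(bit_error_rate, packet_bits):
    """Expected number of (re)transmissions of one packet under an
    idealized ARQ scheme over a binary symmetric channel: the packet
    survives with (1-p)^L, so the expected count is 1/(1-p)^L --
    non-linear and strictly increasing in both p and L."""
    p_ok = (1.0 - bit_error_rate) ** packet_bits
    return 1.0 / p_ok

# Growth in both arguments
assert expected_transmissions(1e-4, 8000) > expected_transmissions(1e-4, 1000)
assert expected_transmissions(1e-3, 1000) > expected_transmissions(1e-4, 1000)
```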
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics due 1) to evolution of the official algorithms used to process the data, and 2) differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
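Under the simplest possible assumptions, the scale of such a sampling error can be sketched with a textbook estimate: the relative error of a monthly mean formed from N effectively independent satellite snapshots is roughly CV/sqrt(N), where CV is the coefficient of variation of the instantaneous area-average rain rate. This is a first-order sketch, not the TRMM-specific error model the authors developed:

```python
from math import sqrt

def relative_sampling_error(cv_rain, n_independent_visits):
    """First-order relative sampling error of a monthly mean built
    from N effectively independent snapshots: sigma_rel = CV/sqrt(N).
    Ignores the temporal correlation structure a real model must
    account for."""
    return cv_rain / sqrt(n_independent_visits)

# Illustrative: CV of 5 for instantaneous rain, ~30 independent visits
err = relative_sampling_error(5.0, 30)
```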
Anonymous
2006-01-01
In this article, the principle and mathematical method of determining the phase fractions of multiphase flows with a dual-energy γ-ray system are described. The dual-energy γ-ray device is composed of the radioactive isotopes 241Am and 137Cs, with γ-ray energies of 59.5 and 662 keV, respectively. A rational method to calibrate the absorption coefficients is introduced in detail; the modified algorithm helps remove the extra Compton scattering from the measured value. The results show that the dual-energy γ-ray technique can be used in three-phase flow with an average accuracy greater than 95%, enabling phase fractions to be determined almost independently of the flow regime. The measurement accuracy of the phase fractions has thus been improved.
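The phase-fraction determination can be sketched as a small linear system: Beer-Lambert attenuation measured at the two γ energies gives two equations, and the closure condition that the three fractions sum to one gives the third. The calibration coefficients below are illustrative placeholders, not values from the article:

```python
import numpy as np

def phase_fractions(mu_am, mu_cs, atten_am, atten_cs):
    """Solve for three phase fractions from dual-energy gamma
    attenuation: at each energy ln(I0/I) = sum_i mu_i * alpha_i
    (path length folded into mu), plus sum_i alpha_i = 1.
    mu_* are per-phase attenuation terms calibrated on pure phases."""
    A = np.array([mu_am, mu_cs, [1.0, 1.0, 1.0]])
    b = np.array([atten_am, atten_cs, 1.0])
    return np.linalg.solve(A, b)

# Illustrative calibration (gas, oil, water) at the two energies
mu_am = [0.2, 1.1, 1.4]   # 59.5 keV
mu_cs = [0.1, 0.6, 0.7]   # 662 keV
alpha_true = np.array([0.3, 0.5, 0.2])
alphas = phase_fractions(mu_am, mu_cs,
                         float(np.dot(mu_am, alpha_true)),
                         float(np.dot(mu_cs, alpha_true)))
```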
Determinants of Credit Rating and Optimal Capital Structure among Pakistani Banks
Vivake Anand
2016-06-01
A firm's credit rating and its optimal capital structure are directly related: firms with high credit ratings tend to finance more through debt. However, no single figure for the optimal capital structure is available in the literature; firms mostly decide their mix of debt and equity based on their operating environment. Given the high credibility of debt among local investors and the lower costs associated with it, managers prefer debt to equity. This paper uses factors such as profitability, liquidity, firm size and leverage to determine the credit ratings of firms, drawing on balance-sheet data for the top twenty banks in Pakistan over the last seven years. It was found that profitability and liquidity have negative impacts on the credit ratings of banks in Pakistan, while size and leverage, which are more significant, are positively correlated with credit rating.
GPS determination of walking rates in captive African elephants (Loxodonta africana).
Leighty, Katherine A; Soltis, Joseph; Wesolek, Christina M; Savage, Anne; Mellen, Jill; Lehnhardt, John
2009-01-01
The movements of elephants in captivity have been an issue of concern for animal welfare activists and zoological professionals alike in recent years. In order to fully understand how movement rates reflect animal welfare, we must first determine the exact distances these animals move in the captive environment. We outfitted seven adult female African elephants (Loxodonta africana) at Disney's Animal Kingdom with collar-mounted global positioning recording systems to document their movement rates while housed in outdoor guest viewing habitats. Further, we conducted preliminary analyses to address potential factors impacting movement rates including body size, temperature, enclosure size, and social grouping complexity. We found that our elephants moved at an average rate of 0.409+/-0.007 km/hr during the 9-hr data collection periods. This rate translates to an average of 3.68 km traveled during the observation periods, at a rate comparable to that observed in the wild. Although movement rate did not have a significant relationship with an individual's body size in this herd, the movements of four females demonstrated a significant positive correlation with temperature. Further, females in our largest social group demonstrated a significant increase in movement rates when residing in larger enclosures. We also present preliminary evidence suggesting that increased social group complexity, including the presence of infants in the herd, may be associated with increased walking rates, whereas factors such as reproductive and social status may constrain movements.
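Converting GPS fixes to movement rates of the kind reported above amounts to summing great-circle distances between successive fixes and dividing by the observation window. A minimal sketch (not the collar system's own processing; fixes below are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS fixes
    (haversine formula, mean Earth radius 6371 km)."""
    r = 6371.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2.0 * r * asin(sqrt(a))

def movement_rate_kmh(fixes, hours):
    """Total path length over successive (lat, lon) fixes divided by
    the observation window, as a rough movement-rate estimate."""
    dist = sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))
    return dist / hours
```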
Lee, Seung Min; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejon (Korea, Republic of); Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Seosaeng (Korea, Republic of); Kim, Man Cheol [Chung-Ang University, Seoul (Korea, Republic of)
2015-05-15
Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways, and as more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should therefore be considered together when determining the appropriate level of automation. Existing concepts of the automation rate, however, do not consider the effects of automation on human operators. Thus, a new estimation method for the automation rate is suggested in this paper to overcome this limitation.
2011-01-05
... From the Federal Register Online via the Government Publishing Office LIBRARY OF CONGRESS Copyright Royalty Board Determination of Rates and Terms for Preexisting Subscription and Satellite Digital... subscription and satellite digital audio radio services for the digital performance of sound recordings and...
31 CFR 351.15 - Is the determination of the Secretary on rates and values final?
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Is the determination of the Secretary on rates and values final? 351.15 Section 351.15 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...
Omeroglu, Esra; Buyukozturk, Sener; Aydogan, Yasemin; Cakan, Mehtap; Cakmak, Ebru Kilic; Ozyurek, Arzu; Akduman, Gulumser Gultekin; Gunindi, Yunus; Kutlu, Omer; Coban, Aysel; Yurt, Ozlem; Kogar, Hakan; Karayol, Seda
2015-01-01
This study aimed to determine and interpret norms of the Preschool Social Skills Rating Scale (PSSRS) teacher form. The sample included 224 independent preschools and 169 primary schools. The schools are distributed among 48 provinces and 3324 children were included. Data were obtained from the PSSRS teacher form. The validity and reliability…
Determination of flow-rate characteristics and parameters of piezo pilot valves
Takosoglu Jakub
2017-01-01
Pneumatic directional valves are used in most industrial pneumatic systems. Most are two-stage valves controlled by a pilot valve. Pilot valves are often chosen arbitrarily, as experimental studies to determine their flow-rate characteristics and parameters have not been conducted. The paper presents experimental research on two piezo pilot valves.
A Direct inverse model to determine permeability fields from pressure and flow rate measurements
Brouwer, G.K.; Fokker, P.A.; Wilschut, F.; Zijl, W.
2008-01-01
The determination of the permeability field from pressure and flow rate measurements in wells is a key problem in reservoir engineering. This paper presents a Double Constraint method for inverse modeling that is an example of direct inverse modeling. The method is used with a standard block-centere
Relationship of Social Determinants of Health with the Three-year Survival Rate of Breast Cancer
Davoudi Monfared, Esmat; Mohseny, Maryam; Amanpour, Farzaneh; Mosavi Jarrahi, Alireza; Moradi Joo, Mohammad; Heidarnia, Mohammad Ali
2017-04-01
Background: Social determinants of health are among the key factors affecting the pathogenesis of diseases. Considering the increasingly high prevalence of breast cancer and the association of social determinants of health with its occurrence, related morbidity and mortality, and survival rate, this study sought to assess the relationship of the three-year survival rate of breast cancer with social determinants of health. Materials and Methods: This cohort study was conducted on males and females presenting to the Cancer Research Center of Shohada-E-Tajrish Hospital from 2006 to 2010 with a definite diagnosis of breast cancer. Data were collected via phone interviews. Kaplan-Meier analysis and Cox regression were fitted using SPSS (version 18), and the proportional-hazards assumption was tested with STATA (version 11). Results: The study was performed on 797 breast cancer patients, aged 25-93 years with a mean age of 54.66 (SD = 11.86) years. Three years after the diagnosis of cancer, 700 (87.8%) patients were alive and 97 (12.2%) had died. Using the log rank test, the relationships between three-year survival and age, education, childhood residence, siblings, treatment type, and district were significant (p < 0.05). Conclusion: Social determinants of health such as childhood conditions, city region of residence, level of education, and age affect the three-year survival rate of breast cancer. Future studies should focus on the effect of childhood social class on the survival rates of cancers, which has received less attention.
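The Kaplan-Meier estimation mentioned in the methods can be sketched as follows. This simplified version assumes no tied event times, and the tiny cohort below is illustrative, not the study data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates at each observed event time.
    times: follow-up (e.g. years); events: 1 = death, 0 = censored.
    At each death the survival curve is multiplied by
    (at_risk - 1) / at_risk; censored subjects only shrink the
    risk set. Assumes no tied event times."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1
    return curve

# Illustrative cohort: deaths at 1 and 2 years, censoring at 1.5 and 3
curve = kaplan_meier([1.0, 1.5, 2.0, 3.0], [1, 0, 1, 0])
# curve == [(1.0, 0.75), (2.0, 0.375)]
```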
A Classroom Experiment on Exchange Rate Determination with Purchasing Power Parity
Mitchell, David T.; Rebelein, Robert P.; Schneider, Patricia H.; Simpson, Nicole B.; Fisher, Eric
2009-01-01
The authors developed a classroom experiment on exchange rate determination appropriate for undergraduate courses in macroeconomics and international economics. In the experiment, students represent citizens from different countries and need to obtain currency to purchase goods. By participating in an auction to buy currency, students gain a…
Certainty Equivalence for Determination of Optimal Fertilizer Application Rates with Carry-Over
Taylor, C. Robert
1983-01-01
This note demonstrates that a certain class of stochastic problems for the determination of optimal fertilizer application rates in the presence of fertilizer carry-over can be simplified to static, certainty-equivalent problems. The conditions required for certainty equivalence to hold are: (1) fertilizer carry-over is agronomically equivalent to applied fertilizer; and (2) some addition of fertilizer is optimal in every decision period.
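The certainty-equivalence result can be illustrated with a one-line policy: under conditions (1) and (2), the dynamic decision each period collapses to topping the available nutrient up to the optimum of the static problem. The function name and numbers are illustrative, not from the note:

```python
def certainty_equivalent_application(static_optimum, carryover):
    """Sketch of the certainty-equivalence policy: when carry-over is
    agronomically equivalent to applied fertilizer and some
    application is optimal every period, apply just enough each
    period to bring total available nutrient up to the static
    optimum (units arbitrary, e.g. kg N/ha)."""
    return max(0.0, static_optimum - carryover)

# Static optimum 120 units, 45 units carried over from last season
applied = certainty_equivalent_application(120.0, 45.0)  # 75.0
```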
Probing the Rate-Determining Step of the Claisen-Schmidt Condensation by Competition Reactions
Mak, Kendrew K. W.; Chan, Wing-Fat; Lung, Ka-Ying; Lam, Wai-Yee; Ng, Weng-Cheong; Lee, Siu-Fung
2007-01-01
Competition experiments are a useful tool for preliminary study of the linear free energy relationship of organic reactions. This article describes a physical organic experiment for upper-level undergraduates to identify the rate-determining step of the Claisen-Schmidt condensation of benzaldehyde and acetophenone by studying the linear free…
Health Sector Inflation Rate and its Determinants in Iran: A Longitudinal Study (1995–2008)
TEIMOURIZAD, Abedin; HADIAN, Mohamad; REZAEI, Satar; HOMAIE RAD, Enayatollah
2014-01-01
Background: Health price inflation is different from an increase in health expenditures: health expenditures contain both quantities and prices, whereas the inflation rate contains prices only. This study aimed to determine the factors that affect the Inflation Rate for Health Care Services (IRCPIHC) in Iran. Methods: We used Central Bank of Iran data and estimated the relationship between the inflation rate and its determinants using a dynamic factor variable approach in STATA. Results: The study revealed a positive relationship of both overall inflation and the number of dentists with health inflation, whereas the numbers of beds and physicians per 1000 people were negatively related to health inflation. Conclusion: When the number of hospital beds and doctors increases, the competition between them increases as well, thereby decreasing the inflation rate. Moreover, dentists and drug stores operate under near-monopoly conditions and can therefore change prices more easily than other health sectors. Health inflation is a subset of growth in health expenditures, and the determinants of health expenditures are not the same as those of health inflation. PMID:26060721
WEN Xianming; MA Peihua; ZHU Geqin; WU Zhiming
2006-01-01
Chemical interferences (ionization and oxide/hydroxide formation) with the atomic absorbance signal of lithium in FAAS analysis of brine samples are examined in this article. It is shown that inadequate addition, or over-addition, of deionization buffers can cause a loss of sensitivity under particular operating conditions. In the analysis of brine samples, both signal enhancement and signal reduction induced by oxide/hydroxide formation can be observed as the amount of deionization buffer is varied. Based on the experimental results, the authors arrive at optimized operating conditions for the detection of lithium, under which both ionization and stable-compound formation are suppressed. The result is a simple, rapid method with adequate accuracy and precision for the routine determination of lithium in brine samples from chemical plants or R&D laboratories that contain amounts of lithium comparable to those of other components.
KHAN, NASIM A.; SPENCER, HORACE J.; ABDA, ESAM; AGGARWAL, AMITA; ALTEN, RIEKE; ANCUTA, CODRINA; ANDERSONE, DAINA; BERGMAN, MARTIN; CRAIG-MULLER, JURGEN; DETERT, JACQUELINE; GEORGESCU, LIA; GOSSEC, LAURE; HAMOUD, HISHAM; JACOBS, JOHANNES W. G.; LAURINDO, IEDA MARIA MAGALHAES; MAJDAN, MARIA; NARANJO, ANTONIO; PANDYA, SAPAN; POHL, CHRISTOF; SCHETT, GEORG; SELIM, ZAHRAA I.; TOLOZA, SERGIO; YAMANAKA, HISAHI; SOKKA, TUULIKKI
2013-01-01
Objective To assess the determinants of patients’ (PTGL) and physicians’ (MDGL) global assessment of rheumatoid arthritis (RA) activity and factors associated with discordance among them. Methods A total of 7,028 patients in the Quantitative Standard Monitoring of Patients with RA study had PTGL and MDGL assessed at the same clinic visit on a 0–10-cm visual analog scale (VAS). Three patient groups were defined: concordant rating group (PTGL and MDGL within ±2 cm), higher patient rating group (PTGL exceeding MDGL by >2 cm), and lower patient rating group (PTGL less than MDGL by >2 cm). Multivariable regression analysis was used to identify determinants of PTGL and MDGL and their discordance. Results The mean ± SD VAS scores for PTGL and MDGL were 4.01 ± 2.70 and 2.91 ± 2.37, respectively. Pain was overwhelmingly the single most important determinant of PTGL, followed by fatigue. In contrast, MDGL was most influenced by swollen joint count (SJC), followed by erythrocyte sedimentation rate (ESR) and tender joint count (TJC). A total of 4,454 (63.4%), 2,106 (30%), and 468 (6.6%) patients were in the concordant, higher, and lower patient rating groups, respectively. Odds of higher patient rating increased with higher pain, fatigue, psychological distress, age, and morning stiffness, and decreased with higher SJC, TJC, and ESR. Lower patient rating odds increased with higher SJC, TJC, and ESR, and decreased with lower fatigue levels. Conclusion Nearly 36% of patients had discordance in RA activity assessment from their physicians. Sensitivity to the “disease experience” of patients, particularly pain and fatigue, is warranted for effective care of RA. PMID:22052672
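The ±2 cm grouping rule described in the abstract above can be written out directly; a minimal sketch (the function name is illustrative, the thresholds and group labels are taken from the study's definitions):

```python
def concordance_group(ptgl: float, mdgl: float) -> str:
    """Classify a clinic visit by the gap between the patient's (PTGL) and
    physician's (MDGL) global assessments on a 0-10 cm VAS.

    Groups follow the +/-2 cm rule described in the study:
      - "concordant": PTGL and MDGL within +/-2 cm of each other
      - "higher":     PTGL exceeds MDGL by more than 2 cm
      - "lower":      PTGL falls below MDGL by more than 2 cm
    """
    diff = ptgl - mdgl
    if diff > 2:
        return "higher"
    if diff < -2:
        return "lower"
    return "concordant"
```

For example, a patient scoring 6 against a physician score of 3 falls in the higher patient rating group, matching the abstract's 30% subgroup.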
23 CFR 1240.11 - Determination of State seat belt use rate for calendar years 1996 and 1997.
2010-04-01
... 23 Highways, Vol. 1 (revised 2010-04-01) ... Seat Belts—Allocations Based on Seat Belt Use Rates; Determination of Allocations; § 1240.11 Determination of State seat belt use rate for calendar years 1996 and 1997. (a) Review of...
23 CFR 1240.12 - Determination of State seat belt use rate for calendar year 1998 and beyond.
2010-04-01
... 23 Highways, Vol. 1 (revised 2010-04-01) ... for Use of Seat Belts—Allocations Based on Seat Belt Use Rates; Determination of Allocations; § 1240.12 Determination of State seat belt use rate for calendar year 1998 and beyond. (a) State seat belt use survey. ...
A simulation model for the determination of tabarru' rate in a family takaful
Ismail, Hamizun bin
2014-06-01
The concept of tabarru' incorporated in family takaful serves to eliminate the element of uncertainty in the contract, as a participant agrees to relinquish, as a donation, a certain portion of his contribution. The most important feature of family takaful is that it does not guarantee a definite return on a participant's contribution, unlike its conventional counterpart, where a premium is paid in return for a guaranteed amount of insurance benefit. In other words, the investment return on the funds contributed by the participants is based on actual investment experience. The objective of this study is to set up a framework for determining the tabarru' rate by simulation. The model is based on a binomial death process. Specifically, a linear tabarru' rate and a flat tabarru' rate are introduced. The simulation trials show that the linear assumption on the tabarru' rate has an advantage over its flat counterpart as far as the risk of the investment accumulation at maturity is concerned.
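A binomial death process of the kind named above can be sketched in a few lines; this is a toy illustration, not the study's model, and the pool size, mortality rate, and tabarru' schedules below are entirely hypothetical:

```python
import random

def simulate_tabarru_fund(n, contribution, rates, q, seed=0):
    """Toy binomial death process for a family takaful pool.

    Each year the number of deaths among the surviving participants is a
    Binomial(alive, q) draw; the tabarru' deduction for year t is
    rates[t] * contribution per surviving participant.  Returns the total
    tabarru' collected over the term.
    """
    rng = random.Random(seed)
    alive = n
    collected = 0.0
    for rate in rates:
        # Binomial draw as a sum of Bernoulli trials (stdlib only)
        deaths = sum(rng.random() < q for _ in range(alive))
        alive -= deaths
        collected += alive * rate * contribution
    return collected

term = 10
flat = [0.02] * term                              # flat tabarru' rate
linear = [0.2 / 55 * (t + 1) for t in range(term)]  # linear ramp, same total
```

The two schedules are built to have the same total rate over the term, so any difference in simulated outcomes comes from the timing of the deductions rather than their overall size.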
Groth, Angela; Maurer, Claudia; Reiser, Martin; Kranert, Martin
2015-02-01
The aim of this work was to establish a method for emission monitoring of biogas plants, in particular the observation of fugitive methane emissions. The method is still at a developmental stage, but the topic is crucial for both environmental and economic reasons. A remote sensing measurement method was used to determine methane emission rates of a biogas plant in Rhineland-Palatinate, Germany. An inverse dispersion model was used to deduce the emission rates. The technique requires one concentration measurement with an open-path tunable diode laser absorption spectrometer (TDLAS) downwind and one upwind of the source, plus basic wind information such as wind speed and direction. The different operating conditions of the biogas plant occurring on the measuring day (December 2013) could be roughly distinguished in the results. During undisturbed operation the methane emission rate averaged 2.8 g/s, which corresponds to 4% of the plant's methane production rate.
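An emission figure like the 2.8 g/s above can be sanity-checked with a crude mass-balance box model; this is emphatically not the inverse dispersion model used in the study, and every number in the usage example is hypothetical:

```python
def box_model_emission_rate(c_down, c_up, wind_speed, plume_height, plume_width):
    """Crude mass-balance estimate of a source emission rate in g/s.

    Multiplies the downwind-minus-upwind concentration difference (g/m^3)
    by the wind speed (m/s) and an assumed plume cross-section (m^2).
    A back-of-the-envelope plausibility check only.
    """
    return (c_down - c_up) * wind_speed * plume_height * plume_width

# Hypothetical values: 0.1 mg/m^3 excess methane, 2 m/s wind, 7 m x 20 m plume
q = box_model_emission_rate(2e-4, 1e-4, 2.0, 7.0, 20.0)  # g/s
```

With these made-up inputs the estimate comes out at 0.028 g/s, i.e. two orders of magnitude below the measured rate, illustrating why a real inverse dispersion model (which accounts for the actual plume geometry) is needed.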
Casal, J; Arias, J M; Gómez-Camacho, J
2016-01-01
A relationship between the Coulomb inclusive break-up probability and the radiative capture reaction rate for weakly-bound three-body systems is established. This direct link provides a robust procedure to estimate the reaction rate for nuclei of astrophysical interest by measuring inclusive break-up processes at different energies and angles. This might be an advantageous alternative to the determination of reaction rates from the measurement of $B(E1)$ distributions through exclusive Coulomb break-up experiments. In addition, it provides a reference to assess the validity of different theoretical approaches that have been used to calculate reaction rates. The procedure is applied to $^{11}$Li ($^{9}$Li+n+n) and $^6$He ($^{4}$He+n+n) three-body systems for which some data exist.
Casal, J.; Rodríguez-Gallardo, M.; Arias, J. M.; Gómez-Camacho, J.
2016-04-01
A relationship between the Coulomb inclusive break-up probability and the radiative capture reaction rate for weakly bound three-body systems is established. This direct link provides a robust procedure to estimate the reaction rate for nuclei of astrophysical interest by measuring inclusive break-up processes at different energies and angles. This might be an advantageous alternative to the determination of reaction rates from the measurement of B(E1) distributions through exclusive Coulomb break-up experiments. In addition, it provides a reference to assess the validity of different theoretical approaches that have been used to calculate reaction rates. The procedure is applied to the 11Li (9Li+n+n) and 6He (4He+n+n) three-body systems, for which some data exist.
Ramlall Indranarain
2016-09-01
This study innovates on prior research into the determinants of sovereign ratings and credit default swap (CDS) spreads for a large sample of countries by incorporating the quality of central banks, along with refined proxies. Findings show that the explanatory power of the models improves by a hefty 11 percent for sovereign ratings and by 6 to 9 percent for CDS spreads when central bank quality is incorporated. This finding bolsters the notion that institutional quality plays a preponderant role in assessing country risk, making central bank quality a systematic component of institutional quality. The effect of labour participation implies that countries subject to stronger population-ageing effects have a greater propensity for increases in CDS spreads. Evidence is also found that the dynamics driving CDS spreads and sovereign ratings are distinct. Our results remain robust after tackling the endogeneity problem.
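The kind of explanatory-power gain reported above can be illustrated by comparing the R² of nested OLS models with and without a quality proxy; the sketch below uses synthetic data and illustrative variable names, not the study's dataset or its estimator:

```python
import numpy as np

def r_squared(X, y):
    """In-sample R^2 of an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

rng = np.random.default_rng(42)
n = 200
macro = rng.normal(size=(n, 3))      # hypothetical macro determinants
cb_quality = rng.normal(size=n)      # hypothetical central-bank quality proxy
spread = macro @ np.array([1.0, -0.5, 0.8]) + 0.9 * cb_quality \
         + rng.normal(scale=1.0, size=n)

r2_base = r_squared(macro, spread)
r2_full = r_squared(np.column_stack([macro, cb_quality]), spread)
# r2_full exceeds r2_base: the quality proxy adds explanatory power
```

Because the synthetic spread genuinely loads on the quality proxy, the full model's R² is higher, mirroring (not reproducing) the 6-11 percentage-point gains the study reports.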
Reliability assessment to determine the optimal forced outage rate of components
Habib Daryabad
2014-03-01
Determining the optimal forced outage rate (FOR) of components can reduce operational and maintenance costs in electric power systems. FOR is closely associated with two factors: the number of outages and the duration of outages. It is therefore possible to decrease the FOR either by decreasing the number of outages or by reducing their duration. Decreasing the number of outages is usually achieved by reinforcing the network, and reducing the duration of outages mainly by enlarging the repair and maintenance crews. Both approaches incur costs, so it is desirable to find the optimal FOR and avoid unnecessary expenditure. This paper presents a new methodology for finding the optimal FOR: system reliability is assessed and evaluated with respect to FOR, and the optimal rate is determined for all components.
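The FOR discussed above is the standard ratio of forced outage time to total demanded time; a minimal sketch of that ratio (the cost optimisation itself is beyond the scope of a snippet):

```python
def forced_outage_rate(forced_outage_hours, service_hours):
    """Forced outage rate of a component.

    FOR = FOH / (FOH + SH), where FOH is forced outage hours and SH is
    in-service hours.  Fewer outages and shorter repairs both shrink FOH,
    mirroring the two levers (network reinforcement, larger repair crews)
    described above.
    """
    return forced_outage_hours / (forced_outage_hours + service_hours)
```

For instance, a component forced out for 100 hours out of 1,000 demanded hours has a FOR of 0.1.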