Maximum entropy deconvolution of low count nuclear medicine images
International Nuclear Information System (INIS)
McGrath, D.M.
1998-12-01
Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
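The maximum likelihood expectation maximisation comparison method mentioned in this abstract can be illustrated with a minimal 1D sketch. The following is a generic Richardson-Lucy-style ML-EM iteration for Poisson deconvolution, not the thesis's maximum entropy algorithm; the point-source data and PSF are invented for the example. Note that, as in the abstract's count-preservation discussion, this iteration conserves total counts when the PSF is normalised.

```python
def convolve(x, psf):
    # Direct 1D convolution with a centred, odd-length PSF; borders truncated.
    n, m = len(x), len(psf)
    half = m // 2
    out = [0.0] * n
    for i in range(n):
        s = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                s += x[k] * psf[j]
        out[i] = s
    return out

def mlem_deconvolve(counts, psf, iterations=50):
    # Richardson-Lucy / ML-EM: multiplicative update from the ratio of
    # measured counts to the current blurred estimate.
    est = [sum(counts) / len(counts)] * len(counts)  # flat start
    psf_rev = psf[::-1]  # adjoint of the forward convolution
    for _ in range(iterations):
        blurred = convolve(est, psf)
        ratio = [c / b if b > 0 else 0.0 for c, b in zip(counts, blurred)]
        corr = convolve(ratio, psf_rev)
        est = [e * c for e, c in zip(est, corr)]
    return est
```

With noiseless data from a blurred point source, the iteration re-concentrates the counts at the source position while preserving the total.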
Counting OCR errors in typeset text
Sandberg, Jonathan S.
1995-03-01
Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published, and that they are not strictly comparable owing to larger variances in the counts than sampling variance alone would explain. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
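The edit-distance accounting the paper describes as under-specified can be sketched with a standard dynamic-programming Levenshtein distance in which substitutions, insertions, and deletions all carry weight 1 (one particular choice of the weights the paper notes are often omitted):

```python
def levenshtein(ref, ocr):
    # Classic DP edit distance; all edit operations weighted 1.
    m, n = len(ref), len(ocr)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == ocr[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        prev = cur
    return prev[n]
```

OCR accuracy would then be computed as 1 − d/len(reference); a different weighting or a different treatment of suspect markers changes d, which is precisely the comparability problem the paper raises.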
Correcting for particle counting bias error in turbulent flow
Edwards, R. V.; Baratuci, W.
1985-01-01
An ideal seeding device would generate particles that exactly follow the flow, but even then the measurements are subject to a major source of error: particle counting bias, wherein the probability of measuring a velocity is a function of the velocity itself. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation was constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
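A toy version of such a simulator shows the bias and one common correction, inverse-velocity weighting (the McLaughlin-Tiederman scheme). The Gaussian velocity distribution and its parameters below are invented for illustration, not taken from the paper:

```python
import random

def simulate(n=20000, mu=10.0, sigma=2.0, seed=0):
    # Draw velocities, then accept each with probability proportional to |u|
    # to mimic counting bias: faster particles cross the probe volume more often.
    rng = random.Random(seed)
    samples = []
    umax = mu + 6 * sigma
    while len(samples) < n:
        u = rng.gauss(mu, sigma)
        if u <= 0:
            continue
        if rng.random() < u / umax:  # velocity-proportional sampling
            samples.append(u)
    raw_mean = sum(samples) / n                 # biased high
    weights = [1.0 / u for u in samples]        # inverse-velocity weights
    corrected = len(samples) / sum(weights)     # weighted mean sum(w*u)/sum(w)
    return raw_mean, corrected
```

For a Gaussian with mean 10 and standard deviation 2, the biased arithmetic mean is expected near (μ² + σ²)/μ = 10.4, while the weighted estimate recovers the true mean.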
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
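A minimal grid-search version of the ML arrival-time estimator follows directly from the Poisson log-likelihood of binned photon counts. The Gaussian pulse shape, dark rate, and binning here are assumptions for illustration, not the paper's system model:

```python
import math

def intensity(t, tau, width=1.0, amp=50.0, dark=0.5):
    # Photon arrival rate: constant dark rate plus a Gaussian pulse at tau.
    return dark + amp * math.exp(-0.5 * ((t - tau) / width) ** 2)

def ml_arrival_time(counts, edges, candidate_taus):
    # Maximise the Poisson log-likelihood sum_i [k_i*log(lam_i) - lam_i]
    # (the k_i! term is constant in tau and dropped).
    best_tau, best_ll = None, float("-inf")
    for tau in candidate_taus:
        ll = 0.0
        for k, (a, b) in zip(counts, edges):
            lam = intensity(0.5 * (a + b), tau) * (b - a)  # expected bin count
            ll += k * math.log(lam) - lam
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau
```

With idealised data equal to the expected bin counts, the likelihood is maximised exactly at the true arrival time; with actual Poisson draws at low signal power, the estimate scatters and eventually fails to beat a random guess, which is the threshold behaviour the paper characterises.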
Peak-counts blood flow model-errors and limitations
International Nuclear Information System (INIS)
Mullani, N.A.; Marani, S.K.; Ekas, R.D.; Gould, K.L.
1984-01-01
The peak-counts model has several advantages, but its use may be limited by the condition that venous egress must be negligible at the time of peak counts. Consequently, blood flow measurements by the peak-counts model will depend on the bolus size, bolus duration, and the minimum transit time of the bolus through the region of interest. The effect of bolus size on the measurement of extraction fraction and blood flow was evaluated by injecting 1 to 30 ml of rubidium chloride into the femoral vein of a dog and measuring the myocardial activity with a beta probe over the heart. Regional blood flow measurements were not found to vary with bolus sizes up to 30 ml. The effect of bolus duration was studied by injecting a 10 cc bolus of tracer at different speeds into the femoral vein of a dog. All intravenous injections undergo a broadening of the bolus duration due to the transit time of the tracer through the lungs and the heart. This transit time was found to range from 4 to 6 seconds FWHM and dominates the duration of the bolus to the myocardium for injections of up to 3 seconds. A computer simulation has been carried out in which the different parameters of delay time, extraction fraction, and bolus duration can be changed to assess the errors in the peak-counts model. The results of the simulations show that the error is greatest for short transit time delays and for low extraction fractions.
Energy Technology Data Exchange (ETDEWEB)
Liang, A K; Koniczek, M; Antonuk, L E; El-Mohri, Y; Zhao, Q [University of Michigan, Ann Arbor, MI (United States)
2016-06-15
Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation-tolerant, monolithic large-area (e.g., ∼43×43 cm²) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to that of crystalline silicon detectors. Further improvement through detailed circuit
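The "percentage of counts lost" figure computed in these simulations can be illustrated with the two textbook dead-time models (non-paralyzable and paralyzable). The input rate and dead time below are invented round numbers, not the abstract's circuit parameters:

```python
import math

def observed_rate(true_rate, dead_time, paralyzable=False):
    # Paralyzable: every event restarts the dead period -> m = n*exp(-n*tau).
    # Non-paralyzable: detector is simply blind for tau after each count
    # -> m = n / (1 + n*tau).
    if paralyzable:
        return true_rate * math.exp(-true_rate * dead_time)
    return true_rate / (1 + true_rate * dead_time)

def percent_lost(true_rate, dead_time, paralyzable=False):
    m = observed_rate(true_rate, dead_time, paralyzable)
    return 100.0 * (1 - m / true_rate)
```

For example, a 1 Mcps input with 100 ns effective dead time loses about 9% of counts in the non-paralyzable model, which is the kind of deadtime/pile-up loss the simulations quantify per energy threshold.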
Errors associated with moose-hunter counts of occupied beaver Castor fiber lodges in Norway
Parker, Howard; Rosell, Frank; Gustavsen, Per Øyvind
2002-01-01
In Norway, Sweden and Finland moose Alces alces hunting teams are often employed to survey occupied beaver (Castor fiber and C. canadensis) lodges while hunting. Results may be used to estimate population density or trend, or for issuing harvest permits. Despite the method's increasing popularity, the errors involved have never been identified. In this study we 1) compare hunting-team counts of occupied lodges with total counts, 2) identify the sources of error between counts and 3) evaluate ...
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio… The performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error.
International Nuclear Information System (INIS)
Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee
2013-01-01
This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on this effect. We produced point sources of 99mTcO4- with activities of 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images at 10 to 60 seconds per frame using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source with 5 gamma cameras (Infinia 2, Forte 2, Argus 1). In the first experiment, there were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame at 10-second intervals (p>0.05), but average count rates were significantly low for sources over 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression analysis of the count rates of the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. We found no abnormal fluctuation in the χ2 test of count rates (p>0.02), and Levene's F-test showed homogeneity of variance among the gamma cameras (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, calculation of the T1/2 error for a change of gradient from -0.25% to +0.25% showed that the error increases when T1/2 is relatively long or the gradient is high. When estimating the value for the fourth camera, which has the highest gradient, we did not observe a T1/2 error within 60 minutes. In conclusion, scintillation gamma cameras in the medical field require rigorous quality management of radiation measurement. Especially, we found a
Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels
Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.
2018-01-01
A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
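The Poisson correction being tested can be stated compactly: if behaviour follows a Poisson process with rate λ, an interval of length d is scored with probability 1 − e^(−λd), so inverting the observed proportion p of scored intervals gives the corrected rate −ln(1 − p)/d. A sketch using expected proportions (idealised, rather than the study's 3,000 simulated event streams):

```python
import math

def pir_estimates(true_rate_per_s, interval_s, session_s):
    # Expected partial-interval recording (PIR) outcome for a Poisson process.
    n_intervals = int(session_s / interval_s)
    p_scored = 1 - math.exp(-true_rate_per_s * interval_s)  # P(>=1 event)
    expected_scored = p_scored * n_intervals
    # Uncorrected: treat the number of scored intervals as the event count.
    uncorrected_rate = expected_scored / session_s
    # Poisson-corrected: invert the scoring probability.
    corrected_rate = -math.log(1 - p_scored) / interval_s
    return uncorrected_rate, corrected_rate
```

With a true rate of 0.2 events/s and 10 s intervals, the uncorrected estimate understates the rate badly while the corrected estimate recovers it exactly in expectation.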
BMRC: A Bitmap-Based Maximum Range Counting Approach for Temporal Data in Sensor Monitoring Networks
Directory of Open Access Journals (Sweden)
Bin Cao
2017-09-01
Due to the rapid development of the Internet of Things (IoT), many feasible deployments of sensor monitoring networks have been made to capture events in the physical world, such as human diseases, weather disasters and traffic accidents, which generate large-scale temporal data. Generally, the certain time interval that results in the highest incidence of a severe event has significance for society. For example, there exists an interval that covers the maximum number of people who have the same unusual symptoms, and knowing this interval can help doctors to locate the reason behind this phenomenon. As far as we know, there is no approach available for solving this problem efficiently. In this paper, we propose the Bitmap-based Maximum Range Counting (BMRC) approach for temporal data generated in sensor monitoring networks. Since sensor nodes can update their temporal data at high frequency, we present a scalable strategy to support real-time insert and delete operations. The experimental results show that the BMRC outperforms the baseline algorithm in terms of efficiency.
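The query itself, a fixed-length time interval covering the maximum number of events, has a simple sorted two-pointer baseline. This sketch illustrates the problem the BMRC index is designed to answer faster; it is not the bitmap structure from the paper:

```python
def max_window(times, length):
    # Return (count, window_start) for the interval of the given length
    # that covers the most timestamps.
    ts = sorted(times)
    best, best_start, j = 0, None, 0
    for i in range(len(ts)):
        while ts[i] - ts[j] > length:  # shrink window from the left
            j += 1
        if i - j + 1 > best:
            best, best_start = i - j + 1, ts[j]
    return best, best_start
```

This runs in O(n log n) per query over a static snapshot; the paper's contribution is sustaining such queries under high-frequency inserts and deletes.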
International Nuclear Information System (INIS)
Vincent, C.H.
1982-01-01
Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low-radioactivity samples for background. (orig.)
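The anomalous negative estimates can be reproduced and repaired in a few lines: with a flat prior on the non-negative source strength s and a known background mean b, the posterior is proportional to the Poisson likelihood of the gross count evaluated at s + b. The grid-based sketch below is a generic illustration of this idea, not Vincent's specific derivation:

```python
import math

def posterior_mean_net(n_gross, b_mean, s_max=50.0, steps=5000):
    # p(s | n) ∝ Poisson(n; s + b) for s >= 0, flat prior; midpoint-rule grid.
    ds = s_max / steps
    num = den = 0.0
    for i in range(steps):
        s = (i + 0.5) * ds
        lam = s + b_mean
        p = math.exp(n_gross * math.log(lam) - lam)  # unnormalised likelihood
        num += s * p
        den += p
    return num / den
```

With 3 gross counts on a background mean of 5, the conventional net estimate is −2; the posterior mean is about 1.65 and is guaranteed non-negative, which is the behaviour the abstract highlights.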
Range walk error correction and modeling on Pseudo-random photon counting system
Shen, Shanshan; Chen, Qian; He, Weiji
2017-08-01
Signal-to-noise ratio and depth accuracy are modeled for the pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation, and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), longer code length is proven to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to justify that longer code length can bring better range accuracy. Combining the SNR model and the CRLB model, it is shown that range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramer-Rao lower bound on range accuracy is shown to converge to the previously published theories, and the Gauss range walk model is introduced into the range accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It has been proven that the depth error caused by the fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energy is calibrated so that the corrected depth accuracy is improved to 1 cm.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
International Nuclear Information System (INIS)
Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa
2015-01-01
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Somsri Wiwanitkit
2016-08-01
Dengue is a common tropical infection that is still a global health threat. An important laboratory parameter for the management of dengue is the platelet count, a useful test for diagnosing and following up on dengue. However, errors in laboratory reports can occur. This study is a retrospective analysis of laboratory report data on complete blood counts in cases of suspected dengue at a medical center within a 1-month period during the outbreak season in October 2015. In the studied period, there were 184 requests for complete blood counts for cases of suspected dengue. Of those 184 laboratory report records, errors could be seen in 12 reports (6.5%). This study demonstrates a considerably high rate of post-analytical errors in laboratory reports. Interestingly, the platelet count in those erroneous reports can be unreliable and ineffective or problematic when used for the management of patients with suspected dengue.
Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders
2010-06-01
Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently, many promising approaches for determining an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to study a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when employed on four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
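The conventional holdout CI the authors start from is straightforward to compute: with a uniform Beta(1,1) prior, k errors in n test examples give a Beta(k+1, n−k+1) posterior for the error rate. The grid-quantile sketch below (a generic illustration, not the paper's ME construction) shows how wide the interval is at small n, which is the problem the ME prior is meant to relieve:

```python
import math

def beta_ci(errors, n_test, cred=0.95, steps=20000):
    # Equal-tailed credible interval from the Beta(errors+1, n-errors+1)
    # posterior, computed by normalising the density on a midpoint grid.
    a, b = errors + 1, n_test - errors + 1
    ds = 1.0 / steps
    pdf = []
    for i in range(steps):
        p = (i + 0.5) * ds
        pdf.append(math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p)))
    total = sum(pdf)
    tail = (1 - cred) / 2
    cum, lo, hi = 0.0, None, None
    for i, v in enumerate(pdf):
        cum += v / total
        if lo is None and cum >= tail:
            lo = (i + 0.5) * ds
        if hi is None and cum >= 1 - tail:
            hi = (i + 0.5) * ds
            break
    return lo, hi
```

For the same observed 10% error rate, 2 errors in 20 tests give a far wider interval than 20 errors in 200 tests, illustrating why small test sets make the conventional CI unacceptably large.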
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John
2017-08-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013, doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0, since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
RCT: Module 2.03, Counting Errors and Statistics, Course 8768
Energy Technology Data Exchange (ETDEWEB)
Hillmer, Kurt T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-01
Radiological sample analysis involves the observation of a random process that may or may not occur and an estimation of the amount of radioactive material present based on that observation. Across the country, radiological control personnel use activity measurements to make decisions that may affect the health and safety of workers at those facilities and their surrounding environments. This course will present an overview of measurement processes, a statistical evaluation of both measurements and equipment performance, and some actions to take to minimize the sources of error in count room operations. This course will prepare the student for radiological control technician (RCT) qualification, achieved by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and by demonstrating in-the-field skills.
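The core counting-statistics result such a module builds on is the propagation of Poisson errors into a background-subtracted count rate: with N counts, the standard deviation is √N, and the gross and background rate variances add. A minimal sketch with invented sample numbers:

```python
import math

def net_rate_sigma(gross_counts, t_gross, bkg_counts, t_bkg):
    # Net count rate and its 1-sigma uncertainty from Poisson statistics:
    # var(rate) = counts / time^2 for each measurement; variances add.
    net = gross_counts / t_gross - bkg_counts / t_bkg
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return net, sigma
```

For 900 gross counts and 360 background counts, each in 60 s, the net rate is 9 cps with a 1-sigma uncertainty of √0.35 ≈ 0.59 cps; deciding whether such a net rate is significant is exactly the kind of count room judgment the course addresses.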
A rotation-symmetric, position-sensitive annular detector for maximum counting rates
International Nuclear Information System (INIS)
Igel, S.
1993-12-01
The Germanium Wall is a semiconductor detector system containing up to four annular position-sensitive ΔE detectors made from high-purity germanium (HPGe), planned to complement the BIG KARL spectrometer in COSY experiments. The first diode of the system, the Quirl detector, has a two-dimensional position-sensitive structure defined by 200 Archimedes' spirals on each side with opposite orientation. In this way about 40000 pixels are defined. Since each spiral element detects almost the same number of events in an experiment, the whole system can be optimized for maximum counting rates. This paper describes a test setup for a first prototype of the Quirl detector and the results of test measurements with an α-source. The detector current and the electrical separation of the spiral elements were measured. The splitting of signals due to the spread of charge carriers produced by an incident ionizing particle over several adjacent elements was investigated in detail and found to be twice as high as expected from calculations. Its influence on energy and position resolution is discussed. Electronic crosstalk via signal wires and the influence of noise from the magnetic spectrometer were tested under experimental conditions. Additionally, vacuum feedthroughs based on printed Kapton foils pressed between Viton seals were fabricated and tested successfully concerning their vacuum and thermal properties. (orig.)
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
Sources and magnitude of sampling error in redd counts for bull trout
Jason B. Dunham; Bruce Rieman
2001-01-01
Monitoring of salmonid populations often involves annual redd counts, but the validity of this method has seldom been evaluated. We conducted redd counts of bull trout Salvelinus confluentus in two streams in northern Idaho to address four issues: (1) relationships between adult escapements and redd counts; (2) interobserver variability in redd...
Regional compensation for statistical maximum likelihood reconstruction error of PET image pixels
International Nuclear Information System (INIS)
Forma, J; Ruotsalainen, U; Niemi, J A
2013-01-01
In positron emission tomography (PET), there is increasing interest in studying not only the regional mean tracer concentration but also its variation arising from local differences in physiology, i.e. the tissue heterogeneity. In reconstructed images, however, this physiological variation is shadowed by a large reconstruction error, which is caused by noisy data and the inversion of the tomographic problem. We present a new procedure which can quantify the error variation in regional reconstructed values for a given PET measurement and reveal the remaining tissue heterogeneity. The error quantification is made by creating and reconstructing noise realizations of virtual sinograms which are statistically similar to the measured sinogram. Tests with physical phantom data show that characterization of the error variation and the true heterogeneity is possible, despite the model error present when real measurements are considered. (paper)
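The noise-realization idea lends itself to a short sketch: treating the measured Poisson counts as the mean of each sinogram bin, one can draw statistically similar realizations and measure the spread of a regional statistic. All sizes and values below are invented:

```python
import numpy as np

# Sketch of generating statistically similar noise realizations of a
# measured (Poisson-distributed) sinogram, as used to quantify the
# error variation of regional reconstructed values.
rng = np.random.default_rng(0)

measured = rng.poisson(lam=50.0, size=(64, 90)).astype(float)  # toy sinogram

# Treat the measured counts as the Poisson mean of each bin and draw
# new realizations with the same first-order statistics.
realizations = [rng.poisson(measured).astype(float) for _ in range(20)]

# The spread of any statistic across realizations (here: a region mean)
# estimates the variation attributable to noise; in the paper each
# realization would first be passed through the reconstruction step.
region = (slice(10, 20), slice(30, 40))
means = np.array([r[region].mean() for r in realizations])
print(means.std())   # error spread of the regional mean
```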
Maximum error-bounded Piecewise Linear Representation for online stream approximation
Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke
2014-01-01
Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim to design algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms are effectively designed in a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., the time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct an error-bounded PLR for a data stream in the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for requirements of high efficiency and resource-constrained environments. To evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and this transformed space, and also found OptimalPLR superior in processing efficiency in practice. Extensive empirical results demonstrate the effectiveness and efficiency of the proposed algorithms.
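A minimal greedy variant of this idea (in the spirit of GreedyPLR, but not the paper's exact algorithm) anchors each segment at its first point and narrows a feasible slope interval as points arrive; an empty interval starts a new segment, so every point stays within ±eps of its segment line:

```python
# Greedy L-infinity error-bounded piecewise linear representation sketch.

def greedy_plr(points, eps):
    """points: list of (t, y) with strictly increasing t.
    Returns a list of (t_start, y_start, slope, t_end) segments such that
    every point lies within eps of its segment's line."""
    segments = []
    i, n = 0, len(points)
    while i < n:
        t0, y0 = points[i]
        lo, hi = float("-inf"), float("inf")   # feasible slope interval
        j = i + 1
        while j < n:
            t, y = points[j]
            new_lo = max(lo, (y - eps - y0) / (t - t0))
            new_hi = min(hi, (y + eps - y0) / (t - t0))
            if new_lo > new_hi:                # no line through (t0, y0) fits
                break
            lo, hi = new_lo, new_hi
            j += 1
        slope = 0.0 if j == i + 1 else (lo + hi) / 2.0
        segments.append((t0, y0, slope, points[j - 1][0]))
        i = j
    return segments

# A noisy ramp needs one segment; a sharp corner forces a second one.
data = [(t, 0.5 * t) for t in range(10)] + \
       [(t, 4.5 - 0.5 * (t - 9)) for t in range(10, 20)]
print(len(greedy_plr(data, eps=0.25)))   # 2 segments for this corner
```

Anchoring at the first point makes the update O(1) per point (linear time overall) at the cost of possibly more segments than the optimal algorithm produces.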
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Karátson, J.
2017-01-01
Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords: finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub
Chan, Leo Li-Ying; Laverty, Daniel J; Smith, Tim; Nejad, Parham; Hei, Hillary; Gandhi, Roopali; Kuksin, Dmitry; Qiu, Jean
2013-02-28
Peripheral blood mononuclear cells (PBMCs) have been widely researched in the fields of immunology, infectious disease, oncology, transplantation, hematological malignancy, and vaccine development. Specifically, in immunology research, PBMCs have been utilized to monitor concentration, viability, proliferation, and cytokine production from immune cells, which are critical for both clinical trials and biomedical research. The viability and concentration of isolated PBMCs are traditionally measured by manual counting with trypan blue (TB) using a hemacytometer. One common issue in PBMC isolation is red blood cell (RBC) contamination, which can depend on the donor sample and/or the technical skill of the operator. RBC contamination in a PBMC sample can introduce error into the measured concentration, which can pass down to subsequent assays performed on these cells. To resolve this issue, an RBC lysing protocol can be used to eliminate potential error caused by RBC contamination. In recent years, a rapid fluorescence-based image cytometry system has been utilized for bright-field and fluorescence imaging analysis of cellular characteristics (Nexcelom Bioscience LLC, Lawrence, MA). The Cellometer image cytometry system has demonstrated the capability of automated concentration and viability detection in disposable counting chambers of unpurified mouse splenocytes and PBMCs stained with acridine orange (AO) and propidium iodide (PI) under fluorescence detection. In this work, we demonstrate the ability of the Cellometer image cytometry system to accurately measure PBMC concentration, despite RBC contamination, by comparing five different total PBMC counting methods: (1) manual counting of trypan blue-stained PBMCs in a hemacytometer, (2) manual counting of PBMCs in bright-field images, (3) manual counting after acetic acid lysing of RBCs with TB-stained PBMCs, (4) automated counting after acetic acid lysing of RBCs with PI-stained PBMCs
J.M. Hull; A.M. Fish; J.J. Keane; S.R. Mori; B.J Sacks; A.C. Hull
2010-01-01
One of the primary assumptions associated with many wildlife and population trend studies is that target species are correctly identified. This assumption may not always be valid, particularly for species similar in appearance to co-occurring species. We examined size overlap and identification error rates among Cooper's (Accipiter cooperii...
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Directory of Open Access Journals (Sweden)
Zhiyuan Liu
2017-01-01
This study proposes a practical trial-and-error method to solve the optimal toll design problem of cordon-based pricing, where only the traffic counts autonomously collected on the entry links of the pricing cordon are needed. With the fast development and adoption of vehicle-to-infrastructure (V2I) facilities, it is very convenient to collect these data autonomously. Two practical properties of cordon-based pricing are further considered in this article: the toll charge on each entry of one pricing cordon is identical, and the total inbound flow to one cordon should be restricted in order to maintain the traffic conditions within the cordon area. The stochastic user equilibrium (SUE) with asymmetric link travel time functions is then used to assess each feasible toll pattern. Based on a variational inequality (VI) model for the optimal toll pattern, this study proposes a theoretically convergent trial-and-error method for the addressed problem, where only traffic count data are needed. Finally, the proposed method is verified on a numerical network example.
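The trial-and-error principle, updating the toll from observed entry counts alone, can be illustrated with a toy single-cordon sketch. The demand curve, step size, and target below are invented, and a diminishing step size would be needed for the provable convergence the paper establishes:

```python
# Toy trial-and-error toll adjustment driven only by observed counts.

def observed_inflow(toll):
    """Hypothetical elastic demand: inflow falls as the toll rises."""
    return max(0.0, 3000.0 - 120.0 * toll)

def trial_and_error_toll(target, toll=0.0, step=0.01, iters=200):
    for _ in range(iters):
        counts = observed_inflow(toll)                    # autonomously collected
        toll = max(0.0, toll + step * (counts - target))  # raise toll if over target
    return toll

toll = trial_and_error_toll(target=1800.0)
print(round(toll, 2))     # ~10.0, since 3000 - 120*10 = 1800
```

No knowledge of the demand function is used in the update rule itself, which is the point of the trial-and-error scheme: the toll is steered by the gap between observed and target inflow.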
Detection of anomalies in radio tomography of asteroids: Source count and forward errors
Pursiainen, S.; Kaasalainen, M.
2014-09-01
The purpose of this study was to advance numerical methods for radio tomography, in which an asteroid's internal electric permittivity distribution is to be recovered from radio frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), providing one potential, or even essential, scenario to be implemented in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors, including noise and a priori uncertainty of the forward (data) simulation, were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. The results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, relevant to the recovery of internal cavities.
Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul
2016-07-01
Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher-contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5-2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS when careful metrology setups are used. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. Furthermore, a novel MRF
Energy Technology Data Exchange (ETDEWEB)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Bind, A.K.; Sunil, Saurav; Singh, R.N.; Chakravartty, J.K.
2016-03-01
Recently it was found that the maximum load toughness (J_max) for Zr-2.5Nb pressure tube material was practically unaffected by error in Δa. To check the sensitivity of J_max to error in the Δa measurement, J_max was calculated assuming no crack growth up to the maximum load (P_max) for as-received and hydrogen-charged Zr-2.5Nb pressure tube material. For loads up to P_max, the J values calculated assuming no crack growth (J_NC) were slightly higher than those calculated based on Δa measured using the DCPD technique (J_DCPD). In general, the error in the J calculation was found to increase exponentially with Δa. The error in the J_max calculation increased with an increase in Δa and a decrease in J_max. Based on the deformation theory of J, an analytic criterion was developed to check the insensitivity of J_max to error in Δa. A very good linear relation was found between the J_max calculated based on Δa measured using the DCPD technique and the J_max calculated assuming no crack growth. This relation will be very useful for calculating J_max without measuring the crack growth during a fracture test, especially for irradiated material. (author)
Energy Technology Data Exchange (ETDEWEB)
Kaganovich, Igor D.; Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex
2012-06-21
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
International Nuclear Information System (INIS)
Kramer, Gary H.; Lynch, Timothy P.; Lopez, Maria A.; Hauck, Brian
2004-01-01
The use of a lung phantom containing 152Eu/241Am activity can provide a sufficient number of energy lines to generate an efficiency calibration for in vivo measurements of radioactive materials in the lungs. However, due to the number of energy lines associated with 152Eu, coincidence summing occurs and can present a problem when using such a phantom for calibrating lung-counting systems. A Summing Peak Effect Study was conducted at three laboratories to determine the effect of using an efficiency calibration based on a 152Eu/241Am lung phantom. The measurement data at all three laboratories showed the presence of sum peaks. However, two of the three laboratories found only small biases (<5%) when using the 152Eu/241Am calibration. The third facility noted a 25% to 30% positive bias in the 140-keV to 190-keV energy range that prevents the use of the 152Eu/241Am lung phantom for routine calibrations. Although manufactured by different vendors, the three facilities use similar types of detectors (38 cm2 by 25 mm thick or 38 cm2 by 30 mm thick) for counting. These study results underscore the need to evaluate the coincidence summing effect when using a nuclide such as 152Eu for the calibration of low energy lung counting systems.
Directory of Open Access Journals (Sweden)
Martin eBouda
2016-02-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterise plant anatomical complexity or space-filling characteristics for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantisation error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterise the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitised in 3D and subjected to box-counts. A pattern search algorithm was used to minimise QE by optimising grid placement, and its efficiency was compared to the brute-force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE due to both grid position and orientation was a significant source of error in FD estimates, but pattern search provided an efficient means of minimising it. Pattern search had a higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute-force method. Our representations of coarse root system digitisations did not exhibit detail over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did
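A minimal 2D box-count sketch illustrates how searching over grid offsets reduces quantisation error before the slope-estimation step (a crude translation search, not the paper's pattern search; the point set and offset count are invented):

```python
import numpy as np

# Box-counting with a coarse grid-offset search: for each scale, the
# count is minimised over a few grid translations instead of using a
# single arbitrary placement, reducing quantisation error (QE).

def box_count(points, size, offsets=8):
    best = None
    for k in range(offsets):                      # coarse translation search
        shift = size * k / offsets
        cells = {tuple(((p + shift) // size).astype(int)) for p in points}
        best = len(cells) if best is None else min(best, len(cells))
    return best

rng = np.random.default_rng(1)
points = rng.random((2000, 2))                    # a filled unit square
sizes = np.array([1 / 4, 1 / 8, 1 / 16])
counts = np.array([box_count(points, s) for s in sizes])

# Slope of log(count) vs log(1/size) estimates FD; ~2 for a filled plane.
fd = np.polyfit(np.log(1 / sizes), np.log(counts), 1)[0]
print(round(fd, 2))
```

Checking that the local slope is stable across scales, rather than fitting one global line, is the self-similarity test the abstract argues is usually skipped.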
Determining random counts in liquid scintillation counting
International Nuclear Information System (INIS)
Horrocks, D.L.
1979-01-01
During measurements involving coincidence counting techniques, errors can arise due to the detection of chance or random coincidences in the multiple detectors used. A method and the electronic circuits necessary are here described for eliminating this source of error in liquid scintillation detectors used in coincidence counting. (UK)
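The standard chance-coincidence estimate for a two-detector coincidence system can be written down directly; the singles rates and resolving time below are invented for illustration:

```python
# Classic chance-coincidence estimate for a two-detector system:
# with singles rates n1, n2 and resolving time tau, random coincidences
# occur at roughly R = 2 * tau * n1 * n2.

def random_coincidence_rate(n1, n2, tau):
    """Chance coincidences per second for uncorrelated singles."""
    return 2.0 * tau * n1 * n2

n1, n2 = 5.0e4, 4.0e4          # singles rates in each PMT (counts/s)
tau = 20e-9                    # coincidence resolving time (s)
print(random_coincidence_rate(n1, n2, tau))   # ~80 chance counts/s
```

Subtracting this estimated random rate from the observed coincidence rate is the correction that circuits like the one described above implement in hardware.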
Energy Technology Data Exchange (ETDEWEB)
Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H
1999-08-01
This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.
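The basic Poisson counting relations such a report describes — the √N uncertainty of a count and variance propagation for a background-subtracted rate — can be sketched as follows; the counts and times are invented:

```python
import math

# Poisson counting uncertainty: the 1-sigma error of a count N is sqrt(N),
# and a net (gross - background) rate adds the two rate variances.

def net_rate(gross_counts, t_gross, bkg_counts, t_bkg):
    """Net count rate and its 1-sigma uncertainty (counts/s)."""
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return rate, sigma

rate, sigma = net_rate(gross_counts=3600, t_gross=60.0,
                       bkg_counts=900, t_bkg=60.0)
print(rate, sigma)    # 45.0 counts/s net, ~1.12 counts/s uncertainty
```

The relative error of a single count, 1/√N, is also why counting longer (or at a higher rate) is the basic error-reduction strategy the report formalizes.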
International Nuclear Information System (INIS)
Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex
2012-01-01
Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
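The stated scaling — maximum compression ratio inversely proportional to the geometric mean of the relative velocity-tilt error and the relative intrinsic energy spread — can be written as a one-line formula; the unit prefactor is an assumption for illustration:

```python
import math

# Scaling sketch from the abstract: C_max ∝ 1 / sqrt(delta_u * delta_e),
# where delta_u is the relative velocity-modulation error and delta_e the
# relative intrinsic energy spread. The prefactor of 1 is assumed.

def max_compression(delta_u_rel, delta_e_rel, prefactor=1.0):
    return prefactor / math.sqrt(delta_u_rel * delta_e_rel)

# One-percent tilt errors and a 1e-4 relative energy spread:
print(round(max_compression(1e-2, 1e-4)))   # -> 1000
```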
Fetterman, J Gregor; Killeen, P Richard
2010-09-01
Pigeons pecked on three keys, responses to one of which could be reinforced after a few pecks, to a second key after a somewhat larger number of pecks, and to a third key after the maximum pecking requirement. The values of the pecking requirements and the proportion of trials ending with reinforcement were varied. Transits among the keys were an orderly function of peck number, and showed approximately proportional changes with changes in the pecking requirements, consistent with Weber's law. Standard deviations of the switch points between successive keys increased more slowly within a condition than across conditions. Changes in reinforcement probability produced changes in the location of the psychometric functions that were consistent with models of timing. Analyses of the number of pecks emitted and the duration of the pecking sequences demonstrated that peck number was the primary determinant of choice, but that passage of time also played some role. We capture the basic results with a standard model of counting, which we qualify to account for the secondary experiments. Copyright 2010 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Bregnballe, Thomas; Carss, David N; Lorentsen, Svein-Håkon
2013-01-01
This chapter focuses on Cormorant population counts for both summer (i.e. breeding) and winter (i.e. migration, winter roosts) seasons. It also explains differences in the data collected from undertaking ‘day’ versus ‘roost’ counts, gives some definitions of the term ‘numbers’, and presents two...
Woody, Carol Ann; Johnson, D.H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.
2007-01-01
Counting towers provide an accurate, low-cost, low-maintenance, low-technology, and easily mobilized escapement estimation program compared to other methods (e.g., weirs, hydroacoustics, mark-recapture, and aerial surveys) (Thompson 1962; Siebel 1967; Cousens et al. 1982; Symons and Waldichuk 1984; Anderson 2000; Alaska Department of Fish and Game 2003). Counting tower data have been found to be consistent with digital video counts (Edwards 2005). Counting towers do not interfere with natural fish migration patterns, nor are fish handled or stressed; however, their use is generally limited to clear rivers that meet specific site selection criteria. The data provided by counting tower sampling allow fishery managers to determine reproductive population size, estimate total return (escapement + catch) and its uncertainty, evaluate population productivity and trends, set harvest rates, determine spawning escapement goals, and forecast future returns (Alaska Department of Fish and Game 1974-2000 and 1975-2004). The number of spawning fish is determined by subtracting subsistence catch, sport-caught fish, and prespawn mortality from the total estimated escapement. The methods outlined in this protocol for tower counts can be used to provide reasonable estimates (±6%-10%) of reproductive salmon population size and run timing in clear rivers.
A counting-card circuit based on PCI bus
International Nuclear Information System (INIS)
Shi Jing; Li Yong; Chinese Academy of Sciences, Lanzhou; Su Hong; Dong Chengfu; Li Xiaogang; Ma Xiaoli
2004-01-01
A counting-card circuit based on the PCI bus, recently developed for use in advanced personal computers, is briefly introduced in this paper. The maximum count capacity of the counting card is 10⁹−1 (ranging from 0 to 999 999 999), the maximum counting time that can be set in one cycle is 1×10⁶ s, and the maximum counting rate is 20 MHz for positive input. (authors)
International Nuclear Information System (INIS)
Matsumoto, Haruya; Kaya, Nobuyuki; Yuasa, Kazuhiro; Hayashi, Tomoaki
1976-01-01
An electron counting method has been devised and tested for measuring electron temperature and density, the most fundamental quantities characterising plasma conditions. Electron counting counts the electrons in a plasma directly by equipping a probe with a secondary electron multiplier. It offers three advantages: adjustable sensitivity, the high sensitivity of the secondary electron multiplier, and directionality. Sensitivity is adjusted by changing the size of the collecting hole (pinhole) on the entrance face of the multiplier. The probe can be used as a direct-reading thermometer of electron temperature because it collects only a very small number of electrons, so it does not disturb the surrounding plasma, and a narrow sweep width of the probe voltage suffices. It can therefore measure anisotropy more sensitively than a Langmuir probe and can be used for very low density plasma. Since many problems remain concerning anisotropy, computer simulation has been carried out. It is also planned to install a Helmholtz coil in the vacuum chamber to eliminate the effect of the Earth's magnetic field. In practical experiments, measurement with a Langmuir probe and an emission probe mounted on a movable structure, comparison with results obtained in a reversed magnetic field using the Helmholtz coil, and measurement of ion acoustic waves are scheduled. (Wakatsuki, Y.)
Directory of Open Access Journals (Sweden)
Alfredo Tomasetta
2010-06-01
Full Text Available Timothy Williamson supports the thesis that every possible entity necessarily exists, and so he needs to explain how a possible son of Wittgenstein's, for example, exists in our world: he exists as a merely possible object (MPO), a pure locus of potential. Williamson presents a short argument for the existence of MPOs: how many knives can be made by fitting together two blades and two handles? Four: at most two are concrete objects, the others being merely possible knives and merely possible objects. This paper defends the idea that one can avoid reference and ontological commitment to MPOs. My proposal is that MPOs can be dispensed with by using the notion of rules of knife-making. I first present a solution according to which we count lists of instructions - selected by the rules - describing physical combinations between components. This account, however, has its own difficulties, and I eventually suggest that one can find a way out by admitting possible worlds, entities which are more commonly accepted - at least by philosophers - than MPOs. I maintain that, in answering Williamson's questions, we count classes of physically possible worlds in which the same instance of a general rule is applied.
Compton suppression gamma-counting: The effect of count rate
Millard, H.T.
1984-01-01
Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhance the signal-to-background ratios for gamma photopeaks that are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so that quantitative data can be obtained at higher count rates. © 1984.
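The calibration-curve correction suggested by these results can be applied as a simple interpolation. The sketch below is illustrative only; the function names and the calibration points in the usage example are hypothetical, not taken from the paper:

```python
from bisect import bisect_left

def ac_ratio_at(rate, cal_rates, cal_ratios):
    """Linearly interpolate the photopeak A/C ratio at a given input
    count rate from calibration measurements (cal_rates ascending)."""
    if rate <= cal_rates[0]:
        return cal_ratios[0]
    if rate >= cal_rates[-1]:
        return cal_ratios[-1]
    i = bisect_left(cal_rates, rate)
    frac = (rate - cal_rates[i - 1]) / (cal_rates[i] - cal_rates[i - 1])
    return cal_ratios[i - 1] + frac * (cal_ratios[i] - cal_ratios[i - 1])

def anticoincidence_fraction(ac):
    """Fraction of the total photopeak counts routed to the
    anti-coincidence spectrum, given the A/C ratio."""
    return ac / (1.0 + ac)

# hypothetical calibration: A/C measured at 1000 and 2000 counts/s
ratio = ac_ratio_at(1500.0, [1000.0, 2000.0], [4.0, 3.0])
```

With such a curve, a photopeak area measured in the A spectrum alone can be corrected back to the total (A + C) area at any count rate within the calibrated range.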
1991-11-27
The data of the 1991 census indicated that the population count of Brazil fell short of a former estimate by 3 million people. The population reached 150 million people with an annual increase of 2%, while projections in the previous decade expected an increase of 2.48% to 153 million people. This reduction indicates more widespread use of family planning (FP) and control of fertility among families of lower social status as more information is being provided to them. However, the Ministry of Health ordered an investigation of foreign family planning organizations because it was suspected that women were forced to undergo tubal ligation during vaccination campaigns. A strange alliance of left-wing politicians and the Roman Catholic Church alleges a conspiracy of international FP organizations receiving foreign funds. The FP strategies of Bemfam and Pro-Pater offer women who have little alternative the opportunity to undergo tubal ligation or to receive oral contraceptives to control fertility. The ongoing government program of distributing booklets on FP is feeble and is not backed up by an education campaign. Charges of foreign interference are leveled while the government hypocritically ignores the grave problem of 4 million abortions a year. The population is expected to continue to grow until the year 2040 and then to stabilize at a low growth rate of 0.4%. In 1980, the number of children per woman was 4.4, whereas the 1991 census figures indicate this has dropped to 3.5. The excess population is associated with poverty and a forsaken caste in the interior. The population actually has decreased in the interior and in cities with 15,000 people. The phenomenon of the drop in fertility associated with rural exodus is contrasted with cities and villages where the population is 20% less than expected.
Accuracy and precision in activation analysis: counting
International Nuclear Information System (INIS)
Becker, D.A.
1974-01-01
Accuracy and precision in activation analysis were investigated with regard to the counting of induced radioactivity. The various parameters discussed include configuration, positioning, density, homogeneity, intensity, radioisotopic purity, peak integration, and nuclear constants. Experimental results are presented for many of these parameters. The results obtained indicate that counting errors often contribute significantly to the inaccuracy and imprecision of analyses. The magnitude of these errors ranges from less than 1 percent to 10 percent or more in many cases.
International Nuclear Information System (INIS)
Knoche, H.W.; Parkhurst, A.M.; Tam, S.W.
1979-01-01
The effect of volume on the liquid scintillation counting performance of ¹⁴C samples has been investigated. A decrease in counting efficiency was observed for samples with volumes below about 6 ml and those above about 18 ml when unquenched samples were assayed. Two quench-correction methods, sample channels ratio and external standard channels ratio, and three different liquid scintillation counters were used in an investigation to determine the magnitude of the error in predicting counting efficiencies when small-volume samples (2 ml) with different levels of quenching were assayed. The 2 ml samples exhibited slightly greater standard deviations of the difference between predicted and determined counting efficiencies than did 15 ml samples. Nevertheless, the magnitude of the errors indicates that if the sample channels ratio method of quench correction is employed, 2 ml samples may be counted in conventional counting vials with little loss of counting precision. (author)
Hoede, C.; Li, Z.
2001-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
Correction to the count-rate detection limit and sample/blank time-allocation methods
International Nuclear Information System (INIS)
Alvarez, Joseph L.
2013-01-01
A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods that sought to minimize the uncertainty in the subtraction of the blank counts from the gross sample counts by allocating blank and sample counting times. Correct uncertainty propagation showed that the time-allocation equations have no solution. This publication presents the correct form of count-rate detection limits.
Highlights:
• The paper demonstrated a proper method of propagating the uncertainty of count-rate differences.
• The standard count-rate detection limits were in error.
• Count-time allocation methods for minimum uncertainty were in error.
• The paper presented the correct form of the count-rate detection limit.
• The paper discussed the confusion between count-rate uncertainty and count uncertainty.
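The correct propagation for a net (sample minus blank) count rate can be sketched as follows, assuming Poisson statistics so that var(N) = N for each measured number of counts; the counts and times in the usage example are illustrative only:

```python
import math

def net_rate_and_sigma(n_sample, t_sample, n_blank, t_blank):
    """Net count rate and its propagated standard deviation.

    Assumes Poisson counting statistics, so var(N) = N for each
    measured number of counts; the variance of a rate N/t is then
    N/t**2, and independent variances add for the difference.
    """
    rate = n_sample / t_sample - n_blank / t_blank
    sigma = math.sqrt(n_sample / t_sample**2 + n_blank / t_blank**2)
    return rate, sigma

# illustrative numbers: 1200 gross counts and 300 blank counts, 600 s each
r, s = net_rate_and_sigma(1200, 600.0, 300, 600.0)
# r = 1.5 counts/s net
```

Note that sigma depends on the measured counts, not on the rate difference, which is why attempts to "allocate" counting times using the difference alone run into trouble.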
Logical error rate scaling of the toric code
International Nuclear Information System (INIS)
Watson, Fern H E; Barrett, Sean D
2014-01-01
To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)
International Nuclear Information System (INIS)
Knuefer; Lindauer
1980-01-01
Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors, as one form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula for the prediction-error filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
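The core computation named here, the Levinson recursion on the Toeplitz autocorrelation equations, can be sketched as follows. This is a generic pure-Python Levinson-Durbin implementation, not the authors' code; note how the reflection coefficients it produces stay below 1 in magnitude for a valid autocorrelation, which is the stability property the abstract mentions:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion on an autocorrelation sequence r.

    Returns the prediction-error filter coefficients a (a[0] == 1),
    the final prediction-error power, and the reflection coefficients,
    whose magnitudes remain below 1 for a positive-definite
    autocorrelation sequence.
    """
    a = [1.0]
    err = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / err              # reflection coefficient at order m
        ks.append(k)
        # filter update: a_new[i] = a[i] + k * a[m - i], a_new[m] = k
        a = [a[i] + k * a[m - i] if 0 < i < m else a[i] for i in range(m)] + [k]
        err *= (1.0 - k * k)        # prediction error shrinks each order
    return a, err, ks

# autocorrelation of an AR(1) process with coefficient 0.5
a, err, ks = levinson_durbin([1.0, 0.5, 0.25], 2)
```

For this AR(1) example the order-2 filter correctly reduces to [1, -0.5, 0], recovering the generating model.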
Deep 3 GHz number counts from a P(D) fluctuation analysis
Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.
2014-05-01
Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam⁻¹, and the radio background temperature is ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
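The essence of a P(D) analysis, namely that the histogram of map pixel values encodes the source count even below the individual-detection limit, can be illustrated with a toy Monte Carlo. The power-law slope, flux limits, and source density below are illustrative stand-ins, not the survey's actual parameters, and the "beam" is a single pixel:

```python
import random

def simulate_pd(n_pix=20000, density=2.0, s_min=1.0, s_max=100.0,
                slope=-1.7, seed=1):
    """Toy P(D): scatter power-law-distributed sources onto pixels
    and return the per-pixel summed flux (the 'deflection' D)."""
    rng = random.Random(seed)
    pix = [0.0] * n_pix
    n_src = int(density * n_pix)
    g = slope + 1.0  # exponent of the cumulative distribution
    for _ in range(n_src):
        # inverse-transform sample from dN/dS proportional to S**slope
        u = rng.random()
        s = (s_min**g + u * (s_max**g - s_min**g)) ** (1.0 / g)
        pix[rng.randrange(n_pix)] += s
    return pix

pix = simulate_pd()
```

Fitting the histogram of `pix` against histograms predicted for trial count models is then what recovers the slope; the real analysis does this with the interferometer beam, noise, and MCMC over count-model parameters.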
Directory of Open Access Journals (Sweden)
MA. Lendita Kryeziu
2015-06-01
Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb stating that to err is human and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons why they are made, improve, and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.
Logistic regression for dichotomized counts.
Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W
2016-12-01
Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
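The information retained after dichotomizing a Poisson count follows from the identity P(Y > 0) = 1 − exp(−μ), which is what links the logistic model part to the underlying count process. The stdlib simulation below (an illustrative check, not the authors' code) verifies this identity numerically:

```python
import math
import random

def prob_positive_poisson(mu):
    """Exact P(Y > 0) for Y ~ Poisson(mu)."""
    return 1.0 - math.exp(-mu)

def simulate_positive_fraction(mu, n=200000, seed=7):
    """Monte Carlo estimate of P(Y > 0): draw Poisson counts with
    Knuth's multiplication method, then dichotomize to Y > 0."""
    rng = random.Random(seed)
    threshold = math.exp(-mu)
    positives = 0
    for _ in range(n):
        k, p = 0, 1.0
        while p > threshold:
            k += 1
            p *= rng.random()
        if k - 1 > 0:          # k - 1 is the Poisson draw
            positives += 1
    return positives / n
```

Because the dichotomized outcome only sees whether μ cleared zero, covariate effects on μ are attenuated on the odds scale, which is the source of the efficiency loss the abstract quantifies.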
International Nuclear Information System (INIS)
Winterflood, A.H.
1980-01-01
In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)
Coplestone-Loomis, Lenny
1981-01-01
Pumpkin seeds are counted after students convert pumpkins to jack-o-lanterns. Among the activities involved, pupils learn to count by 10s, make estimates, and to construct a visual representation of 1,000. (MP)
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
Correction for decay during counting in gamma spectrometry
International Nuclear Information System (INIS)
Nir-El, Y.
2013-01-01
A basic result in gamma spectrometry is the count rate of a relevant peak. Correction for decay during counting, expressing the count rate at the beginning of the measurement, can be done by a multiplicative factor that is derived from integrating the count rate over time. The counting time substituted in this factor must be the live time; using the real time instead is an error that underestimates the count rate by approximately the dead-time (DT) percentage. This underestimation of the count rate is corroborated in the measurement of a nuclide with a high DT. The present methodology is not applicable in systems that include a zero-DT correction function. (authors)
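Under the single-nuclide exponential-decay assumption, the multiplicative factor can be sketched as follows; as the abstract stresses, the time passed in must be the live time:

```python
import math

def decay_during_counting_factor(half_life, t_live):
    """Factor converting the mean rate C/t_live into the count rate
    at the start of the measurement. Obtained by integrating
    R(t) = R0 * exp(-lam * t) over the counting interval, which gives
    C = R0 * (1 - exp(-lam * t_live)) / lam."""
    lam = math.log(2.0) / half_life
    return lam * t_live / (1.0 - math.exp(-lam * t_live))
```

For counting times short compared with the half-life the factor tends to 1; for a counting time equal to one half-life it is 2 ln 2, about 1.386. Substituting a real time 10% longer than the live time inflates the argument and so misstates this factor, which is the error the paper corrects.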
Problems and precision of the alpha scintillation radon counting system
International Nuclear Information System (INIS)
Lucas, H.F.; Markuu, F.
1985-01-01
Variations in efficiency as large as 3% have been found for radon scintillation counting systems in which the photomultiplier tubes are sensitive to the thermoluminescent photons emitted by the scintillator after exposure to light, or for which the resolution has deteriorated. The additional standard deviation caused by counting a radon chamber on multiple counting systems has been evaluated, and the effect, if present, did not exceed about 0.1%. The chambers have been calibrated for the measurement of radon in air, and the standard deviation was equal to the statistical counting error combined with a systematic error of 1.1%. 3 references, 2 figures, 2 tables
Minimum Tracking Error Volatility
Luca RICCETTI
2010-01-01
Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio close to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth-orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit by observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
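A weighted least-squares fit of a quadratic response model, with the weights standing in for inverse variances as in the Maximum Likelihood procedure described, can be sketched in pure Python. This is an illustrative implementation, not the VIIRS calibration code:

```python
def weighted_quadratic_fit(x, y, w):
    """Weighted least-squares fit of y ~ c0 + c1*x + c2*x**2.

    Builds and solves the 3x3 normal equations; the weights w play
    the role of inverse variances of the observations.
    """
    # normal-equation matrix A and right-hand side b
    A = [[sum(wi * xi**(i + j) for wi, xi in zip(w, x)) for j in range(3)]
         for i in range(3)]
    b = [sum(wi * yi * xi**i for wi, xi, yi in zip(w, x, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, 3))) / A[r][r]
    return coeffs
```

The abstract's point is that when the quadratic model itself is inadequate, these weights understate the true scatter and the formal coefficient uncertainties become unreliable, so model error must be folded into the weighting as well.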
Practical application of the theory of errors in measurement
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma-ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Regression Models For Multivariate Count Data.
Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei
2017-01-01
Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.
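The overdispersion that motivates these models is easy to reproduce with a gamma-Poisson (negative binomial) mixture; the stdlib sketch below is illustrative, not the authors' model:

```python
import math
import random

def neg_binom_samples(mean, dispersion, n, seed=3):
    """Draw overdispersed counts via a gamma-Poisson mixture.

    In this NB2 parameterisation, Var(Y) = mean + dispersion * mean**2,
    exceeding the Poisson variance (= mean) whenever dispersion > 0.
    """
    rng = random.Random(seed)
    shape = 1.0 / dispersion
    scale = mean * dispersion  # gamma mean = shape * scale = mean
    out = []
    for _ in range(n):
        lam = rng.gammavariate(shape, scale)
        # Poisson draw given lam (Knuth's multiplication method)
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while p > threshold:
            k += 1
            p *= rng.random()
        out.append(k - 1)
    return out
```

A sample with mean 5 and dispersion 0.5 should show a variance near 17.5 rather than the Poisson value of 5, which is exactly the kind of mean-variance mismatch that breaks multinomial-logit assumptions for sequencing counts.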
Simultaneous maximum a posteriori longitudinal PET image reconstruction
Ellis, Sam; Reader, Andrew J.
2017-09-01
Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
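The one-step-late MAP coupling of two longitudinal reconstructions can be sketched in a 1D toy. The system matrix, penalty strength, and alternating update scheme here are illustrative simplifications, not the authors' implementation:

```python
def osl_map_slr(y1, y2, A, n_iter=50, beta=0.05):
    """One-step-late MAP-EM for two longitudinal 1D 'scans' y1, y2,
    coupled by the quadratic penalty 0.5 * beta * sum_j (x1_j - x2_j)**2.

    A is a list-of-lists system matrix (rows: detector bins, cols:
    voxels). The penalty gradient is evaluated at the current estimate
    (the one-step-late trick), so beta must be small enough to keep
    the denominator positive.
    """
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    x1, x2 = [1.0] * n, [1.0] * n
    for _ in range(n_iter):
        for x, other, y in ((x1, x2, y1), (x2, x1, y2)):
            proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
            for j in range(n):
                back = sum(A[i][j] * y[i] / proj[i]
                           for i in range(m) if proj[i] > 0)
                x[j] *= back / (sens[j] + beta * (x[j] - other[j]))
    return x1, x2
```

With beta = 0 each line reduces to standard ML-EM; with beta > 0 the two images are pulled toward each other voxel-wise, which is where the noise reduction for unchanged tissue comes from.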
Monitoring Milk Somatic Cell Counts
Directory of Open Access Journals (Sweden)
Gheorghe Şteţca
2014-11-01
Full Text Available The presence of somatic cells in milk is a widely disputed issue in the milk production sector. The somatic cell count in raw milk is a marker for specific cow diseases such as mastitis or swollen udder. A high level of somatic cells causes physical and chemical changes to milk composition and nutritional value, as well as to milk products. Mastitic milk is also not proper for human consumption due to its contribution to the spreading of certain diseases and food poisoning. Accordingly, EU regulations established the maximum threshold of admitted somatic cells in raw milk at 400,000 cells/mL starting with 2014. This study was carried out in order to examine raw milk samples provided by small farms, industrial-type farms, and milk processing units. There are several ways to count somatic cells in milk, but the accepted reference method is the microscopic method described by SR EN ISO 13366-1/2008. Generally, samples registered values in accordance with the admissible limit. By periodical monitoring of the somatic cell count, certain technological process issues are avoided and the consumer's health is ensured.
Platelet Count and Plateletcrit
African Journals Online (AJOL)
…demonstrated that neonates with late onset sepsis (bacteremia after 3 days of age) had a dramatic increase in MPV and PDW [18]. We hypothesize that as the MPV and PDW increase and the platelet count and PCT decrease in sick children, intuitively, the ratio of MPV to PCT; MPV to platelet count; PDW to PCT; PDW to platelet ...
Directory of Open Access Journals (Sweden)
Phillip P. Allen
2014-05-01
Full Text Available Techniques that analyze biological remains from sediment sequences for environmental reconstructions are well established and widely used. Yet identifying, counting, and recording biological evidence such as pollen grains remains a highly skilled, demanding, and time-consuming task. Standard procedure requires the classification and recording of between 300 and 500 pollen grains from each representative sample. Recording the data from a pollen count requires significant effort and focused resources from the palynologist. However, when an adaptation to the recording procedure is utilized, efficiency and time economy improve. We describe EcoCount, which represents a development in environmental data recording procedure. EcoCount is a voice-activated, fully customizable digital count sheet that allows the investigator to continuously interact with a field of view during data recording. Continuous viewing allows the palynologist the opportunity to remain engaged with the essential task, identification, for longer, making pollen counting more efficient and economical. EcoCount is a versatile software package that can be used to record a variety of environmental evidence and can be installed on different computer platforms, making adoption by users and laboratories simple and inexpensive. The user-friendly format of EcoCount allows any novice to become competent and functional in a very short time.
Schattschneider, Doris
1991-01-01
Provided are examples from many domains of mathematics that illustrate the Fubini Principle in its discrete version: the value of a summation over a rectangular array is independent of the order of summation. Included are: counting using partitions as in proof by pictures, combinatorial arguments, indirect counting as in the inclusion-exclusion…
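The discrete Fubini Principle described above can be demonstrated in a few lines: summing a rectangular array row-first or column-first yields the same total. The array below is an arbitrary example, not one of the article's puzzles.

```python
def row_first_sum(a):
    """Sum each row, then add the row totals."""
    return sum(sum(row) for row in a)

def column_first_sum(a):
    """Sum each column, then add the column totals."""
    return sum(sum(row[j] for row in a) for j in range(len(a[0])))

array = [[1, 2, 3],
         [4, 5, 6]]

# The value of the summation is independent of the order of summation:
assert row_first_sum(array) == column_first_sum(array) == 21
```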
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler and less bulky, consumes less power, costs less, and avoids analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
This review article explains medication errors: their definition, the scope of the problem, types of medication errors, common causes, monitoring, consequences, prevention, and management, presented clearly with tables that are easy to understand.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
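The regularizer at the heart of this approach, mutual information between classifier responses and class labels, can be sketched with a simple plug-in estimate over binarized responses. This is a schematic reconstruction, not the authors' exact entropy estimator.

```python
import math

def mutual_information(responses, labels):
    """Plug-in estimate of I(R;Y) between binarized classifier
    responses (sign of the linear response) and binary class labels.
    This is the quantity the paper proposes to maximize alongside
    minimizing classification error and classifier complexity.
    """
    n = len(labels)
    r = [1 if v > 0 else 0 for v in responses]
    mi = 0.0
    for rv in (0, 1):
        for yv in (0, 1):
            p_joint = sum(1 for a, b in zip(r, labels) if a == rv and b == yv) / n
            p_r = sum(1 for a in r if a == rv) / n
            p_y = sum(1 for b in labels if b == yv) / n
            if p_joint > 0:
                mi += p_joint * math.log(p_joint / (p_r * p_y))
    return mi

# A classifier whose response sign matches the label perfectly attains
# the maximum I(R;Y) = ln 2 for balanced binary labels:
assert abs(mutual_information([-1, 1, -1, 1], [0, 1, 0, 1]) - math.log(2)) < 1e-12
```

When the response carries no information about the label, the estimate drops to zero, which is what the regularization term penalizes.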
Department of Housing and Urban Development — This report displays the data communities reported to HUD about the nature of their dedicated homeless inventory, referred to as their Housing Inventory Count (HIC)....
Scintillation counting apparatus
International Nuclear Information System (INIS)
Noakes, J.E.
1978-01-01
Apparatus is described for the accurate measurement of radiation by means of scintillation counters and in particular for the liquid scintillation counting of both soft beta radiation and gamma radiation. Full constructional and operating details are given. (UK)
Allegheny County Traffic Counts
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Traffic sensors at over 1,200 locations in Allegheny County collect vehicle counts for the Pennsylvania Department of Transportation. Data included in the Health...
Levin,Oscar; Roberts, Gerri M.
2013-01-01
To understand better some of the classic knights and knaves puzzles, we count them. Doing so reveals a surprising connection between puzzles and solutions, and highlights some beautiful combinatorial identities.
Lagarias, Jeffrey C.; Rains, Eric; Vanderbei, Robert J.
2001-01-01
The Kruskal Count is a card trick invented by Martin J. Kruskal in which a magician "guesses" a card selected by a subject according to a certain counting procedure. With high probability the magician can correctly "guess" the card. The success of the trick is based on a mathematical principle related to coupling methods for Markov chains. This paper analyzes in detail two simplified variants of the trick and estimates the probability of success. The model predictions are compared with simula...
The effect of event shape on centroiding in photon counting detectors
International Nuclear Information System (INIS)
Kawakami, Hajime; Bone, David; Fordham, John; Michel, Raul
1994-01-01
High resolution, CCD readout, photon counting detectors employ simple centroiding algorithms for defining the spatial position of each detected event. The accuracy of centroiding is very dependent upon a number of parameters, including the profile, energy and width of the intensified event. In this paper, we provide an analysis of how the characteristics of an intensified event change as the input count rate increases, and the consequent effect on centroiding. The changes in these parameters are applied in particular to the MIC photon counting detector developed at UCL for ground and space based astronomical applications. This detector has a maximum format of 3072x2304 pixels, permitting its use in the highest resolution applications. Individual events, at light levels from 5 to 1000k events/s over the detector area, were analysed. It was found that both the asymmetry and width of event profiles were strongly dependent upon the energy of the intensified event. The variation in profile then affected the centroiding accuracy, leading to loss of resolution. These inaccuracies have been quantified for two different 3-pixel CCD centroiding algorithms and one 2-pixel algorithm. The results show that a maximum error of less than 0.05 CCD pixel occurs with the 3-pixel algorithms and 0.1 CCD pixel for the 2-pixel algorithm. An improvement is proposed by utilising straight pore MCPs in the intensifier and a 70 μm air gap in front of the CCD. ((orig.))
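A minimal version of the simple centre-of-mass centroiding such detectors use, for a 3-pixel window, can be sketched as below. The function name and sample intensities are illustrative, not the MIC detector's actual implementation.

```python
def centroid_3pixel(window):
    """Centre-of-mass centroid over 3 adjacent CCD pixels.

    `window` holds the intensities (I_left, I_centre, I_right) of the
    pixel triplet around the event peak; the returned offset is in CCD
    pixels relative to the central pixel. Asymmetry in the event
    profile shifts this estimate, which is the source of the
    centroiding errors the paper quantifies.
    """
    i_l, i_c, i_r = window
    return (i_r - i_l) / (i_l + i_c + i_r)

# A symmetric event profile centroids to the central pixel (offset 0):
assert centroid_3pixel((10.0, 80.0, 10.0)) == 0.0
# An asymmetric profile shifts the estimate toward the brighter side:
assert centroid_3pixel((5.0, 80.0, 15.0)) > 0.0
```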
Preverbal and verbal counting and computation.
Gallistel, C R; Gelman, R
1992-08-01
We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.
Automated vehicle counting using image processing and machine learning
Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae
2017-04-01
Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via the use of a camera at the count site recording video of the traffic, with counting being performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure utilizes a Raspberry Pi micro-computer to detect when a car is in a lane and generate an accurate count of vehicle movements. The method uses background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids the fatigue issues encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
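The background-subtraction step can be sketched in pure Python as below. This is a simplified stand-in for the paper's pipeline: the real system adds a machine-learning classifier, and the threshold values here are illustrative.

```python
def background_subtract_count(frames, background, threshold=30, min_pixels=50):
    """Count vehicle movements from a sequence of grey-level frames.

    A frame is 'occupied' when enough pixels deviate from the
    background frame by more than `threshold`; each rising edge of the
    occupancy signal is counted as one vehicle passing through the lane.
    """
    occupied = []
    for frame in frames:
        changed = sum(
            1
            for row, brow in zip(frame, background)
            for px, bpx in zip(row, brow)
            if abs(px - bpx) > threshold
        )
        occupied.append(changed >= min_pixels)
    # A vehicle movement is a rising edge in the occupancy signal.
    prev = False
    count = 0
    for cur in occupied:
        if cur and not prev:
            count += 1
        prev = cur
    return count

# Toy 10x10 frames: empty lane, then a vehicle present for two frames,
# then empty, then a second vehicle -> two movements counted.
empty = [[0] * 10 for _ in range(10)]
car = [[255] * 10 for _ in range(10)]
assert background_subtract_count([empty, car, car, empty, car], empty) == 2
```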
Energy Technology Data Exchange (ETDEWEB)
Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-04
We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together, as ρL, by Rutherford backscatter. Propagating errors gives Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then Δln(T) = ΔT/T = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
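The error-propagation chain above can be checked numerically. The function below mirrors the formulas, using magnitudes throughout; the sample numbers are illustrative, not measured values.

```python
import math

def opacity_and_error(B, dB, B0, dB0, rho_L, d_rho_L):
    """Opacity k = -ln(T)/(rho*L) with T = B/B0, and its fractional
    error via Dk/k = (dB/B + dB0/B0)/|ln T| + d(rhoL)/(rhoL).
    Variable names mirror the text; inputs are illustrative.
    """
    T = B / B0
    k = -math.log(T) / rho_L
    frac_err = (dB / B + dB0 / B0) / abs(math.log(T)) + d_rho_L / rho_L
    return k, frac_err

# For T = 0.5 and rho*L = 1, the opacity is ln 2:
k, frac = opacity_and_error(B=50.0, dB=1.0, B0=100.0, dB0=1.0,
                            rho_L=1.0, d_rho_L=0.01)
assert abs(k - math.log(2)) < 1e-12
```

Note that as T approaches 1 the factor 1/|ln T| blows up, which is why low-absorption measurements carry large fractional opacity errors.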
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
textabstractThe maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
TasselNet: counting maize tassels in the wild via local counts regression network.
Lu, Hao; Cao, Zhiguo; Xiao, Yang; Zhuang, Bohan; Shen, Chunhua
2017-01-01
performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.
TasselNet: counting maize tassels in the wild via local counts regression network
Directory of Open Access Journals (Sweden)
Hao Lu
2017-11-01
margins and achieves the overall best counting performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. Conclusions TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.
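The MAE and MSE counting metrics quoted above are straightforward to compute. A minimal sketch follows, assuming (as is conventional in the counting literature, though not confirmed here) that the reported MSE is the square-rooted form; the tassel counts are made up for illustration.

```python
def mae(predicted, actual):
    """Mean absolute counting error over a set of test images."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def mse(predicted, actual):
    """Root of the mean squared counting error (assumed square-rooted
    form, as commonly reported in counting papers)."""
    return (sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)) ** 0.5

# Illustrative per-image tassel counts (not the paper's test sequences):
predicted = [10, 12, 30, 7]
actual = [11, 14, 28, 7]
assert mae(predicted, actual) == 1.25
```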
Symptoms Low white blood cell count By Mayo Clinic Staff A low white blood cell count (leukopenia) is a decrease ... of white blood cell (neutrophil). The definition of low white blood cell count varies from one medical ...
Accuracy in activation analysis: count rate effects
International Nuclear Information System (INIS)
Lindstrom, R.M.; Fleming, R.F.
1980-01-01
The accuracy inherent in activation analysis is ultimately limited by the uncertainty of counting statistics. Several workers have shown that, when careful attention is paid to detail, all systematic errors can be reduced to an insignificant fraction of the total uncertainty, even when the statistical limit is well below one percent. A matter of particular importance is the reduction of errors due to high counting rate. The loss of counts due to random coincidence (pulse pileup) in the amplifier and to digitization time in the ADC may be treated as a series combination of extending and non-extending dead times, respectively. The two effects are experimentally distinct. Live timer circuits in commercial multi-channel analyzers compensate properly for ADC dead time for long-lived sources, but not for pileup. Several satisfactory solutions are available, including pileup rejection and dead time correction circuits, loss-free ADCs, and computed corrections in a calibrated system. These methods are sufficiently reliable and well understood that a decaying source can be measured routinely with acceptably small errors at a dead time as high as 20 percent.
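The two dead-time models named above have standard textbook forms, sketched below with generic expressions (this is not the paper's calibration procedure). For true rate n and dead time τ, a non-extending dead time gives an observed rate m = n/(1 + nτ), while an extending dead time gives m = n·exp(-nτ).

```python
import math

def observed_rate_nonextending(true_rate, tau):
    """Non-extending (non-paralyzable) dead time, e.g. ADC
    digitization: m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * tau)

def observed_rate_extending(true_rate, tau):
    """Extending (paralyzable) dead time, e.g. pulse pileup:
    m = n * exp(-n*tau)."""
    return true_rate * math.exp(-true_rate * tau)

def true_rate_nonextending(observed_rate, tau):
    """Invert the non-extending model to correct an observed rate."""
    return observed_rate / (1.0 - observed_rate * tau)

# At the same tau, the extending model loses more counts, and the
# non-extending correction recovers the true rate exactly:
n, tau = 1000.0, 1e-4
m = observed_rate_nonextending(n, tau)
assert observed_rate_extending(n, tau) < m
assert abs(true_rate_nonextending(m, tau) - n) < 1e-6
```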
Alpha scintillation radon counting
International Nuclear Information System (INIS)
Lucas, H.F. Jr.
1977-01-01
Radon counting chambers which utilize the alpha-scintillation properties of silver activated zinc sulfide are simple to construct, have a high efficiency, and, with proper design, may be relatively insensitive to variations in the pressure or purity of the counter filling. Chambers which were constructed from glass, metal, or plastic in a wide variety of shapes and sizes were evaluated for the accuracy and the precision of the radon counting. The principles affecting the alpha-scintillation radon counting chamber design and an analytic system suitable for a large scale study of the 222Rn and 226Ra content of either air or other environmental samples are described. Particular note is taken of those factors which affect the accuracy and the precision of the method for monitoring radioactivity around uranium mines
Principles of correlation counting
International Nuclear Information System (INIS)
Mueller, J.W.
1975-01-01
A review is given of the various applications which have been made of correlation techniques in the field of nuclear physics, in particular for absolute counting. Whereas in most cases the usual coincidence method will be preferable for its simplicity, correlation counting may be the only possible approach in such cases where the two radiations of the cascade cannot be well separated or when there is a longliving intermediate state. The measurement of half-lives and of count rates of spurious pulses is also briefly discussed. The various experimental situations lead to different ways the correlation method is best applied (covariance technique with one or with two detectors, application of correlation functions, etc.). Formulae are given for some simple model cases, neglecting dead-time corrections
Interpretation of galaxy counts
International Nuclear Information System (INIS)
Tinsley, B.M.
1980-01-01
New models are presented for the interpretation of recent counts of galaxies to 24th magnitude, and predictions are shown to 28th magnitude for future comparison with data from the Space Telescope. The results supersede earlier, more schematic models by the author. Tyson and Jarvis found in their counts a ''local'' density enhancement at 17th magnitude, on comparison with the earlier models; the excess is no longer significant when a more realistic mixture of galaxy colors is used. Bruzual and Kron's conclusion that Kron's counts show evidence for evolution at faint magnitudes is confirmed, and it is predicted that some 23d magnitude galaxies have redshifts greater than unity. These may include spheroidal systems, elliptical galaxies, and the bulges of early-type spirals and S0's, seen during their primeval rapid star formation
Computerized radioautographic grain counting
International Nuclear Information System (INIS)
McKanna, J.A.; Casagrande, V.A.
1985-01-01
In recent years, radiolabeling techniques have become fundamental assays in physiology and biochemistry experiments. They also have assumed increasingly important roles in morphologic studies. Characteristically, radioautographic analysis of structure has been qualitative rather than quantitative, however, microcomputers have opened the door to several methods for quantifying grain counts and density. The overall goal of this chapter is to describe grain counting using the Bioquant, an image analysis package based originally on the Apple II+, and now available for several popular microcomputers. The authors discuss their image analysis procedures by applying them to a study of development in the central nervous system
Energy Technology Data Exchange (ETDEWEB)
Soeker, H [Deutsches Windenergie-Institut (Germany)
1996-09-01
As a state-of-the-art method, the rainflow counting technique is presently applied throughout fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is used most of the time as a mere data reduction technique, disregarding some of the inherent information in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process. (au)
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well engineered renewable remote energy system, utilizing the principle of maximum power point tracking, can be more cost-effective, has higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are much greater at larger temperature variations and for higher power rated systems. Other advantages include optimal sizing and system monitoring and control
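Hill-climbing maximum power point tracking can be sketched as a perturb-and-observe loop: perturb the operating voltage, keep the direction that increased power. This is a generic sketch in the spirit of the abstract, not the microprocessor algorithm it describes; the power curve and step size are illustrative.

```python
def mppt_hill_climb(power_at, v_start, v_step=0.1, iters=200):
    """Perturb-and-observe maximum power point tracking.

    `power_at(v)` returns the source power at operating voltage v.
    Each step perturbs the voltage and reverses direction whenever the
    measured power falls, so the operating point climbs toward (and
    then oscillates around) the maximum power point.
    """
    v = v_start
    direction = 1.0
    p_prev = power_at(v)
    for _ in range(iters):
        v += direction * v_step
        p = power_at(v)
        if p < p_prev:          # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy power curve with its maximum at 17 V; the tracker settles within
# one perturbation step of the true maximum power point:
peak = mppt_hill_climb(lambda v: -(v - 17.0) ** 2 + 100.0, v_start=12.0)
assert abs(peak - 17.0) < 0.5
```

In a real regulator the "power" reading would be the measured battery charging current, as the abstract notes, rather than an analytic curve.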
International Nuclear Information System (INIS)
Daud, M.Y.; Qazi, R.A.
2015-01-01
Pakistan is a resource-limited society and gold-standard parameters to monitor HIV disease activity are very costly. The objective of the study was to evaluate total lymphocyte count (TLC) as a surrogate for CD4 count to monitor disease activity in HIV/AIDS in a resource-limited society. Methods: This cross-sectional study was carried out at the HIV/AIDS treatment centre, Pakistan Institute of Medical Sciences (PIMS), Islamabad. A total of seven hundred and seventy-four (774) HIV-positive patients were enrolled in this study, and their CD4 count and total lymphocyte count were checked to find any correlation between the two by using the Spearman ranked correlation coefficient. Results: The mean CD4 count was 434.30 ± 269.23, with a minimum CD4 count of 9.00 and a maximum of 1974.00. The mean total lymphocyte count (TLC) was 6764.0052 ± 2364.02, with a minimum TLC of 1200.00 and a maximum of 20200.00. Using Pearson's correlation (r), there was a significant and positive correlation between TLC and CD4 count (r² = 0.127, p = 0.000) at the 0.01 level. Conclusion: Our study showed a significant positive correlation between CD4 count and total lymphocyte count (TLC), so TLC can be used as a marker of disease activity in HIV-infected patients. (author)
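The Pearson product-moment correlation reported in such studies is simple to compute from paired counts. The TLC/CD4 pairs below are made up for illustration; they are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two
    equal-length sequences of paired measurements."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired TLC and CD4 counts (illustrative values only):
tlc = [1200, 3500, 5200, 6800, 9100]
cd4 = [40, 180, 320, 450, 610]
r = pearson_r(tlc, cd4)
assert 0.9 < r <= 1.0   # strongly positive, as the study reports
```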
Atmospheric mold spore counts in relation to meteorological parameters
Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.
Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated with weather variables, and a model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The model for Alternaria and Epicoccum incorporated the annual seasonal cycle. Fungal spore counts can be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; this modeling can provide estimates of exposure to fungal aeroallergens.
Detection and counting systems
International Nuclear Information System (INIS)
Abreu, M.A.N. de
1976-01-01
Detection devices based on gaseous ionization are analysed, such as electroscopes, ionization chambers, proportional counters and Geiger-Mueller counters. Scintillation methods are also commented on. A revision of the basic concepts in electronics is given and the main counting equipment is detailed. In the study of gamma spectrometry, scintillation and semiconductor detectors are analysed.
Radiation intensity counting system
International Nuclear Information System (INIS)
Peterson, R.J.
1982-01-01
A method is described of excluding the natural dead time of the radiation detector (e.g., a Geiger-Mueller counter) in a ratemeter counting circuit, thus eliminating the need for dead time corrections. Using a pulse generator, an artificial dead time is introduced which is longer than the natural dead time of the detector. (U.K.)
... GO About MedlinePlus Site Map FAQs Customer Support Health Topics Drugs & Supplements Videos & Tools Español You Are Here: Home → Medical Encyclopedia → Calorie count - fast food URL of this page: //medlineplus.gov/ency/patientinstructions/ ...
Fast counting electronics for neutron coincidence counting
International Nuclear Information System (INIS)
Swansen, J.E.
1987-01-01
This patent describes a high speed circuit for accurate neutron coincidence counting comprising: neutron detecting means for providing an above-threshold signal upon neutron detection; amplifying means inputted by the neutron detecting means for providing a pulse output having a pulse width of about 0.5 microseconds upon the input of each above-threshold signal; digital processing means inputted by the pulse output of the amplifying means for generating a pulse responsive to each input pulse from the amplifying means and having a pulse width of about 50 nanoseconds effective for processing an expected neutron event rate of about 1 Mpps; pulse stretching means inputted by the digital processing means for producing a pulse having a pulse width of several milliseconds for each pulse received from the digital processing means; visual indicating means inputted by the pulse stretching means for producing a visual output for each pulse received from the digital processing means; and derandomizing means effective to receive the 50 ns neutron event pulses from the digital processing means for storage at a rate up to the neutron event rate of 1 Mpps and having first counter means for storing the input neutron event pulses
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
When Errors Count: An EEG Study on Numerical Error Monitoring under Performance Pressure
Schillinger, Frieder L.; De Smedt, Bert; Grabner, Roland H.
2016-01-01
In high-stakes tests, students often display lower achievements than expected based on their skill level--a phenomenon known as choking under pressure. This imposes a serious problem for many students, especially for test-anxious individuals. Among school subjects, mathematics has been shown to be particularly vulnerable to choking. To succeed in a…
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
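For the Mean Energy Model mentioned in the abstract, the maximum entropy distribution under a mean-energy constraint is the Gibbs distribution p_i proportional to exp(-beta*E_i). A minimal sketch, assuming a finite set of energy levels and solving for beta by bisection (function and parameter names are illustrative, not from the paper):

```python
import math

def maxent_gibbs(energies, mean_energy, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over finite 'energies' subject to a
    mean-energy constraint: p_i ~ exp(-beta * E_i), with beta found by
    bisection. mean_energy must lie strictly between min and max energy."""
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # mean energy decreases monotonically as beta increases
        if mean_at(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w], beta

# beta > 0: the distribution is tilted toward low energies
p, beta = maxent_gibbs([0.0, 1.0, 2.0], 0.5)
```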
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA to functional maximum autocorrelation factors (MAF). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions. Functional MAF analysis is a useful method for extracting low dimensional models of temporally or spatially...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
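The correntropy objective referred to above can be illustrated with a small sketch: sample correntropy is the mean of a Gaussian kernel applied to prediction errors, so a single wild label contributes almost nothing, unlike squared loss (the kernel width sigma and the toy data are illustrative):

```python
import math

def correntropy(y_true, y_pred, sigma=1.0):
    """Sample correntropy: mean Gaussian-kernel similarity between paired
    values. Each term lies in (0, 1], so one outlier cannot dominate."""
    k = lambda d: math.exp(-d * d / (2.0 * sigma * sigma))
    return sum(k(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

clean = correntropy([1, 1, 1, 1], [1, 1, 1, 1])       # perfect match -> 1.0
outlier = correntropy([1, 1, 1, 1], [1, 1, 1, 100])   # one wild label -> ~0.75
```

Maximizing correntropy (rather than minimizing squared loss) is what bounds the influence of noisy labels in the MCC framework.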
Practical, Reliable Error Bars in Quantum Tomography
Faist, Philippe; Renner, Renato
2015-01-01
Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of...
Li, Yue (Inventor); Bruck, Jehoshua (Inventor)
2018-01-01
A data device includes a memory having a plurality of memory cells configured to store data values in accordance with a predetermined rank modulation scheme that is optional and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^(-(dn-1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
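A one-qubit caricature (not the paper's repetition-code analysis) of why coherent errors outpace the Pauli model: n coherent rotations by ε compose to a rotation by nε, so the flip probability grows quadratically in n, while the Pauli-twirled model grows only linearly:

```python
import math

def coherent_flip_prob(eps, n):
    """Bit-flip probability after n successive coherent X-rotations by
    angle eps: amplitudes add, so the angle grows to n*eps."""
    return math.sin(n * eps / 2.0) ** 2

def pauli_flip_prob(eps, n):
    """Pauli-twirled model: each rotation becomes an independent flip with
    probability p = sin^2(eps/2); net flip probability after n rounds."""
    p = math.sin(eps / 2.0) ** 2
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)

eps, n = 0.01, 50
coherent = coherent_flip_prob(eps, n)    # grows like (n*eps/2)^2
stochastic = pauli_flip_prob(eps, n)     # grows like n*(eps/2)^2
```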
A new stratification of mourning dove call-count routes
Blankenship, L.H.; Humphrey, A.B.; MacDonald, D.
1971-01-01
The mourning dove (Zenaidura macroura) call-count survey is a nationwide audio-census of breeding mourning doves. Recent analyses of the call-count routes have utilized a stratification based upon physiographic regions of the United States. An analysis of 5 years of call-count data, based upon stratification using potential natural vegetation, has demonstrated that this new stratification results in strata with greater homogeneity than the physiographic strata, provides lower error variance, and hence generates greater precision in the analysis without an increase in call-count routes. Error variance was reduced approximately 30 percent for the contiguous United States. This indicates that future analysis based upon the new stratification will result in an increased ability to detect significant year-to-year changes.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility under an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
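The time-interval idea above can be sketched simply: under background alone, waiting times between counts are exponential, so an unusually short interval is evidence of an added source (the rates and thresholds below are illustrative, not from the paper):

```python
import math

def short_interval_pvalue(t, background_rate):
    """p-value for a waiting time <= t between successive counts when only
    background is present (Poisson, rate b): P(T <= t) = 1 - exp(-b*t)."""
    return 1.0 - math.exp(-background_rate * t)

# With a 1 count/s background, a 10 ms gap between counts is unlikely:
p = short_interval_pvalue(0.01, 1.0)   # ~0.00995, below a 1% Type I threshold
```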
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...
Learning from prescribing errors
Dean, B
2002-01-01
The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...
Counting statistics in radioactivity measurements
International Nuclear Information System (INIS)
Martin, J.
1975-01-01
The application of statistical methods to radioactivity measurement problems is analyzed in several chapters devoted successively to: the statistical nature of radioactivity counts; the application to radioactive counting of two theoretical probability distributions, the Poisson distribution law and the Laplace-Gauss law; true counting laws; corrections related to the nature of the apparatus; and statistical techniques in gamma spectrometry
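The central fact underlying such counting statistics is that for Poisson-distributed counts the standard deviation is sqrt(N), so the relative error falls as 1/sqrt(N); a minimal sketch (names are illustrative):

```python
import math

def count_uncertainty(n_counts):
    """For Poisson-distributed counts the standard deviation is sqrt(N)
    and the relative error is 1/sqrt(N); for large N the Laplace-Gauss
    (normal) law is a good approximation to the count distribution."""
    sigma = math.sqrt(n_counts)
    return sigma, sigma / n_counts

sigma, rel = count_uncertainty(10000)   # sigma = 100 counts, 1% relative error
```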
International Nuclear Information System (INIS)
Brewster, K.
2002-01-01
Full text: This study was designed to investigate anecdotal evidence that residual Sestamibi (MIBI) activity varied in certain situations. For rest studies, different brands of syringes were tested to see if the residuals varied. The period of time MIBI doses remained in the syringe between dispensing and injection was also considered as a possible source of increased residual counts. Stress MIBI syringe residual activities were measured to assess if the method of stress test affected residual activity. MIBI was reconstituted using 13 GBq of technetium in 3 mL of normal saline, then boiled for 10 minutes. Doses were dispensed according to department protocol and injected via cannula. Residual syringes were collected for three syringe types. In each case the barrel and plunger were measured separately. As the syringe is flushed during the exercise stress test and not the pharmacological stress test, the chosen method was recorded. No relationship was demonstrated between the time MIBI remained in a syringe prior to injection and residual activity. Residual activity was not affected by the method of stress test used. Actual injected activity can be calculated if the amount of activity remaining in the syringe post injection is known. Imaging time can be adjusted for residual activity to optimise count statistics. Preliminary results in this study indicate there is no difference in residual activity between syringe brands. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application in real experimental data is often hindered by slow computation of likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement on computational speed and is applicable to arbitrarily large number of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
Areces, Carlos; Hoffmann, Guillaume; Denis, Alexandre
We present a modal language that includes explicit operators to count the number of elements that a model might include in the extension of a formula, and we discuss how this logic has been previously investigated under different guises. We show that the language is related to graded modalities and to hybrid logics. We illustrate a possible application of the language to the treatment of plural objects and queries in natural language. We investigate the expressive power of this logic via bisimulations, discuss the complexity of its satisfiability problem, define a new reasoning task that retrieves the cardinality bound of the extension of a given input formula, and provide an algorithm to solve it.
International Nuclear Information System (INIS)
Buckman, S.M.; Ius, D.
1996-01-01
This paper reports on the development of a digital coincidence-counting system which comprises a custom-built data acquisition card and associated PC software. The system has been designed to digitise the pulse-trains from two radiation detectors at a rate of 20 MSamples/s with 12-bit resolution. Through hardware compression of the data, the system can continuously record both individual pulse-shapes and the time intervals between pulses. Software-based circuits are used to process the stored pulse trains. These circuits are constructed simply by linking together icons representing various components such as coincidence mixers, time delays, single-channel analysers, deadtimes and scalers. This system enables a pair of pulse trains to be processed repeatedly using any number of different methods. Some preliminary results are presented in order to demonstrate the versatility and efficiency of this new method. (orig.)
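A software coincidence mixer of the kind described can be sketched as a two-pointer scan over sorted timestamp lists (the resolving window and the pairing rule, each pulse used at most once, are illustrative choices):

```python
def coincidences(times_a, times_b, window):
    """Count pairs (a, b) with |a - b| <= window, each pulse used at most
    once; both timestamp lists are assumed sorted. A software analogue of
    a coincidence mixing circuit applied to recorded pulse trains."""
    i = j = hits = 0
    while i < len(times_a) and j < len(times_b):
        dt = times_a[i] - times_b[j]
        if abs(dt) <= window:
            hits += 1
            i += 1
            j += 1
        elif dt > 0:
            j += 1   # b is behind: advance the second train
        else:
            i += 1   # a is behind: advance the first train
    return hits

n = coincidences([1.0, 2.0, 5.0], [1.02, 4.0, 5.01], 0.05)  # 2 coincidences
```

Because the pulse trains are stored, the same data can be reprocessed with different windows or delays, which is the versatility the record emphasizes.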
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
International Nuclear Information System (INIS)
Picard, R.R.
1989-01-01
Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
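Error propagation for a materials balance can be sketched as follows: for independent measurements the variances add in quadrature, so for MUF = input - output - inventory change (the notation and the independence assumption are illustrative):

```python
import math

def muf_and_sigma(inp, out, d_inv):
    """Materials balance MUF = input - output - inventory change, with
    uncertainty propagated from independent measurements:
    sigma_MUF^2 = sigma_in^2 + sigma_out^2 + sigma_inv^2.
    Each argument is a (value, sigma) pair in consistent units (e.g. kg)."""
    muf = inp[0] - out[0] - d_inv[0]
    sigma = math.sqrt(inp[1] ** 2 + out[1] ** 2 + d_inv[1] ** 2)
    return muf, sigma

muf, sigma = muf_and_sigma((1000.0, 3.0), (990.0, 4.0), (5.0, 0.0))  # 5.0 +/- 5.0
```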
Martínez-Legaz, Juan Enrique; Soubeyran, Antoine
2003-01-01
We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.
Counting statistics in low level radioactivity measurements fluctuating counting efficiency
International Nuclear Information System (INIS)
Pazdur, M.F.
1976-01-01
A divergence between the probability distribution of the number of nuclear disintegrations and the number of observed counts, caused by counting efficiency fluctuation, is discussed. The negative binomial distribution is proposed to describe the probability distribution of the number of counts, instead of the Poisson distribution, which is assumed to hold for the number of nuclear disintegrations only. From actual measurements the r.m.s. amplitude of counting efficiency fluctuation is estimated. Some consequences of counting efficiency fluctuation are investigated and the corresponding formulae are derived: (1) for the detection limit as a function of the number of partial measurements and the relative amplitude of counting efficiency fluctuation, and (2) for the optimum allocation of the number of partial measurements between sample and background. (author)
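The inflation of the Poisson variance by efficiency fluctuation can be sketched as an overdispersion term: with relative r.m.s. fluctuation delta, Var is approximately mu + (delta*mu)^2, the mean-variance relationship of a negative-binomial-like law (this compact form is a standard approximation, not taken verbatim from the record):

```python
def count_variance(mean_counts, eff_rel_fluct):
    """Variance of observed counts when the counting efficiency fluctuates
    with relative r.m.s. amplitude delta: the Poisson variance is inflated
    by an overdispersion term, Var ~ mu + (delta * mu)^2."""
    mu = mean_counts
    return mu + (eff_rel_fluct * mu) ** 2

pure_poisson = count_variance(1e4, 0.0)    # Poisson limit: Var = mu
fluct = count_variance(1e4, 0.01)          # 1% fluctuation doubles the variance here
```

Note that the extra term grows as mu^2, so efficiency fluctuation dominates precisely in long, high-count measurements, which motivates splitting them into partial measurements.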
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
Medication errors: prescribing faults and prescription errors.
Velo, Giampaolo P; Minuz, Pietro
2009-06-01
1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Application of neutron multiplicity counting to waste assay
Energy Technology Data Exchange (ETDEWEB)
Pickrell, M.M.; Ensslin, N. [Los Alamos National Lab., NM (United States); Sharpe, T.J. [North Carolina State Univ., Raleigh, NC (United States)
1997-11-01
This paper describes the use of a new figure of merit code that calculates both bias and precision for coincidence and multiplicity counting, and determines the optimum regions for each in waste assay applications. A "tunable multiplicity" approach is developed that uses a combination of coincidence and multiplicity counting to minimize the total assay error. An example is shown where multiplicity analysis is used to solve for mass, alpha, and multiplication and tunable multiplicity is shown to work well. The approach provides a method for selecting coincidence, multiplicity, or tunable multiplicity counting to give the best assay with the lowest total error over a broad spectrum of assay conditions. 9 refs., 6 figs.
Budden, A. E.; Abrams, S.; Chodacki, J.; Cruse, P.; Fenner, M.; Jones, M. B.; Lowenberg, D.; Rueda, L.; Vieglais, D.
2017-12-01
The impact of research has traditionally been measured by citations to journal publications and used extensively for evaluation and assessment in academia, but this process misses the impact and reach of data and software as first-class scientific products. For traditional publications, Article-Level Metrics (ALM) capture the multitude of ways in which research is disseminated and used, such as references and citations within social media and other journal articles. Here we present on the extension of usage and citation metrics collection to include other artifacts of research, namely datasets. The Make Data Count (MDC) project will enable measuring the impact of research data in a manner similar to what is currently done with publications. Data-level metrics (DLM) are a multidimensional suite of indicators measuring the broad reach and use of data as legitimate research outputs. By making data metrics openly available for reuse in a number of different ways, the MDC project represents an important first step on the path towards the full integration of data metrics into the research data management ecosystem. By assuring researchers that their contributions to scholarly progress represented by data corpora are acknowledged, data level metrics provide a foundation for streamlining the advancement of knowledge by actively promoting desirable best practices regarding research data management, publication, and sharing.
LAWRENCE RADIATION LABORATORY COUNTING HANDBOOK
Energy Technology Data Exchange (ETDEWEB)
Group, Nuclear Instrumentation
1966-10-01
The Counting Handbook is a compilation of operational techniques and performance specifications on counting equipment in use at the Lawrence Radiation Laboratory, Berkeley. Counting notes have been written from the viewpoint of the user rather than that of the designer or maintenance man. The only maintenance instructions that have been included are those that can easily be performed by the experimenter to assure that the equipment is operating properly.
Social Security Administration — Staging Instance for all SUMs Counts related projects including: Redeterminations/Limited Issue, Continuing Disability Resolution, CDR Performance Measures, Initial...
Low-priced, time-saving, reliable and stable LR-115 counting system
International Nuclear Information System (INIS)
Tchorz-Trzeciakiewicz, D.E.
2015-01-01
Nuclear alpha particles leave etch pits (tracks) when they strike the surface of an LR-115 detector. The density of these tracks is used to measure radon concentration. Counting these tracks by eye is a tedious and time-consuming procedure that may introduce counting errors, whereas most available automatic and semiautomatic counting systems are expensive or complex. An uncomplicated, robust, reliable and stable counting system based on software freely available on the Internet, such as Digimizer™ and PhotoScape, was developed and is proposed here. The effectiveness of the proposed procedure was evaluated by comparing the number of tracks counted by software with the number of tracks counted manually for 223 detectors. The percentage error for each analysed detector was obtained as the difference between the automatic and manual counts divided by the manual count. For more than 97% of the detectors, the percentage errors fell between −3% and 3%. - Highlights: • A semiautomatic, uncomplicated procedure was proposed to count alpha tracks. • Software freely available on the Internet is used as an alpha track counting system for LR-115. • LR-115 detectors are used to measure radon concentration and radon exhalation rate
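The percentage-error check described above is simple enough to sketch; the counts below are hypothetical, not the study's data:

```python
def percentage_error(automatic, manual):
    """Per-detector error: (automatic - manual) / manual, in percent."""
    return 100.0 * (automatic - manual) / manual

# Hypothetical track counts for three detectors (the study used 223)
auto_counts = [412, 198, 305]
manual_counts = [420, 200, 300]

errors = [percentage_error(a, m) for a, m in zip(auto_counts, manual_counts)]
# Fraction of detectors whose automatic count agrees within +/-3%
fraction_within_3pct = sum(abs(e) <= 3.0 for e in errors) / len(errors)
```

With these illustrative numbers all three detectors fall inside the ±3% band reported for more than 97% of the real detectors.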
Maximum likelihood convolutional decoding (MCD) performance due to system losses
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy.
Cohen, E A K; Ober, R J
2013-12-15
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data.
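A minimal sketch of the weighted least-squares special case discussed above (covariances scalar multiples of the identity), assuming noise only in the target control points; the full errors-in-variables treatment in the paper is more involved, and all data here are synthetic:

```python
import numpy as np

def fit_affine_wls(src, dst, weights):
    """Weighted least-squares affine fit: dst ~ src @ A.T + t.

    src, dst: (n, 2) control-point arrays; weights: (n,) inverse-variance
    weights (per-point covariances assumed scalar multiples of the identity).
    This ignores errors in src, so it is only an illustrative special case.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # design matrix [x y 1]
    sw = np.sqrt(weights)[:, None]               # row weights
    beta, *_ = np.linalg.lstsq(sw * X, sw * dst, rcond=None)
    return beta[:2].T, beta[2]                   # A (2x2), t (2,)

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(20, 2))
A_true = np.array([[1.01, 0.02], [-0.02, 0.99]])
t_true = np.array([5.0, -3.0])
dst = src @ A_true.T + t_true                    # noiseless for the check
A_est, t_est = fit_affine_wls(src, dst, np.ones(20))
```

With noiseless points the fit recovers the affine transform exactly; in practice the per-point weights would come from the photon counts that set each point's localization variance.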
Limit of sensitivity of low-background counting equipment
International Nuclear Information System (INIS)
Homann, S.G.
1991-01-01
The Hazards Control Department's Radiological Measurements Laboratory (RML) analyzes many types of sample media in support of the Laboratory's health and safety program. The Department has determined that the equation for the minimum limit of sensitivity, MDC(α,β) = 2.71 + 3.29 (r_b t_s)^(1/2), is also adequate for RML counting systems with very low background levels. This paper reviews the normal distribution case and addresses the special case of determining the limit of sensitivity of a counting system when the background count rate is well known and small. In the latter case, we must use an exact test procedure based on the binomial distribution. However, the error in using the normal distribution for calculating a detection system's limit of sensitivity is not significant even as the total observed number of counts approaches or equals zero. 2 refs., 4 figs
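The normal-approximation limit and an exact low-background alternative can be sketched as follows; `poisson_critical_level` is an illustrative Poisson stand-in for the exact binomial procedure the paper uses, not the RML's actual method:

```python
from math import sqrt, exp

def currie_mdc(r_b, t_s):
    """Currie-style minimum detectable counts: 2.71 + 3.29 * sqrt(r_b * t_s).

    r_b is the background count rate, t_s the sample counting time; valid
    when the background is well characterized and not too small.
    """
    return 2.71 + 3.29 * sqrt(r_b * t_s)

def poisson_critical_level(mu_b, alpha=0.05):
    """Smallest integer n with P(N >= n | background mean mu_b) <= alpha.

    Exact-test alternative for very low backgrounds, where the normal
    approximation above starts to break down.
    """
    n, cdf, term = 0, 0.0, exp(-mu_b)
    while True:
        cdf += term                   # cdf = P(N <= n)
        if 1.0 - cdf <= alpha:
            return n + 1              # first n with tail prob <= alpha
        n += 1
        term *= mu_b / n

mdc = currie_mdc(1.0, 1.0)            # e.g. 1 count/s background, 1 s count
n_crit = poisson_critical_level(0.1)  # near-zero background case
```

For a mean background of 0.1 counts, two or more observed counts already exceed the 5% false-positive level, illustrating why the exact test matters at very low backgrounds.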
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
Energy Technology Data Exchange (ETDEWEB)
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
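A sketch of the simplest case discussed above: a single (unmixed) zero-truncated Poisson, fitted by maximum likelihood, with the Horvitz-Thompson population-size estimate. The capture frequencies are hypothetical:

```python
import math

def fit_zt_poisson(counts, iters=200):
    """MLE of the Poisson mean from zero-truncated counts (all >= 1).

    The likelihood equation is xbar = lam / (1 - exp(-lam)); we solve it
    by the fixed-point iteration lam <- xbar * (1 - exp(-lam)), which
    contracts for xbar > 1.
    """
    xbar = sum(counts) / len(counts)
    lam = xbar
    for _ in range(iters):
        lam = xbar * (1.0 - math.exp(-lam))
    return lam

def horvitz_thompson_n(counts):
    """Population size: n observed units scaled by 1 / P(detected)."""
    lam = fit_zt_poisson(counts)
    return len(counts) / (1.0 - math.exp(-lam))

counts = [1, 2, 1, 3, 1, 2, 1, 1]   # hypothetical capture frequencies
lam = fit_zt_poisson(counts)
n_hat = horvitz_thompson_n(counts)
```

The mixture results in the article guarantee that fitting in the "mixture of truncated densities" parametrization yields the same Horvitz-Thompson estimate, which this unmixed case only illustrates.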
Parameters and error of a theoretical model
International Nuclear Information System (INIS)
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
Road safety performance measures and AADT uncertainty from short-term counts.
Milligan, Craig; Montufar, Jeannette; Regehr, Jonathan; Ghanney, Bartholomew
2016-12-01
The objective of this paper is to enable better risk analysis of road safety performance measures by creating the first knowledge base on uncertainty surrounding annual average daily traffic (AADT) estimates when the estimates are derived by expanding short-term counts with the individual permanent counter method. Many road safety performance measures and performance models use AADT as an input. While there is an awareness that this input suffers from uncertainty, the uncertainty is not well known or accounted for. The paper samples data from a set of 69 permanent automatic traffic recorders in Manitoba, Canada, to simulate almost 2 million short-term counts over a five-year period. These short-term counts are expanded to AADT estimates by transferring temporal information from a directly linked nearby permanent count control station, and the resulting AADT values are compared to a known reference AADT to compute errors. The impacts of five factors on AADT error are considered: length of short-term count, number of short-term counts, use of weekday versus weekend counts, distance from a count to its expansion control station, and the AADT at the count site. The mean absolute transfer error for expanded AADT estimates is 6.7%, and this value varies by traffic pattern group from 5% to 10.5%. Reference percentiles of the error distribution show that almost all errors are between -20% and +30%. Error decreases substantially by using a 48-h count instead of a 24-h count, and only slightly by using two counts instead of one. Weekday counts are superior to weekend counts, especially if the count is only 24 h. Mean absolute transfer error increases with distance to the control station (elasticity 0.121, p=0.001) and increases with AADT (elasticity 0.857). These results support risk analysis of road safety performance measures that use AADT as inputs. Analytical frameworks for such analysis exist but are infrequently used in road safety because the evidence base on AADT uncertainty is not well developed.
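The individual-permanent-counter expansion can be sketched as a single scaling step; all numbers below are illustrative, not from the Manitoba data:

```python
def expand_short_count(short_adt, control_adt_same_days, control_aadt):
    """Expand a short-term count to an AADT estimate.

    The short count's average daily traffic is scaled by the ratio of the
    control station's AADT to the control station's traffic on the same
    days, transferring the control site's temporal pattern to the count
    site. Names and values are illustrative assumptions.
    """
    return short_adt * (control_aadt / control_adt_same_days)

# Hypothetical weekday 48-h count and its linked control station
aadt_est = expand_short_count(5200, 4800, 5000)
reference_aadt = 5500                       # known AADT at the count site
pct_error = 100.0 * (aadt_est - reference_aadt) / reference_aadt
```

Here the expanded estimate lands about 1.5% below the reference, comfortably inside the paper's reported error distribution.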
Prescription Errors in Psychiatry
African Journals Online (AJOL)
Arun Kumar Agnihotri
clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.
Track counting in radon dosimetry
International Nuclear Information System (INIS)
Fesenbeck, Ingo; Koehler, Bernd; Reichert, Klaus-Martin
2013-01-01
The newly developed, computer-controlled track counting system is capable of imaging and analyzing the entire area of nuclear track detectors. The high optical resolution allows a new analysis approach for the process of automated counting using digital image processing technologies. This way, higher exposed detectors can be evaluated reliably by an automated process as well. (orig.)
Implementation of a nonlinear filter for online nuclear counting
International Nuclear Information System (INIS)
Coulon, R.; Dumazert, J.; Kondrasovs, V.; Normand, S.
2016-01-01
Nuclear counting is a challenging task for nuclear instrumentation because of the stochastic nature of radioactivity. Event counting has to be processed and filtered to determine a stable count rate value and perform variation monitoring of the measured event. An innovative approach for nuclear counting is presented in this study, improving response time and maintaining count rate stability. Some nonlinear filters providing a local maximum likelihood estimation of the signal have been recently developed, which have been tested and compared with conventional linear filters. A nonlinear filter thus developed shows significant performance in terms of response time and measurement precision. The filter also presents the specificity of easy embedment into digital signal processor (DSP) electronics based on field-programmable gate arrays (FPGA) or microcontrollers, compatible with real-time requirements. - Highlights: • An efficient approach based on nonlinear filtering has been implemented. • The hypothesis test provides a local maximum likelihood estimation of the count rate. • The filter ensures an optimal compromise between precision and response time.
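A toy version of such a filter, assuming a simple reset-on-deviation rule in place of the authors' exact hypothesis test; the Poisson standard deviation of the running mean sets the decision threshold:

```python
from math import sqrt

class AdaptiveCountRateFilter:
    """Sketch of a nonlinear count-rate filter (not the paper's exact DSP).

    Counts per interval are averaged over a growing window; when a new
    sample deviates from the current estimate by more than k Poisson
    standard deviations, the window resets so the filter re-converges
    quickly after a genuine rate change, trading precision for response
    time only when the data demand it.
    """
    def __init__(self, k=3.0):
        self.k = k
        self.total = 0.0
        self.n = 0

    def update(self, counts):
        if self.n > 0:
            mean = self.total / self.n
            if abs(counts - mean) > self.k * sqrt(max(mean, 1.0)):
                self.total, self.n = 0.0, 0   # change detected: reset window
        self.total += counts
        self.n += 1
        return self.total / self.n

filt = AdaptiveCountRateFilter()
steady = [filt.update(100) for _ in range(10)][-1]   # settles on the rate
step = filt.update(200)                              # jump triggers a reset
```

During steady counting the window keeps growing, so precision improves; a step change is tracked within one sample, which is the precision/response-time compromise the abstract describes.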
Error studies of Halbach Magnets
Energy Technology Data Exchange (ETDEWEB)
Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)
2017-03-02
These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]. However, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried; if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires are tried. By “1e-4 accuracy” it is meant that the FOM defined by √(Σ_{n≥sextupole} (a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R = 23 mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires does not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slowly converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.
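The FOM can be sketched directly from its definition; the multipole values and the list indexing (starting at the sextupole term) are illustrative assumptions:

```python
from math import sqrt

def multipole_fom(a, b):
    """FOM = sqrt(sum over n >= sextupole of a_n^2 + b_n^2).

    a, b are lists of normal and skew multipole coefficients in field
    units, evaluated at the nominal beam radius, starting at the
    sextupole term (an indexing assumption for this sketch).
    """
    return sqrt(sum(an**2 + bn**2 for an, bn in zip(a, b)))

# Hypothetical residual multipoles after wire shimming
fom = multipole_fom([0.5, 0.2], [0.3, 0.1])
meets_1e4_spec = fom <= 1.0     # the "1e-4 accuracy" criterion (< 1 unit)
meets_1e3_spec = fom <= 10.0    # the looser "1e-3" classification
```

This makes explicit that the acceptance test is a quadrature sum over residual higher multipoles, compared against the 1-unit and 10-unit classification thresholds.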
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
Directory of Open Access Journals (Sweden)
S. Radhika
2016-04-01
Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
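For context, the fixed-step MCC-LMS baseline that the variable-step method improves on can be sketched as follows; the system, kernel width, and step size are hypothetical choices, and the Gaussian kernel factor exp(-e²/2σ²) is what shrinks updates for large impulsive errors:

```python
import numpy as np

def mcc_lms(x, d, taps=4, mu=0.05, sigma=1.0):
    """Fixed-step MCC-LMS system identification (illustrative baseline).

    Maximizing correntropy with a Gaussian kernel yields the LMS-like
    update w += mu * exp(-e^2 / (2 sigma^2)) * e * u, which down-weights
    outlier errors and so resists impulsive interference.
    """
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]       # regressor, newest sample first
        e = d[n] - w @ u
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.6, 0.3, -0.2, 0.1])           # unknown system to identify
d = np.convolve(x, h)[:len(x)]                # noiseless desired signal
w = mcc_lms(x, d)
```

With a fixed μ the designer must trade convergence speed against steady-state error; the paper's contribution is choosing μ per iteration by minimizing the mean square deviation instead.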
International Nuclear Information System (INIS)
Metcalfe, N.; Shanks, T.; Fong, R.; Jones, L.R.
1991-01-01
Using the Prime Focus CCD Camera at the Isaac Newton Telescope we have determined the form of the B and R galaxy number-magnitude count relations in 12 independent fields for 21 ≲ B_ccd ≲ 25 and 19 ≲ R_ccd ≲ 23.5. The average galaxy count relations lie in the middle of the wide range previously encompassed by photographic data. The field-to-field variation of the counts is small enough to define the faint (B ≈ 25) galaxy count to ±10 per cent, and this variation is consistent with that expected from galaxy clustering considerations. Our new data confirm that the B, and also the R, galaxy counts show evidence for strong galaxy luminosity evolution, and that the majority of the evolving galaxies are of moderately blue colour. (author)
Hanford whole body counting manual
International Nuclear Information System (INIS)
Palmer, H.E.; Brim, C.P.; Rieksts, G.A.; Rhoads, M.C.
1987-05-01
This document, a reprint of the Whole Body Counting Manual, was compiled to train personnel, document operation procedures, and outline quality assurance procedures. The current manual contains information on: the location, availability, and scope of services of Hanford's whole body counting facilities; the administrative aspect of the whole body counting operation; Hanford's whole body counting facilities; the step-by-step procedure involved in the different types of in vivo measurements; the detectors, preamplifiers and amplifiers, and spectroscopy equipment; the quality assurance aspect of equipment calibration and recordkeeping; data processing, record storage, results verification, report preparation, count summaries, and unit cost accounting; and the topics of minimum detectable amount and measurement accuracy and precision. 12 refs., 13 tabs
A method to correct the inherent flaw of the asynchronous direct counting circuit
International Nuclear Information System (INIS)
Wang Renfei; Liu Congzhan; Jin Yongjie; Zhang Zhi; Li Yanguo
2003-01-01
Crosstalk between adjacent channels is an inherent flaw of the asynchronous direct counting circuit, resulting from the randomness of the timing signal. In order to reduce the counting error caused by this crosstalk, the authors propose an effective correction method based on an analysis of the crosstalk mechanism.
Kartush, J M
1996-11-01
Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.
International Nuclear Information System (INIS)
Valentine, J.D.; Rana, A.E.
1996-01-01
The effect of approximating a continuous Gaussian distribution with histogrammed data is studied. The expressions for theoretical uncertainties in the centroid and full-width at half maximum (FWHM), as determined by calculation of moments, are derived using the error propagation method for a histogrammed Gaussian distribution. The results are compared with the corresponding pseudo-experimental uncertainties for computer-generated histogrammed Gaussian peaks to demonstrate the effect of binning the data. It is shown that increasing the number of bins in the histogram improves the continuous distribution approximation. For example, a FWHM spanning at least 9 and at least 12 bins is needed to reduce the pseudo-experimental standard deviation of the FWHM to within 5% and 1%, respectively, of the theoretical value for a peak containing 10,000 counts. In addition, the uncertainties in the centroid and FWHM as a function of peak area are studied. Finally, Sheppard's correction is applied to partially correct for the binning effect
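The moment-based centroid and FWHM estimates can be sketched as below, using FWHM = 2√(2 ln 2)·σ ≈ 2.355·σ for a Gaussian; the simulated 10,000-count peak is illustrative, with σ chosen so the FWHM spans about 12 bins:

```python
import numpy as np

def binned_gaussian_moments(counts, bin_centers):
    """Centroid and FWHM of a histogrammed peak from its first two moments.

    FWHM = 2 * sqrt(2 * ln 2) * sigma, which holds exactly only for a
    Gaussian shape; no Sheppard's correction is applied in this sketch.
    """
    total = counts.sum()
    centroid = (counts * bin_centers).sum() / total
    var = (counts * (bin_centers - centroid) ** 2).sum() / total
    return centroid, 2.0 * np.sqrt(2.0 * np.log(2.0) * var)

rng = np.random.default_rng(2)
events = rng.normal(loc=50.0, scale=12 / 2.355, size=10_000)
counts, edges = np.histogram(events, bins=100, range=(0, 100))
centers = 0.5 * (edges[:-1] + edges[1:])
centroid, fwhm = binned_gaussian_moments(counts, centers)
```

Repeating the simulation with coarser binning (fewer bins across the FWHM) inflates the scatter of the FWHM estimate, which is the binning effect the paper quantifies.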
Development of a stained cell nuclei counting system
Timilsina, Niranjan; Moffatt, Christopher; Okada, Kazunori
2011-03-01
This paper presents a novel cell counting system which exploits the Fast Radial Symmetry Transformation (FRST) algorithm [1]. The driving force behind our system is a research on neurogenesis in the intact nervous system of Manduca Sexta or the Tobacco Hornworm, which was being studied to assess the impact of age, food and environment on neurogenesis. The varying thickness of the intact nervous system in this species often yields images with inhomogeneous background and inconsistencies such as varying illumination, variable contrast, and irregular cell size. For automated counting, such inhomogeneity and inconsistencies must be addressed, which no existing work has done successfully. Thus, our goal is to devise a new cell counting algorithm for the images with non-uniform background. Our solution adapts FRST: a computer vision algorithm which is designed to detect points of interest on circular regions such as human eyes. This algorithm enhances the occurrences of the stained-cell nuclei in 2D digital images and negates the problems caused by their inhomogeneity. Besides FRST, our algorithm employs standard image processing methods, such as mathematical morphology and connected component analysis. We have evaluated the developed cell counting system with fourteen digital images of Tobacco Hornworm's nervous system collected for this study with ground-truth cell counts by biology experts. Experimental results show that our system has a minimum error of 1.41% and mean error of 16.68% which is at least forty-four percent better than the algorithm without FRST.
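A baseline counter using only the standard steps named above (thresholding, mathematical morphology, connected component analysis) without FRST can be sketched as follows; the synthetic image is illustrative, and such a baseline is exactly what struggles on the inhomogeneous backgrounds FRST is meant to handle:

```python
import numpy as np
from scipy import ndimage

def count_nuclei(image, threshold):
    """Threshold, clean up with binary opening, count connected components.

    A sketch of the conventional pipeline; it assumes a roughly uniform
    background, unlike the FRST-enhanced method in the paper.
    """
    binary = image > threshold
    cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    _, n_components = ndimage.label(cleaned)
    return n_components

# Synthetic test image: two bright 5x5 "nuclei" on a dark background
img = np.zeros((40, 40))
img[5:10, 5:10] = 1.0
img[20:25, 28:33] = 1.0
n = count_nuclei(img, threshold=0.5)
```

On real inhomogeneous images a single global threshold over- or under-segments, which motivates enhancing the nuclei with FRST before this counting stage.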
International Nuclear Information System (INIS)
Palmer, H.E.
1985-03-01
A state-of-the-art radiation detector system is described, consisting of six individually mounted intrinsic germanium planar detectors, each 20 cm² by 13 mm thick, mounted together such that the angle of the whole system can be changed to match the slope of the chest of the person being counted. The sensitivity of the system for counting uranium and plutonium in vivo and the procedures used in calibrating the system are also described. Some results of counts done on uranium mill workers are presented. 15 figs., 2 tabs
Some target assay uncertainties for passive neutron coincidence counting
International Nuclear Information System (INIS)
Ensslin, N.; Langner, D.G.; Menlove, H.O.; Miller, M.C.; Russo, P.A.
1990-01-01
This paper provides some target assay uncertainties for passive neutron coincidence counting of plutonium metal, oxide, mixed oxide, and scrap and waste. The target values are based in part on past user experience and in part on the estimated results from new coincidence counting techniques that are under development. The paper summarizes assay error sources and the new coincidence techniques, and recommends the technique that is likely to yield the lowest assay uncertainty for a given material type. These target assay uncertainties are intended to be useful for NDA instrument selection and assay variance propagation studies for both new and existing facilities. 14 refs., 3 tabs
Roteta, Miguel; Peyres, Virginia; Rodríguez Barquero, Leonor; García-Toraño, Eduardo; Arenillas, Pablo; Balpardo, Christian; Rodrígues, Darío; Llovera, Roberto
2012-09-01
The radionuclide (68)Ga is one of the few positron emitters that can be prepared in-house without the use of a cyclotron. It disintegrates to the ground state of (68)Zn partially by positron emission (89.1%) with a maximum energy of 1899.1 keV, and partially by electron capture (10.9%). This nuclide has been standardized in the frame of a cooperation project between the Radionuclide Metrology laboratories from CIEMAT (Spain) and CNEA (Argentina). Measurements involved several techniques: 4πβ-γ coincidences, integral gamma counting and Liquid Scintillation Counting using the triple to double coincidence ratio and the CIEMAT/NIST methods. Given the short half-life of the radionuclide assayed, a direct comparison between results from both laboratories was excluded and a comparison of experimental efficiencies of similar NaI detectors was used instead.
Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation
DEFF Research Database (Denmark)
Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik
2017-01-01
The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...
Quality control methods in accelerometer data processing: identifying extreme counts.
Directory of Open Access Journals (Sweden)
Carly Rich
Accelerometers are designed to measure plausible human activity; however, extremely high count values (EHCV) have been recorded in large-scale studies. Using population data, we develop methodological principles for establishing an EHCV threshold, propose a threshold to define EHCV in the ActiGraph GT1M, determine occurrences of EHCV in a large-scale study, identify device-specific error values, and investigate the influence of varying EHCV thresholds on daily vigorous PA (VPA). We estimated quantiles to analyse the distribution of all positive accelerometer count values obtained from 9005 seven-year-old children participating in the UK Millennium Cohort Study. A threshold to identify EHCV was derived by differentiating the quantile function. Data were screened for device-specific error count values and EHCV, and a sensitivity analysis was conducted to compare daily VPA estimates using three approaches to accounting for EHCV. Using our proposed threshold of ≥11,715 counts/minute to identify EHCV, we found that only 0.7% of all non-zero counts measured in MCS children were EHCV; in 99.7% of these children, EHCV comprised <1% of total non-zero counts. Only 11 MCS children (0.12% of the sample) returned accelerometers that contained negative counts; out of 237 such values, 211 counts were equal to -32,768 in one child. The medians of daily minutes spent in VPA obtained without excluding EHCV, and when using a higher threshold (≥19,442 counts/minute), were, respectively, 6.2% and 4.6% higher than when using our threshold (6.5 minutes; p<0.0001). Quality control processes should be undertaken during accelerometer fieldwork and prior to analysing data to identify monitors recording error values and EHCV. The proposed threshold will improve the validity of VPA estimates in children's studies using the ActiGraph GT1M by ensuring only plausible data are analysed. These methods can be applied to define appropriate EHCV thresholds for different accelerometer models.
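The screening step can be sketched as a simple partition of minute-level counts, using the threshold and error value reported above; the input counts are illustrative:

```python
EHCV_THRESHOLD = 11_715     # counts/minute, the proposed GT1M threshold
ERROR_VALUE = -32_768       # device-specific error value seen in the study

def screen_epoch_counts(counts):
    """Partition minute-level counts into plausible values, extremely high
    count values (EHCV), and negative device-error values, so that only
    plausible data enter activity summaries."""
    plausible = [c for c in counts if 0 <= c < EHCV_THRESHOLD]
    ehcv = [c for c in counts if c >= EHCV_THRESHOLD]
    errors = [c for c in counts if c < 0]
    return plausible, ehcv, errors

plausible, ehcv, errors = screen_epoch_counts([0, 250, 12_000, ERROR_VALUE, 800])
```

Running this over each monitor's epoch file flags both EHCV minutes and monitors returning error values before VPA is computed.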
Complete Blood Count (For Parents)
... Kids Deal With Injections and Blood Tests Blood Culture Anemia Blood Test: Basic Metabolic Panel (BMP) Blood Test: Hemoglobin Basic Blood Chemistry Tests Word! Complete Blood Count (CBC) Medical Tests and Procedures ( ...
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The Make My Trip Count (MMTC) commuter survey, conducted in September and October 2015 by GBA, the Pittsburgh 2030 District, and 10 other regional transportation...
Counting Triangles to Sum Squares
DeMaio, Joe
2012-01-01
Counting complete subgraphs of three vertices in complete graphs yields combinatorial arguments for identities for sums of squares of integers, odd integers and even integers, and for sums of the triangular numbers.
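The identities that such triangle-counting arguments establish can be checked numerically. The abstract does not spell out the binomial forms, so the two below are the standard ones and their pairing with the paper's argument is an assumption.

```python
from math import comb

# Standard identities provable by counting 3-vertex complete subgraphs:
#   sum_{k=1}^{n} k^2  = C(n+1, 3) + C(n+2, 3)
#   sum_{k=1}^{n} T_k  = C(n+2, 3),   where T_k = k(k+1)/2 is triangular

def sum_of_squares(n):
    return sum(k * k for k in range(1, n + 1))

def sum_of_triangular(n):
    return sum(k * (k + 1) // 2 for k in range(1, n + 1))

for n in range(1, 50):
    assert sum_of_squares(n) == comb(n + 1, 3) + comb(n + 2, 3)
    assert sum_of_triangular(n) == comb(n + 2, 3)
```

For instance, at n = 10 the left-hand sides are 385 and 220, matching C(11,3) + C(12,3) and C(12,3) respectively.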
Relationship of blood and milk cell counts with mastitic pathogens in Murrah buffaloes
Directory of Open Access Journals (Sweden)
C. Singh
2010-02-01
Full Text Available The present study was undertaken to examine the effect of mastitic pathogens on the blood and milk cell counts of Murrah buffaloes. Milk and blood samples were collected from 9 mastitic Murrah buffaloes. The total leucocyte counts (TLC) and differential leucocyte counts (DLC) in blood were within the normal range, and there was a non-significant change in blood counts irrespective of the mastitic pathogen. Normal milk quarter samples had significantly (P<0.01) lower somatic cell counts (SCC). Lymphocytes were significantly higher in normal milk samples, whereas infected samples had a significant increase (P<0.01) in milk neutrophils. S. aureus-infected buffaloes had the highest milk SCC, followed by E. coli and S. agalactiae. Influx of neutrophils into the buffalo mammary gland was greatest for S. agalactiae, followed by E. coli and S. aureus. The study indicated that the level of mastitis had no effect on blood counts but influenced the milk SCC of normal quarters.
Automation of Sample Transfer and Counting on Fast Neutron Activation System
International Nuclear Information System (INIS)
Dewita; Budi-Santoso; Darsono
2000-01-01
The automation of sample transfer and counting concerns the process of moving the sample to the activation and counting positions, which was previously done manually by switch and has now been developed into automatically programmed logic instructions. The development involved constructing the electronics hardware and the software for that communication. Transfer time is measured in seconds and was performed automatically with an error of 1.6 ms. The counting and activation times are set by the user in seconds and minutes; the execution error on minutes was 8.2 ms. This developed system will make it possible to measure short half-life elements and to perform cyclic activation processes. (author)
The error in total error reduction.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2014-02-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
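The contrast between the two assumptions can be sketched with toy update rules on a single compound trial. The function names, learning rate, and weights below are illustrative, not the models actually fitted in the paper.

```python
# One compound trial: cues A and B present, outcome lambda = 1.
def ter_update(weights, cues, outcome, lr=0.1):
    # Total error reduction (Rescorla-Wagner style): one shared error term
    # computed from the summed prediction of all cues present on the trial.
    prediction = sum(weights[c] for c in cues)
    error = outcome - prediction
    return {c: (w + lr * error if c in cues else w) for c, w in weights.items()}

def ler_update(weights, cues, outcome, lr=0.1):
    # Local error reduction: each cue's error uses only that cue's own
    # prediction, so cues do not compete for associative strength.
    return {c: (w + lr * (outcome - w) if c in cues else w)
            for c, w in weights.items()}

w0 = {"A": 0.6, "B": 0.6}
w_ter = ter_update(w0, {"A", "B"}, outcome=1.0)   # summed prediction 1.2 -> negative error
w_ler = ler_update(w0, {"A", "B"}, outcome=1.0)   # each cue under-predicts -> positive error
```

With these numbers the compound over-predicts under TER (weights decrease) while each cue individually under-predicts under LER (weights increase), which is the kind of diverging prediction the model comparison exploits.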
Counting Word Frequencies with Python
Directory of Open Access Journals (Sweden)
William J. Turkel
2012-07-01
Full Text Available Your list is now clean enough that you can begin analyzing its contents in meaningful ways. Counting the frequency of specific words in the list can provide illustrative data. Python has an easy way to count frequencies, but it requires the use of a new type of variable: the dictionary. Before you begin working with a dictionary, consider the processes used to calculate frequencies in a list.
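A minimal version of the dictionary-based counting the lesson describes: loop over the word list and use each word as a key, incrementing its stored count.

```python
wordlist = ["it", "was", "the", "best", "of", "times", "it", "was",
            "the", "worst", "of", "times"]

# Build a word -> frequency dictionary.
frequencies = {}
for word in wordlist:
    frequencies[word] = frequencies.get(word, 0) + 1

# Sort word-frequency pairs, most frequent first.
sorted_pairs = sorted(frequencies.items(), key=lambda p: p[1], reverse=True)
```

The standard library's `collections.Counter(wordlist)` does the same thing in one call; the explicit loop above just makes the dictionary mechanics visible.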
Liquid scintillation counting of chlorophyll
International Nuclear Information System (INIS)
Fric, F.; Horickova, B.; Haspel-Horvatovic, E.
1975-01-01
A precise and reproducible method of liquid scintillation counting was worked out for measuring the radioactivity of ¹⁴C-labelled chlorophyll a and chlorophyll b solutions without previous bleaching. The spurious count rate caused by luminescence of the scintillant-chlorophyll system is eliminated by using a suitable scintillant and by measuring the radioactivity at 4 to 8 °C after an appropriate time of dark adaptation. Bleaching of the chlorophyll solutions is necessary only for measuring very low radioactivity. (author)
Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano
2013-01-01
Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...
National Research Council Canada - National Science Library
Byrne, Michael D
2006-01-01
.... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...
International Nuclear Information System (INIS)
Wahlstroem, B.
1993-01-01
Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. To avoid human errors it is necessary to adapt systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but they are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic effort. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)
PVO VENUS ONMS BROWSE SUPRTHRML ION MAX COUNT RATE 12S V1.0
National Aeronautics and Space Administration — This data set contains PVO Neutral Mass Spectrometer superthermal ion data. Each record contains the maximum count rate per second in a 12 second period beginning...
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
International Nuclear Information System (INIS)
Balpardo, C.; Capoulat, M.E.; Rodrigues, D.; Arenillas, P.
2010-01-01
The nuclide ²⁴¹Am decays by alpha emission to ²³⁷Np. Most of the decays (84.6%) populate the excited level of ²³⁷Np with an energy of 59.54 keV. Digital coincidence counting was applied to standardize a solution of ²⁴¹Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter, and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined-solid-angle counting taking into account activity inhomogeneity in the active deposit. The results show consistency between the three methods within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods.
Calibration for plutonium-238 lung counting at Mound Laboratory
International Nuclear Information System (INIS)
Tomlinson, F.K.
1976-01-01
The lung counting facility at Mound Laboratory was calibrated for making plutonium-238 lung deposition assessments in the fall of 1969. Phoswich detectors have been used since that time; however, the technique of calibration has improved considerably. The current technique for calibrating the lung counter is described, as well as the method of error analysis and the determination of the minimum detectable activity. A Remab hybrid phantom is used along with an attenuation curve derived from plutonium-loaded lungs and ground-beef absorber measurements. The errors that are included in an analysis, as well as those that are excluded, are described. The method of calculating the minimum detectable activity is also included
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and on the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimate of the time delay. The method has been validated in experiments, where it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error was less than 1% of the distance between sensors; for example, the error was no greater than 3 m for 300 m of underground pipeline. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to a windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
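The underlying arrival-time-difference estimate can be sketched with a plain cross-correlation peak search. The maximum likelihood windowing described in the abstract is not reproduced here, and the sampling rate and signals are synthetic.

```python
import numpy as np

fs = 1000.0                      # sampling rate in Hz (illustrative)
true_delay = 25                  # delay between sensors, in samples
rng = np.random.default_rng(0)

# Sensor 1 sees the leak noise; sensor 2 sees it delayed, plus extra noise.
noise = rng.standard_normal(2000)
s1 = noise
s2 = np.roll(noise, true_delay) + 0.1 * rng.standard_normal(2000)

# Full cross-correlation; the peak index gives the lag in samples.
corr = np.correlate(s2, s1, mode="full")
lag = int(np.argmax(corr)) - (len(s1) - 1)
delay_seconds = lag / fs
```

Multiplying `delay_seconds` by the elastic wave speed would then give the position offset of the leak between the two sensors, as in the time-arrival-difference method.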
Metcalfe, Janet
2017-01-01
Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…
Design Study of an Incinerator Ash Conveyor Counting System - 13323
International Nuclear Information System (INIS)
Jaederstroem, Henrik; Bronson, Frazier
2013-01-01
A design study has been performed for a system to measure the Cs-137 activity in ash from an incinerator. Radioactive ash, expected to contain both Cs-134 and Cs-137, will be transported on a conveyor belt at 0.1 m/s. The objective of the counting system is to determine the Cs-137 activity and direct the ash to the correct stream after a diverter. The decision levels range from 8000 to 400,000 Bq/kg, and the decision error should be as low as possible. The decision error depends on the total measurement uncertainty, which in turn depends on the counting statistics and the uncertainty in the efficiency of the geometry. For the low-activity decision it is necessary to know the efficiency in order to determine whether the signal from the Cs-137 is above the minimum detectable activity and generates enough counts to reach the desired precision. For the higher-activity decision, the uncertainty of the efficiency needs to be understood to minimize decision errors. The total efficiency of the detector is needed to determine whether the detector will be able to operate at the count rate of the highest expected activity. The design study presented in this paper describes how the objectives of the monitoring system were established, how the choice of detector was made, and how ISOCS (In Situ Object Counting System) mathematical modeling was used to calculate the efficiency. The ISOCS uncertainty estimator (IUE) was used to determine which parameters of the ash were important to know accurately in order to minimize the uncertainty of the efficiency. The examined parameters include the height of the ash on the conveyor belt, the matrix composition and density, and the relative efficiency of the detector. (authors)
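The counting-statistics side of the decision error can be illustrated with the standard Currie formulae for the critical level and detection limit. The count and background figures below are illustrative, not the ISOCS-modelled values from the study.

```python
from math import sqrt

def relative_uncertainty(net_counts):
    # 1-sigma relative counting uncertainty of a (background-free) Poisson count
    return 1.0 / sqrt(net_counts)

def currie_limits(background_counts):
    # Currie (1968) decision quantities, in counts, for paired background:
    # critical level Lc (5% false-positive) and detection limit Ld.
    lc = 1.645 * sqrt(2 * background_counts)
    ld = 2.71 + 2 * lc
    return lc, ld

rel_u = relative_uncertainty(10_000)   # 1% relative uncertainty at 10^4 counts
lc, ld = currie_limits(400)            # e.g. 400 background counts in the peak region
```

For a given counting time and efficiency, comparing the expected net counts at each Bq/kg decision level against `ld` is one way to check whether the low-activity decision can be made with acceptable error.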
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Hanford whole body counting manual
International Nuclear Information System (INIS)
Palmer, H.E.; Rieksts, G.A.; Lynch, T.P.
1990-06-01
This document describes the Hanford Whole Body Counting Program as it is administered by Pacific Northwest Laboratory (PNL) in support of the US Department of Energy--Richland Operations Office (DOE-RL) and its Hanford contractors. Program services include providing in vivo measurements of internally deposited radioactivity in Hanford employees (or visitors). Specific chapters of this manual deal with the following subjects: program operational charter, authority, administration, and practices, including interpreting applicable DOE Orders, regulations, and guidance into criteria for in vivo measurement frequency, etc., for the plant-wide whole body counting services; state-of-the-art facilities and equipment used to provide the best in vivo measurement results possible for the approximately 11,000 measurements made annually; procedures for performing the various in vivo measurements at the Whole Body Counter (WBC) and related facilities including whole body counts; operation and maintenance of counting equipment, quality assurance provisions of the program, WBC data processing functions, statistical aspects of in vivo measurements, and whole body counting records and associated guidance documents. 16 refs., 48 figs., 22 tabs
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W GIACOBBE, Chicago Research Center/American Air Liquide. … iron core compression due to the weight of non-ferrous matter overlying the iron cores within large … thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system. [fr]
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm has been established to solve the nonlinear system that appears in the present study. Results of computer simulation showed the effectiveness of the present theory. (author)
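A positivity-preserving multiplicative update for the Poisson likelihood can be sketched as follows. This is only the ML-EM core, without the maximum entropy prior the paper combines it with, and the response matrix and spectrum are toy values.

```python
import numpy as np

def mlem_unfold(R, d, n_iter=200):
    """Unfold counts d = R @ phi by maximizing the Poisson likelihood.

    The multiplicative update keeps phi strictly positive at every step,
    which is the positivity property the abstract emphasizes.
    """
    R = np.asarray(R, float)
    d = np.asarray(d, float)
    phi = np.full(R.shape[1], d.sum() / R.sum())   # flat positive start
    norm = R.sum(axis=0)                            # sensitivity per bin
    for _ in range(n_iter):
        expected = R @ phi
        phi *= (R.T @ (d / expected)) / norm        # stays positive
    return phi

R = np.array([[0.8, 0.2],
              [0.2, 0.8]])        # toy 2x2 detector response
true_phi = np.array([100.0, 50.0])
d = R @ true_phi                  # noise-free "measured" counts
phi_hat = mlem_unfold(R, d)
```

With exact data and a well-conditioned response, the iteration recovers the true spectrum; with real Poisson-noisy data, a regularizer (such as the entropy prior in the paper) is what controls the noise amplification.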
Influence of the statistical distribution of bioassay measurement errors on the intake estimation
International Nuclear Information System (INIS)
Lee, T. Y; Kim, J. K
2006-01-01
The purpose of this study is to provide the guidance necessary for selecting an error distribution, by analyzing the influence of the statistical distribution assumed for bioassay measurement errors on the intake estimate. For this purpose, intakes were estimated using the maximum likelihood method for the cases where the error distribution is normal or lognormal, and the estimated intakes for the two distributions were compared. According to the results of this study, when measurement results for lung retention are somewhat greater than the limit of detection, the distribution type has negligible influence on the results. By contrast, for measurements of the daily excretion rate, the results obtained under the lognormal assumption were 10% higher than those obtained under the normal assumption. In view of these facts, where the uncertainty is governed by counting statistics, the distribution type has no influence on the intake estimate; where other uncertainty components predominate, it is clearly desirable to estimate the intake assuming a lognormal distribution
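The simplest case of the normal-versus-lognormal contrast can be sketched as follows: for repeated measurements of a single quantity, the normal-error MLE is the arithmetic mean, while the lognormal-error MLE of the median is the geometric mean. The data are illustrative only; the size and direction of the difference in a full intake calculation depend on the retention model and measurement scatter.

```python
import numpy as np

# Hypothetical repeated bioassay results (e.g. daily excretion rate, arbitrary units)
measurements = np.array([0.8, 1.0, 1.3, 2.1, 0.6])

mle_normal = measurements.mean()                     # arithmetic mean
mle_lognormal = np.exp(np.log(measurements).mean())  # geometric mean (lognormal median)

relative_difference = (mle_normal - mle_lognormal) / mle_lognormal
```

Even this toy case shows a roughly 10% gap between the two estimates for moderately skewed data, which is why the choice of error model matters once counting statistics are no longer dominant.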
A New Method for Calculating Counts in Cells
Szapudi, István
1998-04-01
In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
Temporal trends in sperm count
DEFF Research Database (Denmark)
Levine, Hagai; Jørgensen, Niels; Martino-Andrade, Anderson
2017-01-01
BACKGROUND: Reported declines in sperm counts remain controversial today and recent trends are unknown. A definitive meta-analysis is critical given the predictive value of sperm count for fertility, morbidity and mortality. OBJECTIVE AND RATIONALE: To provide a systematic review and meta-regression … Following a predefined protocol, 7518 abstracts were screened and 2510 full articles reporting primary data on SC were reviewed. A total of 244 estimates of SC and TSC from 185 studies of 42 935 men who provided semen samples in 1973-2011 were extracted for meta-regression analysis, as well as information on years … 006, respectively). WIDER IMPLICATIONS: This comprehensive meta-regression analysis reports a significant decline in sperm counts (as measured by SC and TSC) between 1973 and 2011, driven by a 50-60% decline among men unselected by fertility from North America, Europe, Australia and New Zealand. Because …
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Teranishi, Nobukazu; Theuwissen, Albert; Stoppa, David; Charbon, Edoardo
2017-01-01
The field of photon-counting image sensors is advancing rapidly with the development of various solid-state image sensor technologies including single photon avalanche detectors (SPADs) and deep-sub-electron read noise CMOS image sensor pixels. This foundational platform technology will enable opportunities for new imaging modalities and instrumentation for science and industry, as well as new consumer applications. Papers discussing various photon-counting image sensor technologies and selected new applications are presented in this all-invited Special Issue.
Preventing Errors in Laterality
Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie
2014-01-01
An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...
International Nuclear Information System (INIS)
Reason, J.
1988-01-01
This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident-pathogen view of accident causation and seeks to identify the various system pathways along which errors and violations may be propagated
Spent fuel bundle counter sequence error manual - BRUCE NGS
International Nuclear Information System (INIS)
Nicholson, L.E.
1992-01-01
The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However, if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpreting the various sequence error messages that can occur and suggests the probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included, however, where experience has indicated that several faults can occur simultaneously
Spent fuel bundle counter sequence error manual - DARLINGTON NGS
International Nuclear Information System (INIS)
Nicholson, L.E.
1992-01-01
The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However, if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpreting the various sequence error messages that can occur and suggests the probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included, however, where experience has indicated that several faults can occur simultaneously
Reduction of weighing errors caused by tritium decay heating
International Nuclear Information System (INIS)
Shaw, J.F.
1978-01-01
The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%.
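The heat source behind the weighing error above is the tritium beta decay itself. As a rough check, the specific decay power of tritium can be computed from its half-life and mean beta energy; the constants below are standard physical data, not taken from the abstract, and the calculation is only an order-of-magnitude sketch.

```python
import math

# Back-of-the-envelope decay heat of tritium per gram.
# Assumed physical constants (standard values, not from the abstract):
HALF_LIFE_S = 12.32 * 365.25 * 86400   # tritium half-life, ~12.32 years, in seconds
MOLAR_MASS = 3.016                     # g/mol for tritium atoms
AVOGADRO = 6.022e23
E_AVG_J = 5.7e3 * 1.602e-19            # mean beta energy, ~5.7 keV, in joules

def tritium_decay_power_per_gram():
    lam = math.log(2) / HALF_LIFE_S        # decay constant, 1/s
    atoms_per_gram = AVOGADRO / MOLAR_MASS
    activity = lam * atoms_per_gram        # decays per second per gram
    return activity * E_AVG_J              # watts per gram

print(f"{tritium_decay_power_per_gram():.3f} W/g")  # ~0.32 W/g
```

About a third of a watt per gram is enough to warm a small mix vessel measurably, which is consistent with the buoyancy-type weighing error the abstract reports.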
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
Help prevent hospital errors
MedlinePlus Medical Encyclopedia
2012-03-01
This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...
International Nuclear Information System (INIS)
Jeach, J.L.
1976-01-01
When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method; rather, the estimate given by equation (3) is valid in this instance. 5 tables
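The coarse-grouping effect described above is easy to reproduce numerically. The sketch below (all numbers hypothetical, and unrelated to the MERDA program itself) rounds simulated weighings to an increment larger than the weighing-error spread and shows that the rounding error then correlates with the weighing error and has a non-zero mean.

```python
import numpy as np

# Simulate coarse data grouping: rounding increment larger than the
# weighing-error standard deviation. All numbers are hypothetical.
rng = np.random.default_rng(0)
true_weight = 10.03          # true mass, arbitrary units
sigma = 0.02                 # weighing (random) error std
grid = 0.1                   # coarse rounding increment

readings = true_weight + rng.normal(0.0, sigma, 100_000)
rounded = np.round(readings / grid) * grid
rounding_err = rounded - readings
weighing_err = readings - true_weight

# With grouping this coarse, most readings collapse into one cell, so the
# rounding error has a clearly non-zero mean and is correlated with the
# weighing error.
corr = np.corrcoef(rounding_err, weighing_err)[0, 1]
print(f"mean rounding error = {rounding_err.mean():+.4f}")
print(f"corr(rounding, weighing) = {corr:+.3f}")
```

With a finer grid (several cells spanned by the weighing error), both effects largely disappear, matching the abstract's four-or-more-cells rule of thumb.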
Spotting software errors sooner
International Nuclear Information System (INIS)
Munro, D.
1989-01-01
Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)
International Nuclear Information System (INIS)
Kop, L.
2001-01-01
On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for its customers. It appeared that in the year 2000 many small, but also some large, errors were discovered in the bills of 42 businesses.
Medical Errors Reduction Initiative
National Research Council Canada - National Science Library
Mutter, Michael L
2005-01-01
The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden failure of pumps. Determination of the maximum water hammer is considered one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species.
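The selection step behind the "maximum gene-support tree" reduces to a frequency count over per-gene tree topologies. The toy sketch below illustrates only that counting step; the Newick strings are made-up stand-ins, and it assumes identical strings denote identical topologies (real pipelines would canonicalize the trees first).

```python
from collections import Counter

# Hypothetical per-gene trees (Newick strings), one per orthologous gene.
gene_trees = [
    "((A,B),(C,D));", "((A,B),(C,D));", "((A,C),(B,D));",
    "((A,B),(C,D));", "((A,D),(B,C));", "((A,B),(C,D));",
]

def maximum_gene_support_tree(trees):
    """Return the topology supported by the largest number of genes."""
    counts = Counter(trees)
    topology, support = counts.most_common(1)[0]
    return topology, support

topology, support = maximum_gene_support_tree(gene_trees)
print(topology, support)   # ((A,B),(C,D)); supported by 4 of 6 genes
```

The weighted variant (WMGS) would replace the plain count with a sum of per-gene weights, e.g. sequence lengths.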
DEFF Research Database (Denmark)
Rasmussen, Jens
1983-01-01
An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the beam transported through the matching section and injected into Linac-1 is discussed.
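The bookkeeping behind an average-beam-power estimate of this kind is simple: charge per pulse, times the beam energy expressed in volts, times the repetition rate. The helper below is a sketch of that arithmetic only; the example numbers are hypothetical and not taken from the note.

```python
# Average beam power from per-pulse charge, beam energy and repetition rate.
def average_beam_power(charge_nC, energy_GeV, rep_rate_Hz):
    """P [W] = Q [C] * E [eV, read as volts] * f [Hz]."""
    return charge_nC * 1e-9 * energy_GeV * 1e9 * rep_rate_Hz

# Hypothetical numbers: 1 nC per pulse at 14 GeV, 120 Hz.
print(average_beam_power(1.0, 14.0, 120))  # ~1680 W
```

A maximum *credible* power analysis then asks how far each of these three factors could credibly be pushed with all protection devices failed.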
Liquid scintillation, counting, and compositions
International Nuclear Information System (INIS)
Sena, E.A.; Tolbert, B.M.; Sutula, C.L.
1975-01-01
The emissions of radioactive isotopes in both aqueous and organic samples can be measured by liquid scintillation counting in micellar systems. The micellar systems are made up of scintillation solvent, scintillation solute, and a mixture of surfactants, preferably at least one of which is relatively oil-soluble and water-insoluble and another of which is relatively water-soluble and oil-insoluble.
Phase space quark counting rule
International Nuclear Information System (INIS)
Wei-gin, C.; Lo, S.
1980-01-01
A simple quark counting rule based on phase space considerations suggested previously is used to fit all 39 recent experimental data points on inclusive reactions. Parameter-free relations are found to agree with experiments. Excellent detailed fits are obtained for 11 inclusive reactions.
Counting a Culture of Mealworms
Ashbrook, Peggy
2007-01-01
Math is not the only topic that will be discussed when young children are asked to care for and count "mealworms," a type of insect larvae (just as caterpillars are the babies of butterflies, these larvae are babies of beetles). The following activity can take place over two months as the beetles undergo metamorphosis from larvae to adults. As the…
Counting problems for number rings
Brakenhoff, Johannes Franciscus
2009-01-01
In this thesis we look at three counting problems connected to orders in number fields. First we study the probability that for a random polynomial f in Z[X] the ring Z[X]/f is the maximal order in Q[X]/f. Connected to this is the probability that a random polynomial has a squarefree
On Counting the Rational Numbers
Almada, Carlos
2010-01-01
In this study, we show how to construct a function from the set N of natural numbers that explicitly counts the set Q[superscript +] of all positive rational numbers using a very intuitive approach. The function has the appeal of Cantor's function and it has the advantage that any high school student can understand the main idea at a glance…
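One concrete, explicit way to count the positive rationals (a standard construction, not necessarily the one used in the article) is the Calkin-Wilf sequence, defined by the recurrence q_{n+1} = 1 / (2*floor(q_n) - q_n + 1), which visits every positive rational exactly once:

```python
from fractions import Fraction
from itertools import islice

def calkin_wilf():
    """Enumerate every positive rational exactly once, starting from 1."""
    q = Fraction(1, 1)
    while True:
        yield q
        # floor of a positive Fraction via integer division
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

first = list(islice(calkin_wilf(), 8))
print([str(x) for x in first])  # ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4']
```

The recurrence keeps every term in lowest form automatically, so the listing has no repetitions, which is exactly the property an explicit counting function from N onto Q+ needs.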
Vote Counting as Mathematical Proof
DEFF Research Database (Denmark)
Schürmann, Carsten; Pattinson, Dirk
2015-01-01
then consists of a sequence (or tree) of rule applications and provides an independently checkable certificate of the validity of the result. This reduces the need to trust, or otherwise verify, the correctness of the vote counting software once the certificate has been validated. Using a rule...
Harman, Nate
2016-01-01
We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.
Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.
2010-01-01
The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC ...
Zhang, Guoqing; Lina, Liu
2018-02-01
An ultra-fast photon counting method is proposed based on the charge integration of output electrical pulses of passive-quenching silicon photomultipliers (SiPMs). The results of numerical analysis with actual parameters of SiPMs show that the maximum photon counting rate of a state-of-the-art passive-quenching SiPM can reach ~THz levels, which is much larger than that of existing photon counting devices. The experimental procedure is proposed based on this method. This photon counting regime of SiPMs is promising in many fields, such as large-dynamic-range light power detection.
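The core of the charge-integration idea is that the photon number is recovered not by resolving individual pulses but by dividing the integrated output charge by the single-photoelectron charge (gain times the elementary charge). The sketch below illustrates that conversion only; the gain value is an assumption, not a parameter from the paper.

```python
# Photon counting by charge integration for a SiPM-like detector.
E_CHARGE = 1.602e-19   # elementary charge, C
GAIN = 1.0e6           # assumed SiPM gain (hypothetical)

def photons_from_charge(integrated_charge_C):
    """Estimate the number of detected photons from the integrated charge."""
    single_pe_charge = GAIN * E_CHARGE
    return integrated_charge_C / single_pe_charge

# A burst of 1000 detected photons corresponds to ~0.16 pC of charge:
q = 1000 * GAIN * E_CHARGE
print(round(photons_from_charge(q)))  # 1000
```

Because integration sidesteps pulse pile-up, the counting rate is limited by the integrator bandwidth rather than by pulse resolution, which is what allows the very high rates the abstract claims.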
2008-01-01
One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177
Thermodynamics of Error Correction
Directory of Open Access Journals (Sweden)
Pablo Sartori
2015-12-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Error exponents for entanglement concentration
International Nuclear Information System (INIS)
Hayashi, Masahito; Koashi, Masato; Matsumoto, Keiji; Morikoshi, Fumiaki; Winter, Andreas
2003-01-01
Consider entanglement concentration schemes that convert n identical copies of a pure state into a maximally entangled state of a desired size with success probability being close to one in the asymptotic limit. We give the distillable entanglement, the number of Bell pairs distilled per copy, as a function of an error exponent, which represents the rate of decrease in failure probability as n tends to infinity. The formula fills the gap between the least upper bound of distillable entanglement in probabilistic concentration, which is the well-known entropy of entanglement, and the maximum attained in deterministic concentration. The method of types in information theory enables the detailed analysis of the distillable entanglement in terms of the error rate. In addition to the probabilistic argument, we consider another type of entanglement concentration scheme, where the initial state is deterministically transformed into a (possibly mixed) final state whose fidelity to a maximally entangled state of a desired size converges to one in the asymptotic limit. We show that the same formula as in the probabilistic argument is valid for the argument on fidelity by replacing the success probability with the fidelity. Furthermore, we also discuss entanglement yield when optimal success probability or optimal fidelity converges to zero in the asymptotic limit (strong converse), and give the explicit formulae for those cases
On the mean squared error of the ridge estimator of the covariance and precision matrix
van Wieringen, Wessel N.
2017-01-01
For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.
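The regression version of the claim above is easy to check by simulation: for a suitably chosen penalty, the ridge estimator trades a little bias for a large variance reduction and beats least squares (the ML estimator under Gaussian noise) in mean squared error. The dimensions, penalty, and noise level below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam, sigma = 30, 10, 5.0, 2.0
beta = rng.normal(0, 0.5, p)            # fixed "true" coefficients

mse_ols, mse_ridge, reps = 0.0, 0.0, 500
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(0, sigma, n)
    xtx = X.T @ X
    b_ols = np.linalg.solve(xtx, X.T @ y)                    # ML / least squares
    b_ridge = np.linalg.solve(xtx + lam * np.eye(p), X.T @ y)  # ridge
    mse_ols += np.sum((b_ols - beta) ** 2) / reps
    mse_ridge += np.sum((b_ridge - beta) ** 2) / reps

print(f"MSE(least squares) = {mse_ols:.3f}")
print(f"MSE(ridge)         = {mse_ridge:.3f}")   # smaller for this lambda
```

The paper's contribution is the analogous (and less obvious) statement for ridge-type estimators of covariance and precision matrices.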
Error prevention at a radon measurement service laboratory
International Nuclear Information System (INIS)
Cohen, B.L.; Cohen, F.
1989-01-01
This article describes the steps taken at a high volume counting laboratory to avoid human, instrument, and computer errors. The laboratory analyzes diffusion barrier charcoal adsorption canisters which have been used to test homes and commercial buildings. A series of computer and human cross-checks are utilized to assure that accurate results are reported to the correct client
Maximum likelihood versus likelihood-free quantum system identification in the atom maser
International Nuclear Information System (INIS)
Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin
2014-01-01
We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)
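The likelihood-free (ABC) idea the abstract describes can be shown in miniature: draw parameters from a prior, simulate data, and keep only the draws whose summary statistic lands close to the observed one. The sketch below uses a toy Poisson model rather than the atom maser, with the sample mean as the (here sufficient) summary statistic; all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 3.0
observed = rng.poisson(true_rate, 200)   # "measurement data"
obs_mean = observed.mean()               # chosen summary statistic

accepted = []
for _ in range(2000):
    theta = rng.uniform(0.0, 10.0)             # draw from a flat prior
    sim_mean = rng.poisson(theta, 50).mean()   # simulate at this parameter
    if abs(sim_mean - obs_mean) < 0.2:         # accept if summaries are close
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
print(f"ABC posterior mean ~ {posterior_mean:.2f} (true rate 3.0)")
```

The quality of the ABC posterior hinges on how much information the chosen statistics retain, which is exactly the point the paper makes about atom counts versus correlation statistics for the Rabi angle.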
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...
Compact disk error measurements
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
Determining and monitoring of maximum permissible power for HWRR-3
International Nuclear Information System (INIS)
Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen
1987-01-01
The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determining and monitoring of maximum permissible power for HWRR-3. The calculating method is described, and the results of the calculation and error analysis are also given. On-line calculation and real-time monitoring have been realized at the heavy water reactor, providing the reactor with real-time and reliable supervision. This makes operation convenient and increases reliability.
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
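A crude upper-bound variant of the energy-balance argument above: if the absorbed shortwave flux had to be balanced by longwave emission alone (sensible and ground heat fluxes neglected, unlike the abstract's fuller balance), the surface temperature would follow from eps*sigma*T^4 = R_abs. The emissivity value is an assumption.

```python
# Radiative-limit surface temperature from the Stefan-Boltzmann law.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 0.95          # assumed soil emissivity

def radiative_limit_temp(absorbed_flux):
    """Temperature (K) at which longwave emission balances absorbed flux."""
    return (absorbed_flux / (EPS * SIGMA)) ** 0.25

t_kelvin = radiative_limit_temp(1000.0)   # the abstract's upper flux value
print(f"{t_kelvin - 273.15:.0f} C")
```

The result lands in the same 90°-100°C range the abstract derives, which is reassuring given that the neglected fluxes and the conduction into the soil act in opposite directions.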
Theory of photoelectron counting statistics
International Nuclear Information System (INIS)
Blake, J.
1980-01-01
The purpose of the present essay is to provide a detailed analysis of those theoretical aspects of photoelectron counting which are capable of experimental verification. Most of our interest is in the physical phenomena themselves, while part is in the mathematical techniques. Many of the mathematical methods used in the analysis of the photoelectron counting problem are generally unfamiliar to physicists interested in the subject. For this reason we have developed the essay in such a fashion that, although primary interest is focused on the physical phenomena, we have also taken pains to carry out enough of the analysis so that the reader can follow the main details. We have chosen to present a consistently quantum mechanical version of the subject, in that we follow the Glauber theory throughout. (orig./WL)
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
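The "rounded kernel" idea in its simplest form: draw from a continuous kernel (here Gaussian) and round to a non-negative integer. Unlike any mixture of Poissons, the resulting count distribution can be underdispersed (variance below the mean). The parameters below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rounded_gaussian_counts(mu, sigma, size):
    """Counts obtained by rounding Gaussian draws and truncating at zero."""
    z = rng.normal(mu, sigma, size)
    return np.maximum(np.rint(z), 0).astype(int)

y = rounded_gaussian_counts(mu=5.0, sigma=0.8, size=100_000)
# Underdispersion: variance well below the mean, which no Poisson
# mixture can produce (mixtures of Poissons have variance >= mean).
print(f"mean = {y.mean():.2f}, var = {y.var():.2f}")
```

The paper's contribution is to use nonparametric mixtures of such rounded kernels, with a Gibbs sampler for posterior computation; the snippet only shows why the rounded-kernel building block escapes the Poisson variance constraint.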
Analysis of error-correction constraints in an optical disk
Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David
1996-07-01
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
Card counting in continuous time
Andersson, Patrik
2012-01-01
We consider the problem of finding an optimal betting strategy for a house-banked casino card game that is played for several coups before reshuffling. The sampling without replacement makes it possible to take advantage of the changes in the expected value as the deck is depleted, making large bets when the game is advantageous. Using such a strategy, which is easy to implement, is known as card counting. We consider the case of a large number of decks, making an approximat...
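A bare-bones version of the card-counting strategy the abstract analyzes: track how the depleted deck shifts the expected value and bet large only when the edge is favorable. The Hi-Lo tags and the bet-sizing rule below are standard card-counting folklore, not taken from the paper, which treats the continuous-time/many-deck limit.

```python
# Hi-Lo card tags: low cards leaving the deck help the player.
HI_LO = {2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 0, 8: 0, 9: 0,
         10: -1, 11: -1}   # 11 = ace; 10 stands for T/J/Q/K

def true_count(seen_cards, decks_total=6):
    """Running count normalized by the number of decks remaining."""
    running = sum(HI_LO[c] for c in seen_cards)
    decks_left = decks_total - len(seen_cards) / 52
    return running / max(decks_left, 0.25)

def bet(seen_cards, unit=10, max_units=8):
    """Simple bet ramp: scale up with the true count, floor at one unit."""
    tc = true_count(seen_cards)
    return unit * min(max(int(tc) - 1, 1), max_units)

print(bet([]))                      # fresh shoe: minimum bet
print(bet([2, 3, 4, 5, 6] * 8))    # 40 low cards gone: large bet
```

The paper's point is that in the limit of many decks this kind of strategy admits a clean diffusion-approximation analysis; the snippet only shows the mechanics being approximated.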
Maximum Entropy and Theory Construction: A Reply to Favretti
Directory of Open Access Journals (Sweden)
John Harte
2018-04-01
In the maximum entropy theory of ecology (METE), the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE's. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choice of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.
Directory of Open Access Journals (Sweden)
Antonio Boldrini
2013-06-01
Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error report systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research
LIBERTARISMO & ERROR CATEGORIAL
Directory of Open Access Journals (Sweden)
Carlos G. Patarroyo G.
2009-01-01
This article offers a defense of libertarianism against two accusations that it commits a category error. Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.
1985-01-01
A mathematical theory for the development of "higher order" software to catch computer mistakes grew out of a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free and which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.
Conically scanning lidar error in complex terrain
Directory of Open Access Journals (Sweden)
Ferhat Bingöl
2009-05-01
Full Text Available Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10 %. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.
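The homogeneous-flow assumption described in this abstract can be made concrete with a short sketch. Assuming a conical scan with half-cone angle phi from the vertical and evenly spaced azimuths (the function names and the 30-degree angle are illustrative choices, not taken from the paper), the horizontal wind is recovered from a cosine fit to the radial velocities:

```python
import math

def retrieve_wind(radial_velocities, azimuths, half_cone_deg=30.0):
    """Retrieve (u, v, w) from conical-scan radial velocities, assuming
    horizontally homogeneous flow (a VAD-style harmonic fit).

    With beam unit vector (sin(phi)cos(theta), sin(phi)sin(theta), cos(phi)),
    v_r(theta) = u*sin(phi)*cos(theta) + v*sin(phi)*sin(theta) + w*cos(phi),
    so the Fourier coefficients of v_r over theta give the wind components."""
    phi = math.radians(half_cone_deg)
    n = len(azimuths)
    a0 = sum(radial_velocities) / n
    a1 = 2.0 / n * sum(vr * math.cos(t) for vr, t in zip(radial_velocities, azimuths))
    a2 = 2.0 / n * sum(vr * math.sin(t) for vr, t in zip(radial_velocities, azimuths))
    return a1 / math.sin(phi), a2 / math.sin(phi), a0 / math.cos(phi)

# Synthetic scan: homogeneous wind u = 8, v = 3, w = 0 m/s, 36 azimuths.
phi = math.radians(30.0)
thetas = [2.0 * math.pi * k / 36 for k in range(36)]
vr = [8.0 * math.sin(phi) * math.cos(t) + 3.0 * math.sin(phi) * math.sin(t) for t in thetas]
u, v, w = retrieve_wind(vr, thetas)
```

Over complex terrain the radial velocities sampled at different azimuths probe different air, so the same fit returns a biased wind speed; that bias is the error the paper quantifies.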
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Energy Technology Data Exchange (ETDEWEB)
Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
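The idea of reading parameter uncertainties off the curvature of the log-likelihood at its maximum can be sketched in a deliberately simplified setting: a single state whose photon colors are Bernoulli trials with FRET efficiency E. This is only a stand-in for the paper's full two-state likelihood, and all names and counts below are illustrative:

```python
import math

def loglik(E, n_acc, n_don):
    """Log-likelihood of the photon colors: n_acc acceptor photons
    (each with probability E) and n_don donor photons (probability 1-E)."""
    return n_acc * math.log(E) + n_don * math.log(1.0 - E)

n_acc, n_don = 300, 700
E_hat = n_acc / (n_acc + n_don)   # analytic maximum-likelihood estimate

# Curvature of the log-likelihood at the maximum, by central finite difference.
h = 1e-5
curv = (loglik(E_hat + h, n_acc, n_don) - 2.0 * loglik(E_hat, n_acc, n_don)
        + loglik(E_hat - h, n_acc, n_don)) / h**2
sigma = math.sqrt(-1.0 / curv)    # standard deviation of E_hat from the curvature
```

For this toy model the curvature reproduces the analytic result sqrt(E(1-E)/n); the paper applies the same curvature argument to the much richer two-state likelihood with transition rates.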
International Nuclear Information System (INIS)
Aiginger, H.; Lauer, D.; Unfried, E.; Koenig, F.; Ogris, E.
1998-01-01
The assets and shortcomings of a well-type NaI detector and a Ge detector were examined using the Marinelli geometry, test tube geometry, and beaker geometry. Plots of the efficiency vs. energy (efficiency calibration), recorded time vs. true time (dead-time effects), and maximum activity vs. energy are reproduced. A high counting efficiency is typical of the scintillation detector in the well for the test tube geometry, particularly in the low energy range. For energies higher than 100 keV, the counting efficiency decreases because of the increasing penetration of the detector bulk by high-energy photons. For the germanium detector, the highest counting efficiency was achieved in the beaker geometry. A linear relationship exists between the calculated and measured counts at the beginning of the recorded curve for both systems. For the well-type detector the maximum detectable count rate was about 30 kcps, and linearity of the true count rate was guaranteed up to 10 kcps; dead-time corrections had to be made at higher count rates. For the germanium detector the maximum detectable count rate was only about 8 kcps due to the longer dead time; the linear segment, however, was longer than for the scintillation detector. It is concluded that although the maximum detectable count rate of the germanium detector is lower, higher true activities can be detected with it owing to its lower detection efficiency. The well-type scintillation detector is advantageous for the test tube geometry. (P.A.)
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP=c5/G . Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP . We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
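For orientation, the Planck luminosity quoted above is a fixed combination of constants and is easy to evaluate (standard CODATA-style values assumed):

```python
c = 299_792_458.0   # speed of light in vacuum, m/s (exact by definition)
G = 6.674e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
L_P = c**5 / G      # Planck luminosity in watts, ~3.6e52 W
```

The critical-collapse simulations cited above therefore reach luminosities of roughly 7e51 W.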
Correction of the number of counts for dead time in detector systems for radiographic images
International Nuclear Information System (INIS)
Cerdeira E, A.; Cicuttin, A.; Cerdeira, A.; Estrada, M.; Luca, A. de
2002-01-01
The effect of dead time in a particle-counting detection system, and the contribution of this error to the final image resolution, are analysed. A statistical criterion is given for the optimization of electronic parameters such as dead time and counting memory, which helps in implementing these systems with the minimum characteristics necessary to satisfy the resolution requirements. (Author)
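The abstract does not state its statistical criterion, but the usual starting point for dead-time analysis is the non-paralyzable correction relating measured and true count rates; the model choice and numbers below are assumptions for illustration, not taken from the paper:

```python
def true_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: recover the true event rate n
    from the measured rate m via n = m / (1 - m * tau)."""
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("measured rate inconsistent with dead time (saturated)")
    return measured_rate / (1.0 - loss)

# e.g. 90 kc/s measured with a hypothetical 1 microsecond dead time:
corrected = true_rate(90_000.0, 1e-6)   # true rate is higher than measured
```

At low rates the correction is negligible (m*tau << 1); as m*tau approaches 1 the measured rate saturates, which is why a maximum usable count rate exists for any counting detector.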
Indian Academy of Sciences (India)
Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.
Indian Academy of Sciences (India)
Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India
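A full RS(255,223) implementation is too long to sketch here, but the syndrome-decoding idea behind such error-correcting codes can be shown with the much smaller Hamming(7,4) code (an illustrative stand-in, not the code discussed in the article):

```python
def hamming74_encode(d):
    """Encode 4 data bits into the 7-bit codeword [p1 p2 d1 p3 d2 d3 d4],
    with parity bits at positions 1, 2 and 4 (1-based)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and flip a single-bit error using the parity-check syndrome,
    then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = list(codeword)
corrupted[4] ^= 1                      # flip one bit "in transit"
recovered = hamming74_correct(corrupted)
```

Reed-Solomon codes generalize the same syndrome idea from single bits to bytes over GF(256), which is what lets RS(255,223) correct up to 16 byte errors per block.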
Influence of Ephemeris Error on GPS Single Point Positioning Accuracy
Lihua, Ma; Wang, Meng
2013-09-01
The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to determine its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of this influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. The error also depends on the geometry between the observer and the spatial constellation over the time period considered.
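The worst case along the line of sight follows because only the LOS component of the ephemeris error enters the pseudorange; a minimal sketch (the geometry and numbers are hypothetical):

```python
import math

def range_error(ephemeris_error, sat_pos, user_pos):
    """Pseudorange error contributed by an ephemeris error vector: its
    projection onto the line-of-sight unit vector from user to satellite."""
    los = [s - u for s, u in zip(sat_pos, user_pos)]
    norm = math.sqrt(sum(x * x for x in los))
    unit = [x / norm for x in los]
    return sum(e * ui for e, ui in zip(ephemeris_error, unit))

# Hypothetical geometry: satellite and user on the x-axis (metres).
sat = (26_560e3, 0.0, 0.0)
user = (6_371e3, 0.0, 0.0)
along = range_error((5.0, 0.0, 0.0), sat, user)    # 5 m error along the LOS
across = range_error((0.0, 5.0, 0.0), sat, user)   # same magnitude across the LOS
```

An error of fixed magnitude contributes its full length when aligned with the LOS and essentially nothing when perpendicular to it, which is the behaviour the simulation in the abstract reports.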
Effects of lek count protocols on greater sage-grouse population trend estimates
Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.
2016-01-01
Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count
Challenge and Error: Critical Events and Attention-Related Errors
Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel
2011-01-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…
Photon counting arrays for AO wavefront sensors
Vallerga, J; McPhate, J; Mikulec, Bettina; Clark, Allan G; Siegmund, O; CERN. Geneva
2005-01-01
Future wavefront sensors for AO on large telescopes will require a large number of pixels and must operate at high frame rates. Unfortunately for CCDs, there is a readout noise penalty for operating faster, and this noise can add up rather quickly when considering the number of pixels required for the extended shape of a sodium laser guide star observed with a large telescope. Imaging photon counting detectors have zero readout noise and many pixels, but have suffered in the past with low QE at the longer wavelengths (>500 nm). Recent developments in GaAs photocathode technology, CMOS ASIC readouts and FPGA processing electronics have resulted in noiseless WFS detector designs that are competitive with silicon array detectors, though at ~40% the QE of CCDs. We review noiseless array detectors and compare their centroiding performance with CCDs using the best available characteristics of each. We show that for sub-aperture binning of 6x6 and greater, noiseless detectors have a smaller centroid error at flu...
Team errors: definition and taxonomy
International Nuclear Information System (INIS)
Sasou, Kunihide; Reason, James
1999-01-01
In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.
Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.
2010-01-01
The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms,
Data analysis in emission tomography using emission-count posteriors
International Nuclear Information System (INIS)
Sitek, Arkadiusz
2012-01-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography. (paper)
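A single-voxel toy version of the emission-count posterior conveys the idea. This simplifies the paper's tomographic model to one voxel with a Poisson prior on the number of emissions and independent binomial detection; all names and numbers are illustrative:

```python
import math

def posterior_emission_count(d, lam, p, kmax=200):
    """Posterior P(k | d) over the number of emissions k, for a Poisson(lam)
    prior on k and binomial detection: each emission is recorded with
    probability p, and d detections were observed.

    Computed in log-space; the lgamma(k+1) terms of the Poisson prior and
    the binomial coefficient cancel."""
    ks = range(d, kmax + 1)
    logw = [(-lam + k * math.log(lam)
             - math.lgamma(d + 1) - math.lgamma(k - d + 1)
             + d * math.log(p) + (k - d) * math.log(1.0 - p)) for k in ks]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    z = sum(w)
    return {k: wi / z for k, wi in zip(ks, w)}

post = posterior_emission_count(d=12, lam=20.0, p=0.6)
mmse = sum(k * pk for k, pk in post.items())   # minimum-mean-square-error estimate
```

For this conjugate toy model the MMSE estimate is exactly d + lam*(1-p); the paper's contribution is carrying the analogous posterior computation through full 2D tomographic data, where the estimate divided by sensitivity and acquisition time yields the voxel activity.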
Automatic counting of microglial cell activation and its applications
Directory of Open Access Journals (Sweden)
Beatriz I Gallego
2016-01-01
Full Text Available Glaucoma is a multifactorial optic neuropathy characterized by the damage and death of the retinal ganglion cells. This disease results in vision loss and blindness. Any vision loss resulting from the disease cannot be restored, and there is currently no cure for glaucoma; however, early detection and treatment could offer neuronal protection and avoid serious later damage to visual function. A full understanding of the etiology of the disease will still require many scientific efforts. Glial activation has been observed in glaucoma, with microglial proliferation being a hallmark of this neurodegenerative disease. A typical project studying these cellular changes involved in glaucoma often needs thousands of images - from several animals - covering different layers and regions of the retina. The gold standard for evaluating them is manual counting. This method requires a large amount of time from specialized personnel, is tedious, and is prone to human error. We present here a new method to count microglial cells using a computer algorithm. It counts in one hour the same number of images that a researcher counts in four weeks, with no loss of reliability.
Nearest neighbors by neighborhood counting.
Wang, Hui
2006-06-01
Finding nearest neighbors is a general idea that underlies many artificial intelligence tasks, including machine learning, data mining, natural language understanding, and information retrieval. This idea is explicitly used in the k-nearest neighbors algorithm (kNN), a popular classification method. In this paper, this idea is adopted in the development of a general methodology, neighborhood counting, for devising similarity functions. We turn our focus from neighbors to neighborhoods, a region in the data space covering the data point in question. To measure the similarity between two data points, we consider all neighborhoods that cover both data points. We propose to use the number of such neighborhoods as a measure of similarity. Neighborhood can be defined for different types of data in different ways. Here, we consider one definition of neighborhood for multivariate data and derive a formula for such similarity, called neighborhood counting measure or NCM. NCM was tested experimentally in the framework of kNN. Experiments show that NCM is generally comparable to VDM and its variants, the state-of-the-art distance functions for multivariate data, and, at the same time, is consistently better for relatively large k values. Additionally, NCM consistently outperforms HEOM (a mixture of Euclidean and Hamming distances), the "standard" and most widely used distance function for multivariate data. NCM has a computational complexity in the same order as the standard Euclidean distance function and NCM is task independent and works for numerical and categorical data in a conceptually uniform way. The neighborhood counting methodology is proven sound for multivariate data experimentally. We hope it will work for other types of data.
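One way to make the neighborhood counting idea concrete: for ordinal attributes taking values in {0, ..., n-1}, take neighborhoods to be products of intervals, so the number of neighborhoods covering both points has a closed form. This reading is an assumption consistent with the abstract, not necessarily the paper's exact definition:

```python
def ncm(x, y, domain_sizes):
    """Neighborhood counting measure sketch: the number of axis-aligned
    interval products ('neighborhoods') that cover both points x and y."""
    count = 1
    for xi, yi, n in zip(x, y, domain_sizes):
        lo, hi = min(xi, yi), max(xi, yi)
        # intervals [a, b] over {0..n-1} with a <= lo and b >= hi:
        # (lo + 1) choices of a times (n - hi) choices of b
        count *= (lo + 1) * (n - hi)
    return count
```

Closer points are covered by more common neighborhoods, so the count behaves as a similarity; plugging it into kNN in place of (inverse) Euclidean distance is the experiment the abstract describes.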
International Nuclear Information System (INIS)
Forbes, G.B.; Lantigua, R.; Amatruda, J.M.; Lockwood, D.H.
1981-01-01
Six overweight adult subjects given a low-calorie diet containing adequate amounts of nitrogen but subnormal amounts of potassium (K) were observed in the Clinical Research Center for periods of 29 to 40 days. Metabolic balance of potassium was measured together with frequent assays of total body K by 40K counting. Metabolic K balance underestimated body K losses by 11 to 87% (average 43%); the intersubject variability is such as to preclude the use of a single correction value for unmeasured losses in K balance studies.
Standardization of Ga-68 by coincidence measurements, liquid scintillation counting and 4πγ counting
International Nuclear Information System (INIS)
Roteta, Miguel; Peyres, Virginia; Rodríguez Barquero, Leonor; García-Toraño, Eduardo; Arenillas, Pablo; Balpardo, Christian; Rodrígues, Darío; Llovera, Roberto
2012-01-01
The radionuclide 68Ga is one of the few positron emitters that can be prepared in-house without the use of a cyclotron. It disintegrates to the ground state of 68Zn partially by positron emission (89.1%) with a maximum energy of 1899.1 keV, and partially by electron capture (10.9%). This nuclide has been standardized in the frame of a cooperation project between the Radionuclide Metrology laboratories from CIEMAT (Spain) and CNEA (Argentina). Measurements involved several techniques: 4πβ−γ coincidences, integral gamma counting and Liquid Scintillation Counting using the triple to double coincidence ratio and the CIEMAT/NIST methods. Given the short half-life of the radionuclide assayed, a direct comparison between results from both laboratories was excluded and a comparison of experimental efficiencies of similar NaI detectors was used instead. - Highlights: ► We standardized the positron emitter Ga-68 in a bilateral cooperation. ► We used several techniques, as coincidence, integral gamma and liquid scintillation. ► An efficiency comparison replaced a direct comparison of reference materials.
CalCOFI Egg Counts Positive Tows
National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...
CalCOFI Larvae Counts Positive Tows
National Oceanic and Atmospheric Administration, Department of Commerce — Fish larvae counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets],...
Alaska Steller Sea Lion Pup Count Database
National Oceanic and Atmospheric Administration, Department of Commerce — This database contains counts of Steller sea lion pups on rookeries in Alaska made between 1961 and 2015. Pup counts are conducted in late June-July. Pups are...
International Nuclear Information System (INIS)
Pan Yi; Mao Wanchong
2010-01-01
Measurement of nuclear track parameters occupies an important position in the field of nuclear technology, but traditional manual counting methods have many limitations. In recent years, DSP and digital image processing technology have been applied in the nuclear field more and more. To reduce the errors of visual measurement inherent in manual counting, this article introduces an automatic nuclear track counting system based on the DM642 real-time image processing platform, which effectively removes background interference and noise points and automatically extracts nuclear track points using a mathematical morphology algorithm. (authors)
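The core counting step of such a system can be sketched as connected-component labeling on the binarized image, with a size threshold playing the role of noise-point removal (a flood-fill stand-in for the paper's morphology pipeline; the image and parameters are illustrative):

```python
def count_tracks(image, min_size=2):
    """Count track spots in a binary image as 4-connected components,
    discarding components smaller than min_size (isolated noise points)."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                stack, size = [(r, c)], 0      # flood fill one component
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and image[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                if size >= min_size:
                    count += 1
    return count

img = [
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 1, 1],   # the isolated pixel at (2, 1) is a noise point
    [0, 0, 0, 0, 0],
]
tracks = count_tracks(img)
```

A real pipeline would precede this with thresholding and morphological opening/closing to separate touching tracks, which is where the DM642 platform's real-time processing comes in.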
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The workshops of the title have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come
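As a minimal worked instance of the maximum-entropy inference these workshops cover: among all distributions on a finite set with a prescribed mean, the entropy maximizer has the Gibbs form p_i ∝ exp(-beta·x_i), and beta can be found by bisection (an illustrative sketch, not from the book):

```python
import math

def maxent_mean(values, target_mean, iters=200):
    """Discrete maximum-entropy distribution subject to a fixed mean.
    Lagrange duality gives p_i proportional to exp(-beta * x_i);
    beta is found by bisection on the resulting mean."""
    def mean_for(beta):
        w = [math.exp(-beta * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid          # the mean decreases as beta increases
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_mean([0, 1, 2, 3, 4, 5], target_mean=2.5)   # midpoint mean
```

With the constraint mean at the midpoint of the support, beta converges to 0 and the maxent solution is uniform, the least-committal distribution consistent with the data.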
Pharmaceutical Pill Counting and Inspection Using a Capacitive Sensor
Directory of Open Access Journals (Sweden)
Ganesan LETCHUMANAN
2008-01-01
Full Text Available A capacitive sensor for high-speed counting and inspection of pharmaceutical products is proposed and evaluated. The sensor is based on a patented Electrostatic Field Sensor (EFS) device, previously developed by Sparc Systems Limited. However, the sensor head proposed in this work has a significantly different geometry and has been designed with a rectangular inspection aperture of 160 mm × 21 mm, which best meets applications where a larger count throughput is required with a single sensor. Finite element modelling has been used to simulate the electrostatic fields generated within the sensor, and as a design tool for optimising the sensor head configuration. The actual and simulated performance of the sensor are compared and analysed in terms of the sensor's performance in discriminating between damaged products and detecting miscount errors.
Rieger, Martina; Martinez, Fanny; Wenke, Dorit
2011-01-01
Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…
Quantitative Compton suppression spectrometry at elevated counting rates
International Nuclear Information System (INIS)
Westphal, G.P.; Joestl, K.; Schroeder, P.; Lauster, R.; Hausch, E.
1999-01-01
For quantitative Compton suppression spectrometry the decrease of coincidence efficiency with counting rate should be made negligible to avoid a virtual increase of relative peak areas of coincident isomeric transitions with counting rate. To that aim, a separate amplifier and discriminator has been used for each of the eight segments of the active shield of a new well-type Compton suppression spectrometer, together with an optimized, minimum dead-time design of the anticoincidence logic circuitry. Chance coincidence losses in the Compton suppression spectrometer are corrected instrumentally by comparing the chance coincidence rate to the counting rate of the germanium detector in a pulse-counting Busy circuit (G.P. Westphal, J. Rad. Chem. 179 (1994) 55) which is combined with the spectrometer's LFC counting loss correction system. The normally not observable chance coincidence rate is reconstructed from the rates of germanium detector and scintillation detector in an auxiliary coincidence unit, after the destruction of true coincidence by delaying one of the coincidence partners. Quantitative system response has been tested in two-source measurements with a fixed reference source of 60Co of 14 kc/s, and various samples of 137Cs, up to aggregate counting rates of 180 kc/s for the well-type detector, and more than 1400 kc/s for the BGO shield. In these measurements, the net peak areas of the 1173.3 keV line of 60Co remained constant at typical values of 37 000 with and 95 000 without Compton suppression, with maximum deviations from the average of less than 1.5%
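The chance-coincidence rate being corrected for follows the standard formula for two uncorrelated detectors, R_ch = 2·tau·Ra·Rb for coincidence resolving time tau; a sketch using the counting rates quoted above with a hypothetical tau of 100 ns (the resolving time is not given in the abstract):

```python
def chance_coincidence_rate(rate_a, rate_b, resolving_time):
    """Random (chance) coincidence rate between two uncorrelated detectors
    with coincidence resolving time tau: R_ch = 2 * tau * Ra * Rb."""
    return 2.0 * resolving_time * rate_a * rate_b

# Germanium detector at 180 kc/s against the BGO shield at 1400 kc/s:
r_ch = chance_coincidence_rate(180e3, 1400e3, 100e-9)   # chance counts per second
```

At these rates the chance rate is tens of kilocounts per second, a substantial fraction of the germanium rate itself, which is why an instrumental correction (rather than neglect) is needed for quantitative work.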
Correction of refractive errors
Directory of Open Access Journals (Sweden)
Vladimir Pfeifer
2005-10-01
Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for correction of refractive errors in patients who want to be less dependent on spectacles or contact lenses. Until recently, radial keratotomy (RK) was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has opened new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Satellite Photometric Error Determination
2015-10-18
Tamara E. Payne, Philip J. Castro, Stephen A. Gregory. Applied Optimization, 714 East Monument Ave, Suite... advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly... filter systems will likely be supplanted by the Sloan-based filter systems. The Johnson photometric system is a set of filters in the optical
SPERM COUNT DISTRIBUTIONS IN FERTILE MEN
Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards subfertile men with low sperm counts, and less is known about expected sperm count distributions in cohorts of fertil...
Platelet counting using the Coulter electronic counter.
Eggleton, M J; Sharp, A A
1963-03-01
A method for counting platelets in dilutions of platelet-rich plasma using the Coulter electronic counter is described.(1) The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed.
Count-doubling time safety circuit
International Nuclear Information System (INIS)
Keefe, D.J.; McDowell, W.P.; Rusch, G.K.
1981-01-01
There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary
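The comparison of counts from successive periods maps directly onto a doubling-time estimate; a sketch of how such a trip decision could be computed in software (the threshold and the ratio-to-doubling-time formula are illustrative, not taken from the patent):

```python
import math

def doubling_time(counts_prev, counts_now, interval_s):
    """Doubling time inferred from counts in two successive equal-length
    periods: T2 = interval * ln(2) / ln(C_now / C_prev)."""
    if counts_now <= counts_prev:
        return math.inf                  # count rate is not increasing
    return interval_s * math.log(2.0) / math.log(counts_now / counts_prev)

def scram_needed(counts_prev, counts_now, interval_s, min_doubling_s=10.0):
    """Develop a trip (scram) signal when the inferred doubling time
    falls below a hypothetical safety threshold."""
    return doubling_time(counts_prev, counts_now, interval_s) < min_doubling_s
```

A hardware circuit like the one in the abstract effectively evaluates the same comparison continuously, without ever computing the logarithms explicitly.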
Count-doubling time safety circuit
Rusch, Gordon K.; Keefe, Donald J.; McDowell, William P.
1981-01-01
There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary.
DC KIDS COUNT e-Databook Indicators
DC Action for Children, 2012
2012-01-01
This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…
Video Error Correction Using Steganography
Robie, David L.; Mersereau, Russell M.
2002-12-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, along with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Video Error Correction Using Steganography
Directory of Open Access Journals (Sweden)
Robie David L
2002-01-01
Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, along with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
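The paper's dependence formulation is unconstrained, but the equivalent standard entropy-maximizing formulation it is compared against can be sketched as a doubly constrained model solved by iterative balancing. All numbers below are illustrative, not from the study.

```python
import math

# Doubly constrained entropy-maximizing trip distribution, solved by
# iterative balancing: T_ij = a_i * b_j * exp(-beta * c_ij), with a, b
# chosen so row sums match origins and column sums match destinations.

def trip_distribution(origins, destinations, cost, beta, iters=200):
    n, m = len(origins), len(destinations)
    f = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    a, b = [1.0] * n, [1.0] * m
    for _ in range(iters):
        for i in range(n):
            a[i] = origins[i] / sum(f[i][j] * b[j] for j in range(m))
        for j in range(m):
            b[j] = destinations[j] / sum(f[i][j] * a[i] for i in range(n))
    return [[a[i] * b[j] * f[i][j] for j in range(m)] for i in range(n)]

O, D = [100.0, 200.0], [150.0, 150.0]        # trip origins / destinations
C = [[1.0, 2.0], [2.0, 1.0]]                 # illustrative travel costs
T = trip_distribution(O, D, C, beta=0.5)
# Row sums reproduce the origin totals; column sums the destination totals.
```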
How much do women count if they are not counted?
Directory of Open Access Journals (Sweden)
Federica Taddia
2006-01-01
Full Text Available The condition of women throughout the world is marked by countless injustices and violations of the most fundamental rights established by the Universal Declaration of Human Rights, and every culture is potentially prone to commit discrimination against women in various forms. Women are worse fed, more exposed to physical violence, more exposed to diseases and less educated; they have less access to, or are excluded from, vocational training paths; they are the most vulnerable among prisoners of conscience, refugees and immigrants and the least considered within ethnic minorities; from their very childhood, women are humiliated, undernourished, sold, raped and killed; their work is generally less paid compared to men’s work and in some countries they are victims of forced marriages. Such a condition is the result of old traditions that implicit gender-differentiated education has long promoted through cultural models based on theories, practices and policies marked by discrimination and structured differentially for men and women. Within these cultural models, the basic educational institutions have played and still play a major role in perpetuating such traditions. Nevertheless, if we want to overcome inequalities and provide women with empowerment, we have to start right from the educational institutions and in particular from school, through the adoption of an intercultural approach to education: an approach based on active pedagogy and on methods of analysis, exchange and enhancement typical of socio-educational animation. The intercultural approach to education is attentive to promoting the realisation of each individual and the dignity and right of everyone to express himself/herself in his/her own way. Such an approach will give women the opportunity to become actual agents of collective change and to get the strength and wellbeing necessary to count and be counted as human beings entitled to freedom and equality, and to have access to all
Fractional counts: the simulation of low probability events
International Nuclear Information System (INIS)
Coldwell, R.L.; Lasche, G.P.; Jadczyk, A.
2001-01-01
The code RobSim has been added to RobWin. It simulates spectra resulting from gamma rays striking an array of detectors made up of different components, which are frequently used to set coincidence and anti-coincidence windows that decide whether individual events are part of the signal. The first problem addressed is the construction of the detector. Then, owing to the statistical nature of the responses of these elements, the randomness in the response can be taken into account by including fractional counts in the output spectrum. This somewhat complicates the error analysis, as Poisson statistics are no longer applicable
Project for an analogue divider using electronic counting
International Nuclear Information System (INIS)
Novat, J.
1964-01-01
The apparatus which has been developed is designed to give the reciprocal of a number between 10³ and 10⁷. In practice this number can be the pulse count provided during a given time by a detector of the BF₃ type during a criticality experiment. The apparatus is made up of two parts: one provides, by means of relays, a voltage proportional to the reciprocal required; the other is a numeric voltmeter measuring this voltage between 0.1 and 1 volt. The relative error of the result is under 5 per cent. (author) [fr
Development of low level alpha particle counting system
International Nuclear Information System (INIS)
Minobe, Masao; Kondo, Hiraku; Chinuki, Takashi; Hirano, Hiromichi
1987-01-01
Much attention has been paid to the trace analysis of uranium and thorium contained in the base material of LSI or VLSI, since the so-called "soft error" of the memory device was known to be due to alpha particles emitted from these radioactive elements. We have developed an apparatus to meet the need to estimate such very small quantities of U and Th, at the ppb level, by directly counting alpha particles using a gas-flow type proportional counter. This method requires no sophisticated analytical skill, and the accuracy of the result is satisfactory. The instrumentation and some applications of this apparatus are described. (author)
Error-related brain activity and error awareness in an error classification paradigm.
Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E
2016-10-01
Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) from fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing, a prerequisite of error classification in our paradigm, leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between the Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN, but not the degree of error awareness, determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.
Relationship of milking rate to somatic cell count.
Brown, C A; Rischette, S J; Schultz, L H
1986-03-01
Information on milking rate, monthly bucket somatic cell counts, mastitis treatment, and milk production was obtained from 284 lactations of Holstein cows separated into three lactation groups. Significant correlations between somatic cell count (linear score) and other parameters included production in lactation 1 (-.185), production in lactation 2 (-.267), and percent 2-min milk in lactation 2 (.251). Somatic cell count tended to increase with maximum milking rate in all lactations, but correlations were not statistically significant. Twenty-nine percent of cows with milking rate measurements were treated for clinical mastitis. Treated cows in each lactation group produced less milk than untreated cows. In the second and third lactation groups, treated cows had a shorter total milking time and a higher percent 2-min milk than untreated cows, but differences were not statistically significant. Overall, the data support the concept that faster milking cows tend to have higher cell counts and more mastitis treatments, particularly beyond first lactation. However, the magnitude of the relationship was small.
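The "linear score" used for the somatic cell count correlations above is, in common dairy-records usage, a log2 transform of the raw count. A minimal sketch, assuming the standard DHI definition (the paper does not spell out its exact transform):

```python
import math

# Somatic cell linear score, assuming the common DHI convention
# LS = log2(SCC / 100,000) + 3, with SCC in cells/mL. This is the
# standard definition, not one stated in the paper.

def linear_score(scc_per_ml: float) -> float:
    return math.log2(scc_per_ml / 100_000) + 3

# 100,000 cells/mL scores 3; each doubling of the SCC adds one point,
# which is what makes the score suitable for linear correlations.
```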
A new method of quench monitoring in liquid scintillation counting
International Nuclear Information System (INIS)
Horrocks, D.L.
1978-01-01
The quench level of different liquid scintillation counting samples is measured by comparing the responses (pulse heights) produced by the same energy electrons in each sample. The electrons utilized in the measurements are those of the maximum energy (E_max) which are produced by the single Compton scattering process for the same energy gamma rays in each sample. The E_max response produced in any sample is related to the E_max response produced in an unquenched, sealed standard. The difference in response on a logarithmic response scale is defined as the "H number". The H number is related to the counting efficiency of the desired radionuclide by measurement of a set of standards with known amounts of the radionuclide and different amounts of quench (the standard quench curve). The concept of the H number has been shown to be theoretically valid. Based upon this proof, the features of the H number concept as embodied in the Beckman LS-8000 Series Liquid Scintillation Systems have been demonstrated. The H number is unique in that it provides a method of instrument calibration and wide dynamic quench range measurements. Further, it has been demonstrated that the H number concept provides a universal quench parameter. Counting efficiency vs. H number plots are repeatable within statistical limits of ±1% counting efficiency. By use of the H number concept a very accurate method of automatic quench compensation (A.Q.C.) is possible. (T.G.)
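A minimal sketch of how a standard quench curve is used in practice: counting efficiency is interpolated at the measured H number, and the quench-corrected activity follows by dividing the count rate by that efficiency. The (H number, efficiency) pairs below are invented for illustration, not instrument data.

```python
# Standard quench curve: counting efficiency vs. H number, measured on
# quenched standards of known activity. Values here are illustrative.

H_STD   = [50, 100, 150, 200, 250]         # H number of each standard
EFF_STD = [0.95, 0.90, 0.80, 0.65, 0.45]   # measured counting efficiency

def efficiency(h: float) -> float:
    """Linear interpolation on the standard quench curve."""
    if h <= H_STD[0]:
        return EFF_STD[0]
    for (h0, e0), (h1, e1) in zip(zip(H_STD, EFF_STD),
                                  zip(H_STD[1:], EFF_STD[1:])):
        if h <= h1:
            return e0 + (e1 - e0) * (h - h0) / (h1 - h0)
    return EFF_STD[-1]

def dpm(cpm: float, h: float) -> float:
    """Quench-corrected activity: counts per minute / efficiency."""
    return cpm / efficiency(h)
```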
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were
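The link from density precision to salinity uncertainty can be sketched with linearized error propagation. The haline density derivative used below (~0.78 kg m⁻³ per g/kg) is a typical near-surface seawater value, not a figure from the study, so the result agrees with the quoted 0.002 g/kg only to within the order of magnitude.

```python
# Linearized propagation of density uncertainty into salinity:
# delta_S = delta_rho / (d rho / d S). The derivative is a typical
# seawater value, assumed here for illustration.

DRHO_DS = 0.78  # kg m^-3 per (g/kg), illustrative

def salinity_uncertainty(drho_g_per_ml: float) -> float:
    """Salinity uncertainty (g/kg) from a density uncertainty in g/mL."""
    drho_kg_per_m3 = drho_g_per_ml * 1e3  # 1 g/mL = 1000 kg/m^3
    return drho_kg_per_m3 / DRHO_DS

ds = salinity_uncertainty(2.3e-6)  # a few 1e-3 g/kg, as in the abstract
```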
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
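For the tree case this abstract builds on, Fitch's small-parsimony algorithm can be sketched in a few lines; the paper's extension to networks with reticulate vertices is not shown here, and the tree and character states are invented for illustration.

```python
# Fitch's small-parsimony algorithm on a rooted binary tree: count the
# minimum number of state changes needed to explain one character.

def fitch(tree, leaf_states):
    """tree: dict mapping each internal node to (left, right); leaves
    are absent from it. leaf_states: dict leaf -> character state.
    Returns the minimum number of changes (the parsimony score)."""
    changes = 0

    def visit(node):
        nonlocal changes
        if node not in tree:                 # leaf: its observed state
            return {leaf_states[node]}
        left, right = tree[node]
        sl, sr = visit(left), visit(right)
        inter = sl & sr
        if inter:                            # agreement: no change
            return inter
        changes += 1                         # union step costs one change
        return sl | sr

    visit("root")
    return changes

tree = {"root": ("x", "y"), "x": ("A", "B"), "y": ("C", "D")}
states = {"A": "G", "B": "G", "C": "T", "D": "G"}
n_changes = fitch(tree, states)   # one change, on the branch to leaf C
```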
Leiva, Josue Nahun; Robbins, James; Saraswat, Dharmendra; She, Ying; Ehsani, Reza
2017-07-01
This study evaluated the effect of flight altitude and canopy separation of container-grown Fire Chief™ arborvitae (Thuja occidentalis L.) on counting accuracy. Images were taken at 6, 12, and 22 m above the ground using unmanned aircraft systems. Plants were spaced to achieve three canopy separation treatments: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. Plants were placed on two different ground covers: black fabric and gravel. A counting algorithm was trained using Feature Analyst®. Total counting error, false positives, and unidentified plants were reported for the images analyzed. In general, total counting error was smaller when plants were fully separated. The effect of ground cover on counting accuracy varied with the counting algorithm. Total counting error for plants placed on gravel (-8) was larger than for those on black fabric (-2); however, false positive counts were similar for black fabric (6) and gravel (6). Nevertheless, output images of plants placed on gravel did not show a negative effect of the ground cover itself but were impacted by differences in image spatial resolution.
On the fast response of channel electron multipliers in counting mode operation
International Nuclear Information System (INIS)
Belyaevskij, O.A.; Gladyshev, I.L.; Korobochko, Yu.S.; Mineev, V.I.
1983-01-01
Dependences of the amplitude distribution of pulses at the outlet of channel electron multipliers (CEM), and of the monitoring effectiveness, on counting rate at different supply voltages are determined. It is shown that the maximum counting rate of a CEM reaches 6×10⁵ s⁻¹ in short-term and 10⁵ s⁻¹ in long-term operation, using monitoring equipment with an operation threshold of 2.5 mV
Radon counting statistics - a Monte Carlo investigation
International Nuclear Information System (INIS)
Scott, A.G.
1996-01-01
Radioactive decay is a Poisson process, and so the coefficient of variation (COV) of "n" counts of a single nuclide is usually estimated as 1/√n. This is only true if the count duration is much shorter than the half-life of the nuclide. At longer count durations, the COV is smaller than the Poisson estimate. Most radon measurement methods count the alpha decays of ²²²Rn plus the progeny ²¹⁸Po and ²¹⁴Po, and estimate the ²²²Rn activity from the sum of the counts. At long count durations, the chain decay of these nuclides means that every ²²²Rn decay must be followed by two other alpha decays. The total number of decays is 3N, where N is the number of radon decays, and the true COV of the radon concentration estimate is 1/√N, √3 larger than the Poisson total count estimate of 1/√(3N). Most count periods are comparable to the half-lives of the progeny, so the relationship between COV and count time is complex. A Monte Carlo estimate of the ratio of true COV to Poisson estimate was carried out for a range of count periods from 1 min to 16 h and three common radon measurement methods: liquid scintillation, scintillation cell, and electrostatic precipitation of progeny. The Poisson approximation underestimates the COV by less than 20% for count durations of less than 60 min
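The long-count limit of this argument is easy to check by simulation: if every ²²²Rn decay contributes three alpha counts, the total is 3N with N Poisson-distributed, and the true COV exceeds the naive Poisson estimate by √3. A minimal sketch, not the paper's full time-dependent Monte Carlo:

```python
import math
import random

# Long-count limit: total alpha count = 3 * N with N ~ Poisson(mean).
# The true COV of the Rn estimate is then sqrt(3) times the naive
# Poisson estimate 1/sqrt(3N). Trial count and mean are illustrative.

random.seed(0)

def poisson(mean):
    """Knuth's Poisson sampler (adequate for modest means)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

mean_rn = 200.0
totals = [3 * poisson(mean_rn) for _ in range(20000)]
m = sum(totals) / len(totals)
var = sum((t - m) ** 2 for t in totals) / (len(totals) - 1)
cov_true = math.sqrt(var) / m       # ~ 1/sqrt(mean_rn)
cov_poisson = 1.0 / math.sqrt(m)    # naive 1/sqrt(3N) estimate
ratio = cov_true / cov_poisson      # ~ sqrt(3) ~ 1.73
```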
International Nuclear Information System (INIS)
Potter, W.E.
2005-01-01
The exact probability density function for paired counting can be expressed in terms of modified Bessel functions of integral order when the expected blank count is known. Exact decision levels and detection limits can be computed in a straightforward manner. For many applications perturbing half-integer corrections to Gaussian distributions yields satisfactory results for decision levels. When there is concern about the uncertainty for the expected value of the blank count, a way to bound the errors of both types using confidence intervals for the expected blank count is discussed. (author)
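The distribution in question is that of a difference of two Poisson counts (the Skellam distribution), whose pmf involves modified Bessel functions; it can equally be built by direct convolution of the two Poisson pmfs. A minimal sketch of computing an exact decision level this way; the blank mean and α = 0.05 are illustrative choices, not values from the paper:

```python
import math

# Exact distribution of a paired net count d = gross - blank when both
# are Poisson and the blank mean is known (the Skellam distribution),
# built by direct convolution rather than Bessel functions.

def poisson_pmf(k, mu):
    # log-space form avoids overflow in factorials for large k
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

def skellam_pmf(d, mu1, mu2, terms=200):
    """P(N1 - N2 = d) for independent N1~Poisson(mu1), N2~Poisson(mu2)."""
    return sum(poisson_pmf(k + d, mu1) * poisson_pmf(k, mu2)
               for k in range(max(0, -d), terms))

def decision_level(mu_blank, alpha=0.05):
    """Smallest net count d* with P(N1 - N2 >= d*) <= alpha under the
    null hypothesis that sample and blank share the mean mu_blank."""
    tail = 1.0 - sum(skellam_pmf(k, mu_blank, mu_blank)
                     for k in range(-200, 0))   # P(D >= 0)
    d = 0
    while tail > alpha:
        tail -= skellam_pmf(d, mu_blank, mu_blank)
        d += 1
    return d
```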
Discrete calculus methods for counting
Mariconda, Carlo
2016-01-01
This book provides an introduction to combinatorics, finite calculus, formal series, recurrences, and approximations of sums. Readers will find not only coverage of the basic elements of the subjects but also deep insights into a range of less common topics rarely considered within a single book, such as counting with occupancy constraints, a clear distinction between algebraic and analytical properties of formal power series, an introduction to discrete dynamical systems with a thorough description of Sarkovskii’s theorem, symbolic calculus, and a complete description of the Euler-Maclaurin formulas and their applications. Although several books touch on one or more of these aspects, precious few cover all of them. The authors, both pure mathematicians, have attempted to develop methods that will allow the student to formulate a given problem in a precise mathematical framework. The aim is to equip readers with a sound strategy for classifying and solving problems by pursuing a mathematically rigorous yet ...
Counting paths with Schur transitions
Energy Technology Data Exchange (ETDEWEB)
Díaz, Pablo [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta, T1K 3M4 (Canada); Kemp, Garreth [Department of Physics, University of Johannesburg, P.O. Box 524, Auckland Park 2006 (South Africa); Véliz-Osorio, Alvaro, E-mail: aveliz@gmail.com [Mandelstam Institute for Theoretical Physics, University of the Witwatersrand, WITS 2050, Johannesburg (South Africa); School of Physics and Astronomy, Queen Mary, University of London, Mile End Road, London E1 4NS (United Kingdom)
2016-10-15
In this work we explore the structure of the branching graph of the unitary group using Schur transitions. We find that these transitions suggest a new combinatorial expression for counting paths in the branching graph. This formula, which is valid for any rank of the unitary group, reproduces known asymptotic results. We proceed to establish the general validity of this expression by a formal proof. The form of this equation strongly hints towards a quantum generalization. Thus, we introduce a notion of quantum relative dimension and subject it to the appropriate consistency tests. This new quantity finds its natural environment in the context of RCFTs and fractional statistics; where the already established notion of quantum dimension has proven to be of great physical importance.
Mass counting of radioactivity samples
International Nuclear Information System (INIS)
Oesterlin, D.L.; Obrycki, R.F.
1977-01-01
A method and apparatus for concurrently counting a plurality of radioactive samples is claimed. The position sensitive circuitry of a scintillation camera is employed to sort electrical pulses resulting from scintillations according to the geometrical locations of scintillations causing those pulses. A scintillation means, in the form of a scintillating crystal material or a liquid scintillator, is positioned proximate to an array of radioactive samples. Improvement in the accuracy of pulse classification may be obtained by employing collimating means. If a plurality of scintillation crystals are employed to measure the iodine-125 content of samples, a method and means are provided for correcting for variations in crystal light transmission properties, sample volume, and sample container radiation absorption. 2 claims, 7 drawing figures
Diagnostic errors in pediatric radiology
International Nuclear Information System (INIS)
Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.
2011-01-01
Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean: 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean: 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)
Sperm count as a surrogate endpoint for male fertility control.
Benda, Norbert; Gerlinger, Christoph
2007-11-30
When assessing the effectiveness of a hormonal method of fertility control in men, the classical approach used for the assessment of hormonal contraceptives in women, i.e. estimating the pregnancy rate or using a life-table analysis for the time to pregnancy, is difficult to apply in a clinical development program. The main reasons are the dissociation of the treated unit, i.e. the man, and the observed unit, i.e. his female partner, the high variability in the frequency of male intercourse, and the logistical cost and ethical concerns related to the monitoring of the trial. A reasonable surrogate endpoint for the definitive endpoint, time to pregnancy, is sperm count. In addition to avoiding the problems mentioned, trials that compare different treatments are possible with reasonable sample sizes, and study duration can be shorter. However, current products do not suppress sperm production to 100 per cent in all men, and the sperm count is only observed with measurement error. Complete azoospermia might not be necessary in order to achieve an acceptable failure rate compared with other forms of male fertility control. Therefore, the use of sperm count as a surrogate endpoint must rely on the results of a previous trial in which both the definitive- and surrogate-endpoint results were assessed. The paper discusses different estimation functions for the mean pregnancy rate (corresponding to the cumulative hazard) that are based on the results of a sperm count trial and a previous trial in which both sperm count and time to pregnancy were assessed, as well as the underlying assumptions. Sample size estimations are given for pregnancy rate estimation with a given precision.
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
A pulse stacking method of particle counting applied to position sensitive detection
International Nuclear Information System (INIS)
Basilier, E.
1976-03-01
A position sensitive particle counting system is described. A cyclic readout imaging device serves as an intermediate information buffer. Pulses are allowed to stack in the imager at very high counting rates. Imager noise is completely discriminated to provide very wide dynamic range. The system has been applied to a detector using cascaded microchannel plates. Pulse height spread produced by the plates causes some loss of information. The loss is comparable to the input loss of the plates. The improvement in maximum counting rate is several hundred times over previous systems that do not permit pulse stacking. (Auth.)
Correcting systematic errors in high-sensitivity deuteron polarization measurements
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
The naming game simulates the process of naming an object by a population of agents organized in a communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. Three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the memory requirement of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
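A minimal version of the pair-wise interaction with learning errors can be sketched on a complete graph (the paper uses random-graph, small-world and scale-free topologies; the corruption model here, where an error replaces the transmitted word with a brand-new one, is our simplification):

```python
import random

def naming_game(n_agents=20, error_rate=0.0, max_steps=50000, seed=1):
    """Naming game on a complete graph with optional learning errors.

    Returns the number of interactions needed to reach consensus,
    or None if max_steps is exhausted first."""
    rng = random.Random(seed)
    lexicons = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(max_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not lexicons[speaker]:                # speaker invents a new name
            lexicons[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(lexicons[speaker]))
        if rng.random() < error_rate:            # learning error: word corrupted
            word = next_word
            next_word += 1
        if word in lexicons[hearer]:             # success: both collapse to the word
            lexicons[speaker] = {word}
            lexicons[hearer] = {word}
        else:                                    # failure: hearer memorises the word
            lexicons[hearer].add(word)
        if all(lex == lexicons[0] and len(lex) == 1 for lex in lexicons):
            return step + 1                      # consensus reached
    return None

steps = naming_game(error_rate=0.0)
```

Raising `error_rate` lengthens lexicons during propagation, which is the memory effect the abstract describes.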
Kuhnt, S.
2004-01-01
Observed cell counts in contingency tables are perceived as outliers if they have low probability under an anticipated loglinear Poisson model. New procedures for the identification of such outliers are derived using the classical maximum likelihood estimator and an estimator based on the L1 norm.
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. A comparison with maximum a posteriori restoration is made. 7 figures
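A toy 1-D version of P log P restoration, gradient ascent on the entropy minus a chi-squared-style misfit, illustrates the idea; the blur kernel, weight `alpha`, and step size below are illustrative choices of ours, not values from the record:

```python
import math

def blur(f, psf):
    """Clamped-boundary 1-D convolution."""
    n, k = len(f), len(psf)
    half = k // 2
    return [sum(psf[j] * f[min(max(i + j - half, 0), n - 1)] for j in range(k))
            for i in range(n)]

def maxent_restore(d, psf, alpha=50.0, n_iter=500, lr=1e-3):
    """Maximize Q(f) = -sum f log f - (alpha/2)*||blur(f) - d||^2 by gradient ascent."""
    n, k = len(d), len(psf)
    half = k // 2
    f = [max(sum(d) / n, 1e-6)] * n          # flat start = maximum entropy
    for _ in range(n_iter):
        r = [b - di for b, di in zip(blur(f, psf), d)]
        for i in range(n):
            # entropy gradient -(1 + log f) plus adjoint-blurred residual
            g = -(1.0 + math.log(f[i])) - alpha * sum(
                psf[j] * r[min(max(i - (j - half), 0), n - 1)] for j in range(k))
            f[i] = max(f[i] + lr * g, 1e-12)  # positivity enforced
    return f
```

The entropy term keeps the solution positive and smooth; the data term pulls the blurred estimate toward the measurements.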
Combining cluster number counts and galaxy clustering
Energy Technology Data Exchange (ETDEWEB)
Lacasa, Fabien; Rosenfeld, Rogerio, E-mail: fabien@ift.unesp.br, E-mail: rosenfel@ift.unesp.br [ICTP South American Institute for Fundamental Research, Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil)
2016-08-01
The abundance of clusters and the clustering of galaxies are two of the important cosmological probes for current and future large scale surveys of galaxies, such as the Dark Energy Survey. In order to combine them one has to account for the fact that they are not independent quantities, since they probe the same density field. It is important to develop a good understanding of their correlation in order to extract parameter constraints. We present a detailed modelling of the joint covariance matrix between cluster number counts and the galaxy angular power spectrum. We employ the framework of the halo model complemented by a Halo Occupation Distribution model (HOD). We demonstrate the importance of accounting for non-Gaussianity to produce accurate covariance predictions. Indeed, we show that the non-Gaussian covariance becomes dominant at small scales, low redshifts or high cluster masses. We discuss in particular the case of the super-sample covariance (SSC), including the effects of galaxy shot-noise, halo second order bias and non-local bias. We demonstrate that the SSC obeys mathematical inequalities and positivity. Using the joint covariance matrix and a Fisher matrix methodology, we examine the prospects of combining these two probes to constrain cosmological and HOD parameters. We find that the combination indeed results in noticeably better constraints, with improvements of order 20% on cosmological parameters compared to the best single probe, and even greater improvement on HOD parameters, with reduction of error bars by a factor 1.4-4.8. This happens in particular because the cross-covariance introduces a synergy between the probes on small scales. We conclude that accounting for non-Gaussian effects is required for the joint analysis of these observables in galaxy surveys.
Institute of Scientific and Technical Information of China (English)
Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh
2017-01-01
The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of the maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The result of the second-order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator output power. The simulation results, calculated using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
Energy Technology Data Exchange (ETDEWEB)
Papoular, R
1997-07-01
The Fourier Transform is of central importance to crystallography, since it allows the visualization in real space of three-dimensional scattering densities of physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons or electrons). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by least-squares techniques (e.g., the Rietveld method in the case of powder diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in astronomy, radioastronomy and medical imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates prior knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces the truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to crystallography are first presented. The method is then illustrated by a detailed example specific to neutron diffraction: the search for protons in solids. (author). 17 refs.
Standard Errors for Matrix Correlations.
Ogasawara, Haruhiko
1999-01-01
Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)
Error forecasting schemes of error correction at receiver
International Nuclear Information System (INIS)
Bhunia, C.T.
2007-08-01
To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
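The basic packet-combining idea, locating candidate error bits where two erroneous copies disagree and searching flip patterns until the error-detection check passes, can be sketched as follows (CRC-32 stands in here for whatever error-detection code an actual implementation would use):

```python
import itertools
import zlib

def make_packet(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can verify candidates."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def crc_ok(packet: bytes) -> bool:
    return zlib.crc32(packet[:-4]) == int.from_bytes(packet[-4:], "big")

def flip_bit(data: bytes, pos: int) -> bytes:
    b = bytearray(data)
    b[pos // 8] ^= 1 << (pos % 8)
    return bytes(b)

def combine(copy1: bytes, copy2: bytes):
    """Packet combining: bits where two erroneous copies differ are the
    candidate error locations; try flip patterns until the CRC passes."""
    diff = [i for i in range(len(copy1) * 8)
            if (copy1[i // 8] >> (i % 8)) & 1 != (copy2[i // 8] >> (i % 8)) & 1]
    for r in range(len(diff) + 1):
        for positions in itertools.combinations(diff, r):
            candidate = copy1
            for p in positions:
                candidate = flip_bit(candidate, p)
            if crc_ok(candidate):
                return candidate
    return None   # fails when both copies are corrupted at the same positions

pkt = make_packet(b"hello world")
c1 = flip_bit(pkt, 3)    # copy 1: error in bit 3
c2 = flip_bit(pkt, 42)   # copy 2: error in bit 42
recovered = combine(c1, c2)
```

The `return None` branch is exactly failure mode (i) in the abstract: identical error locations leave no differing bits to search.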
Atom-counting in High Resolution Electron Microscopy: TEM or STEM - That's the question.
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2017-03-01
In this work, a recently developed quantitative approach based on the principles of detection theory is used in order to determine the possibilities and limitations of High Resolution Scanning Transmission Electron Microscopy (HR STEM) and HR TEM for atom-counting. So far, HR STEM has been shown to be an appropriate imaging mode to count the number of atoms in a projected atomic column. Recently, it has been demonstrated that HR TEM, when using negative spherical aberration imaging, is suitable for atom-counting as well. The capabilities of both imaging techniques are investigated and compared using the probability of error as a criterion. It is shown that for the same incoming electron dose, HR STEM outperforms HR TEM under common practice standards, i.e. when the decision is based on the probability function of the peak intensities in HR TEM and of the scattering cross-sections in HR STEM. If the atom-counting decision is based on the joint probability function of the image pixel values, the dependence of all image pixel intensities as a function of thickness should be known accurately. Under this assumption, the probability of error may decrease significantly for atom-counting in HR TEM and may, in theory, become lower as compared to HR STEM under the predicted optimal experimental settings. However, the commonly used standard for atom-counting in HR STEM leads to a high performance and has been shown to work in practice. Copyright © 2017 Elsevier B.V. All rights reserved.
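The probability-of-error criterion used above can be illustrated with a minimal two-hypothesis sketch, assuming Gaussian-distributed measured scattering cross-sections; the means, widths, and midway threshold below are our illustrative assumptions, not values from the paper:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_error(mu1, mu2, sigma):
    """Probability of mis-counting when deciding between two column
    thicknesses whose measurements are Gaussian with means mu1 < mu2
    and common width sigma; the optimal threshold lies midway.
    Equal priors assumed for the two hypotheses."""
    thr = 0.5 * (mu1 + mu2)
    # error = (thickness 1 measured above thr) or (thickness 2 below thr)
    return 0.5 * ((1.0 - phi((thr - mu1) / sigma)) + phi((thr - mu2) / sigma))
```

A higher electron dose narrows the distributions (smaller sigma), which is why the probability of error drops with dose.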
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie
2002-01-01
Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...
Rectangular maximum-volume submatrices and their applications
Mikhalev, Aleksandr; Oseledets, I.V.
2017-01-01
We introduce a definition of the volume of a general rectangular matrix, which is equivalent to an absolute value of the determinant for square matrices. We generalize results of square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with an optimal experimental design and provide estimates for a growth of coefficients and an approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.
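The rectangular volume defined above, sqrt(det(AᵀA)) for a tall m×n matrix (equivalently the product of singular values), reduces to |det A| when m = n. A stdlib-only sketch:

```python
import math

def rect_volume(A):
    """Volume of a tall rectangular matrix A (m x n, m >= n):
    sqrt(det(A^T A)), computed via the Gram matrix."""
    m, n = len(A), len(A[0])
    # Gram matrix G = A^T A (n x n, symmetric positive semidefinite)
    G = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    det = 1.0
    for c in range(n):                       # Gaussian elimination with pivoting
        p = max(range(c, n), key=lambda r: abs(G[r][c]))
        if abs(G[p][c]) < 1e-15:
            return 0.0                       # rank-deficient: zero volume
        if p != c:
            G[c], G[p] = G[p], G[c]          # row swap; sign absorbed by abs()
        det *= G[c][c]
        for r in range(c + 1, n):
            f = G[r][c] / G[c][c]
            for j in range(c, n):
                G[r][j] -= f * G[c][j]
    return math.sqrt(abs(det))
```

Maximum-volume submatrix search (maxvol-type algorithms) then looks for the row subset maximizing this quantity.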
Approximation for maximum pressure calculation in containment of PWR reactors
International Nuclear Information System (INIS)
Souza, A.L. de
1989-01-01
A correlation was developed to estimate the maximum pressure in the dry containment of a PWR following a Loss-of-Coolant Accident (LOCA). The proposed expression is a function of the total energy released to the containment by the primary circuit, of the free volume of the containment building, and of the total surface area of the heat-conducting structures. The results show good agreement with those presented in the Final Safety Analysis Reports (FSAR) of several PWR plants. The errors are on the order of ±12%. (author) [pt
Uncertainty quantification and error analysis
Energy Technology Data Exchange (ETDEWEB)
Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Error Patterns in Problem Solving.
Babbitt, Beatrice C.
Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…
Performance, postmodernity and errors
DEFF Research Database (Denmark)
Harder, Peter
2013-01-01
…speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system… with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical… as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors…
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Counting Zero: Rethinking Feminist Epistemologies
Directory of Open Access Journals (Sweden)
Xin Liu
2017-10-01
This article concerns feminist engagements with epistemologies. Feminist epistemologies have revealed and challenged the exclusions and denigrations at work in knowledge production processes. And yet, the emphasis on the partiality of knowledge and the non-innocence of any subject position also casts doubt on the possibility of feminist political communities. In view of this, it has been argued that the very parameter of epistemology poses limitations for feminism, for it leads to either political paralysis or a prescriptive politics that in fact undoes the political of politics. From a different perspective, decolonial feminists argue for radical epistemic disobedience and delinking, the move beyond the confines of Western systems of knowledge and its extractive knowledge economy. Nevertheless, an oppositional logic informs both feminist epistemologies and their critiques, which I argue is symptomatic of the epistemic habits of academic feminism. This article ends with a preliminary reconsideration of the question of origin through the figure of zero. It asks whether it might be possible to conceive of feminist epistemologies as performing the task of counting zero – accounting for origin, wholeness, and universality – that takes into account specificities without forfeiting coalition and claims to knowledge.
Maximum Power Point Tracking Based on Sliding Mode Control
Directory of Open Access Journals (Sweden)
Nimrod Vázquez
2015-01-01
Solar panels have become a good choice to generate and supply electricity in commercial and residential applications. The generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power. For this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering just the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature in the PV system are considered as part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The obtained results gave a good dynamic response, unlike traditional schemes, which are based only on computational algorithms. A traditional MPPT-based algorithm was added in order to assure a low steady-state error.
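The core of a sliding-mode MPPT can be sketched with the surface s = dP/dV driven to zero by a sign-based reaching law; the single-diode panel model and all parameter values below are illustrative assumptions of ours, and the sketch omits the paper's temperature term and hybrid traditional algorithm:

```python
import math

def pv_current(v, i_sc=5.0, i_0=1e-9, v_t=0.8):
    """Toy single-diode PV model I(V); parameter values are illustrative."""
    return i_sc - i_0 * (math.exp(v / v_t) - 1.0)

def mppt_sliding(v0=5.0, k=0.05, dv=1e-3, steps=2000):
    """Sliding-mode style MPPT: drive the sliding surface s = dP/dV to
    zero with the control law  V <- V + k * sign(s)."""
    v = v0
    for _ in range(steps):
        p_plus = (v + dv) * pv_current(v + dv)
        p_minus = (v - dv) * pv_current(v - dv)
        s = (p_plus - p_minus) / (2 * dv)        # sliding surface s = dP/dV
        v += k * (1 if s > 0 else -1)            # reach and slide along s = 0
    return v

v_mpp = mppt_sliding()
```

The fixed-gain sign law chatters around the maximum power point, which is precisely the behaviour the paper's second-order integral construction is designed to attenuate.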
An Economical Fast Discriminator for Nuclear Pulse Counting
International Nuclear Information System (INIS)
Issarachai, Opas; Punnachaiya, Suvit
2009-07-01
This research work aimed to develop a low-cost fast discriminator with high capability for discriminating nanosecond nuclear pulses. The fast discriminator can be used in association with a fast photon counting system. The designed structure consisted of an ultra-fast voltage comparator using the ADCMP601 integrated circuit, a monostable multivibrator whose output pulse width is controlled by the propagation delay of a logic gate, and a fast-response buffer amplifier. Test results for pulse-height discrimination of 0-5 V nuclear pulses with 20 ns (FWHM) pulse width showed a correlation coefficient (R²) of 0.998 between discrimination level and pulse height, while pulse rates of more than 10 MHz could be counted. The 30 ns logic pulse width output was highly stable and could smoothly drive a low-impedance load of 50 Ω. For pulse signal transmission to the counter, it was also found that termination of the reflected signal must be considered, because it may cause pulse counting errors.
Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.
Bartley, David; Slaven, James; Harper, Martin
2017-03-01
The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
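The paper's analytic approximation can be checked against brute force: summing the negative binomial pmf (mean/dispersion parameterisation) directly gives exact central intervals, and letting the dispersion grow recovers the Poisson limit mentioned above. This stdlib-only sketch is ours, not the paper's approximation:

```python
import math

def nb_pmf(x, mu, k):
    """Negative binomial pmf with mean mu and dispersion k
    (Var = mu + mu^2/k); k -> infinity recovers the Poisson."""
    return math.exp(math.lgamma(x + k) - math.lgamma(k) - math.lgamma(x + 1)
                    + k * math.log(k / (k + mu)) + x * math.log(mu / (k + mu)))

def nb_interval(mu, k, level=0.95):
    """Central probability interval for the fiber count by direct
    cumulative summation of the pmf."""
    tail = (1.0 - level) / 2.0
    cdf, lo, hi = 0.0, None, None
    for x in range(100000):
        cdf += nb_pmf(x, mu, k)
        if lo is None and cdf > tail:
            lo = x
        if cdf >= 1.0 - tail:
            hi = x
            break
    return lo, hi
```

Decreasing the dispersion k widens the interval relative to the Poisson case, reflecting the extra counter-to-counter variation the negative binomial is meant to capture.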
Validation of an automated colony counting system for group A Streptococcus.
Frost, H R; Tsoi, S K; Baker, C A; Laho, D; Sanderson-Smith, M L; Steer, A C; Smeesters, P R
2016-02-08
The practice of counting bacterial colony forming units on agar plates has long been used as a method to estimate the concentration of live bacteria in culture. However, due to the laborious and potentially error prone nature of this measurement technique, an alternative method is desirable. Recent technologic advancements have facilitated the development of automated colony counting systems, which reduce errors introduced during the manual counting process and recording of information. An additional benefit is the significant reduction in time taken to analyse colony counting data. Whilst automated counting procedures have been validated for a number of microorganisms, the process has not been successful for all bacteria due to the requirement for a relatively high contrast between bacterial colonies and growth medium. The purpose of this study was to validate an automated counting system for use with group A Streptococcus (GAS). Twenty-one different GAS strains, representative of major emm-types, were selected for assessment. In order to introduce the required contrast for automated counting, 2,3,5-triphenyl-2H-tetrazolium chloride (TTC) dye was added to Todd-Hewitt broth with yeast extract (THY) agar. Growth on THY agar with TTC was compared with growth on blood agar and THY agar to ensure the dye was not detrimental to bacterial growth. Automated colony counts using a ProtoCOL 3 instrument were compared with manual counting to confirm accuracy over the stages of the growth cycle (latent, mid-log and stationary phases) and in a number of different assays. The average percentage differences between plating and counting methods were analysed using the Bland-Altman method. A percentage difference of ±10 % was determined as the cut-off for a critical difference between plating and counting methods. All strains measured had an average difference of less than 10 % when plated on THY agar with TTC. This consistency was also observed over all phases of the growth
Heterogeneous counting on filter support media
International Nuclear Information System (INIS)
Long, E.; Kohler, V.; Kelly, M.J.
1976-01-01
Many investigators in the biomedical research area have used filter paper as the support for radioactive samples. This means that heterogeneous counting of samples sometimes results. The count rate of a sample on a filter will be affected by positioning, degree of dryness, sample application procedure, the type of filter, and the type of cocktail used. Positioning of the filter (up or down) in the counting vial can cause a variation of 35% or more when counting tritiated samples on filter paper. Samples of varying degrees of dryness, when added to the counting cocktail, can cause nonreproducible counts if handled improperly. Count rates starting at 2400 CPM initially can become 10,000 CPM in 24 hours for ³H-DNA (deoxyribonucleic acid) samples dried on standard cellulose acetate membrane filters. Data on cellulose nitrate filters show a similar trend. Sample application procedures in which the sample is applied to the filter in a small spot or over a large amount of the surface area can cause nonreproducible or very low counting rates. A tritiated DNA sample, when applied topically, gives a count rate of 4,000 CPM. When the sample is spread over the whole filter, 13,400 CPM are obtained with a much better coefficient of variation (5% versus 20%). Adding protein carrier (bovine serum albumin, BSA) to the sample to trap more of the tritiated DNA on the filter during the filtration process causes a serious beta absorption problem. Count rates of one-fourth the rate applied to the filter are obtained in calibration runs. Many of the problems encountered can be alleviated by a proper choice of filter and the use of a liquid scintillation cocktail which dissolves the filter. Filter-Solv has been used to dissolve cellulose nitrate filters and filters which are a combination of cellulose nitrate and cellulose acetate. Count rates obtained for these dissolved samples are very reproducible and highly efficient.
Leveraging multiple datasets for deep leaf counting
Dobrescu, Andrei; Giuffrida, Mario Valerio; Tsaftaris, Sotirios A
2017-01-01
The number of leaves a plant has is one of the key traits (phenotypes) describing its development and growth. Here, we propose an automated, deep learning based approach for counting leaves in model rosette plants. While state-of-the-art results on leaf counting with deep learning methods have recently been reported, they obtain the count as a result of leaf segmentation and thus require per-leaf (instance) segmentation to train the models (a rather strong annotation). Instead, our method tre...
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are found by maximizing the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (the voltage of maximum power, the current of maximum power, and the maximum power itself) is plotted as a function of the time of day.
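The differentiation step can be mimicked numerically: set dP/dV = 0 and solve for the voltage of maximum power by bisection. The single-diode panel model and its parameter values below are stand-ins of ours, not the article's measured panel:

```python
import math

def power_derivative(v, i_sc=3.0, i_0=1e-9, v_t=0.7):
    """dP/dV for the toy model P(V) = V * (i_sc - i_0*(exp(V/v_t) - 1));
    all parameter values are illustrative."""
    e = math.exp(v / v_t)
    return i_sc - i_0 * (e - 1.0) - v * i_0 * e / v_t

def v_max_power(lo=0.0, hi=30.0, tol=1e-9):
    """Voltage of maximum power: the root of dP/dV, found by bisection
    (dP/dV is positive below the maximum and negative above it)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if power_derivative(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

The current of maximum power then follows by substituting the voltage back into the I-V model, mirroring the procedure the article carries out analytically.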
Resonance ionization spectroscopy: Counting noble gas atoms
International Nuclear Information System (INIS)
Hurst, G.S.; Payne, M.G.; Chen, C.H.; Willis, R.D.; Lehmann, B.E.; Kramer, S.D.
1981-01-01
The purpose of this paper is to describe new work on the counting of noble gas atoms, using lasers for the selective ionization and detectors for counting individual particles (electrons or positive ions). When positive ions are counted, various kinds of mass analyzers (magnetic, quadrupole, or time-of-flight) can be incorporated to provide A selectivity. We show that a variety of interesting and important applications can be made with atom-counting techniques which are both atomic number (Z) and mass number (A) selective. (orig./FKS)
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-03-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
Calibration of a liquid scintillation counter for alpha, beta and Cerenkov counting
International Nuclear Information System (INIS)
Scarpitta, S.C.; Fisenne, I.M.
1996-07-01
Calibration data are presented for 25 radionuclides that were individually measured in a Packard Tri-Carb 2250CA liquid scintillation (LS) counter by both conventional and Cerenkov detection techniques. The relationships and regression data between the quench indicating parameters and the LS counting efficiencies were determined using microliter amounts of tracer added to low-40K borosilicate glass vials containing 15 mL of Insta-Gel XF scintillation cocktail. Using 40K, the detection efficiencies were linear over a three-order-of-magnitude range (10 - 10,000 mBq) in beta activity for both LS and Cerenkov counting. The Cerenkov counting efficiency (CCE) increased linearly (42% per MeV) from 0.30 to 2.0 MeV, whereas the LS efficiency was >90% for betas with energy in excess of 0.30 MeV. The CCE was 20 - 50% less than the LS counting efficiency for beta particles with maximum energies in excess of 1 MeV. Based on replicate background measurements, the lower limit of detection (LLD) for a 1-h count at the 95% confidence level, using water as a solvent, was 0.024 counts sec-1 and 0.028 counts sec-1 for plastic and glass vials, respectively. The LLD for a 1-h count ranged from 46 to 56 mBq (2.8 - 3.4 dpm) for both Cerenkov and conventional LS counting. This assumes: (1) a 100% counting efficiency, (2) a 50% yield of the nuclide of interest, (3) a 1-h measurement time using low background plastic vials, and (4) a 0-50 keV region of interest. The LLD is reduced an order of magnitude when the yield recovery exceeds 90% and a lower background region is used (i.e., the 100 - 500 keV alpha region of interest). Examples and applications of both Cerenkov and LS counting techniques are given in the text and appendices.
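The LLD figures quoted above follow the general pattern of a Currie-style detection limit computed from replicate background counts. A sketch assuming the common 95%-confidence expression L_D = 2.71 + 4.65*sqrt(B) counts (the paper's exact convention and region-of-interest handling may differ):

```python
import math

def lld_bq(background_cps, count_time_s, efficiency, yield_frac=1.0):
    # Currie-style lower limit of detection (95% confidence), in Bq.
    # Assumes paired background subtraction: L_D = 2.71 + 4.65*sqrt(B) counts,
    # where B is the expected background counts accumulated in count_time_s.
    b = background_cps * count_time_s
    ld_counts = 2.71 + 4.65 * math.sqrt(b)
    return ld_counts / (count_time_s * efficiency * yield_frac)

# illustrative numbers echoing the abstract: ~0.024 cps background,
# 1-h count, 100% counting efficiency, 50% chemical yield
print(round(lld_bq(0.024, 3600, 1.0, 0.5) * 1000, 1), "mBq")
```

The result is of the same order as the 46-56 mBq range reported; the residual difference reflects the paper's own background-region and efficiency conventions, which are not reproduced here.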
Controlling errors in unidosis carts
Directory of Open Access Journals (Sweden)
Inmaculada Díaz Fernández
2010-01-01
Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed 0.9% medication errors (264), versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors arose when setting up the carts. The rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient did not take the medication (14.36%) or was discharged without medication (12.77%), the medication was not given by nurses (14.09%), was withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: We conclude that unidosis carts need to be revised and that a computerized prescription system would avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are revised before being sent to hospitalization units, the error rate diminishes to 0.3%.
Prioritising interventions against medication errors
DEFF Research Database (Denmark)
Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard
Abstract. Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...
Social aspects of clinical errors.
Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave
2009-08-01
Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.
International Nuclear Information System (INIS)
Strand, P.; Selnaes, T.D.
1990-01-01
In order to determine the doses from radiocesium in foods after the Chernobyl accident, four groups were chosen in 1987. Two groups, presumed to have a large consumption of food items with a high radiocesium content, were selected. These were Lapp reindeer breeders from central parts of Norway, and hunters and others from the municipality of Oeystre Slidre. Two other groups were randomly selected, one from the municipality of Sel, and one from Oslo. The persons in these two groups were presumed to have an average diet. The fall-out in Sel was fairly large (100 kBq/m2), whereas in Oslo the fall-out level was low (2 kBq/m2). The persons in each group were monitored once a year with whole-body counters, and in connection with these measurements dietary surveys were performed. In 1990 the Sel group and the Lapps in central parts of Norway were followed. The average whole-body activity in each group is compared to earlier years' results, and an average yearly effective dose equivalent is computed. The Sel group has an average whole-body activity of 2800 Bq for men, and 690 Bq for women. Compared to earlier years, there is a steady but slow decrease in whole-body activities. The yearly dose is calculated to 0.06 mSv for 1990. The Lapps in central parts of Norway have an average whole-body content of 23800 Bq for men and 13600 Bq for women. This results in an average yearly dose of 0.9 mSv for the individuals in the group. Compared to earlier years, the Lapp group shows a decrease in whole-body contents since 1988. This decrease is larger among men than among women. 5 refs., 8 figs., 6 tabs
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of k-mers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous k-mer may be frequently observed if it has few nucleotide differences with valid k-mers that occur multiple times in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of k-mers from their observed frequencies by analyzing the misread relationships among observed k-mers. We also propose a method to estimate the threshold used for validating k-mers, whose estimated genomic frequency must exceed the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors
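The baseline frequency heuristic that the paper improves upon can be sketched directly: count all k-mers across the reads, then flag any read containing a k-mer below a frequency threshold (the read set, k, and threshold below are illustrative):

```python
from collections import Counter

def kmer_counts(reads, k):
    # observed k-mer frequency spectrum across all reads
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_errors(reads, k, threshold):
    # a read is suspect if it contains any k-mer seen fewer than `threshold`
    # times; this is the simple heuristic that breaks down in repeat-rich
    # genomes, motivating the paper's statistical model
    counts = kmer_counts(reads, k)
    return [r for r in reads
            if any(counts[r[i:i + k]] < threshold
                   for i in range(len(r) - k + 1))]

reads = ["ACGTACGT", "ACGTACGT", "ACGTACGT", "ACGAACGT"]  # last read has one error
print(flag_errors(reads, 4, threshold=2))  # → ['ACGAACGT']
```

The erroneous read is flagged because its mutated k-mers are each observed only once; the paper's contribution is to replace the fixed threshold with inferred genomic k-mer frequencies that remain valid when true repeats also produce high observed counts.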
Counting in Lattices: Combinatorial Problems from Statistical Mechanics.
Randall, Dana Jill
In this thesis we consider two classical combinatorial problems arising in statistical mechanics: counting matchings and self-avoiding walks in lattice graphs. The first problem arises in the study of the thermodynamical properties of monomers and dimers (diatomic molecules) in crystals. Fisher, Kasteleyn and Temperley discovered an elegant technique to exactly count the number of perfect matchings in two dimensional lattices, but it is not applicable for matchings of arbitrary size, or in higher dimensional lattices. We present the first efficient approximation algorithm for computing the number of matchings of any size in any periodic lattice in arbitrary dimension. The algorithm is based on Monte Carlo simulation of a suitable Markov chain and has rigorously derived performance guarantees that do not rely on any assumptions. In addition, we show that these results generalize to counting matchings in any graph which is the Cayley graph of a finite group. The second problem is counting self-avoiding walks in lattices. This problem arises in the study of the thermodynamics of long polymer chains in dilute solution. While there are a number of Monte Carlo algorithms used to count self-avoiding walks in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, we present an efficient algorithm which relies on a single, widely-believed conjecture that is simpler than preceding assumptions and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. In either case we know we can trust our results and the algorithm is guaranteed to run in polynomial time. This is the first algorithm for counting self-avoiding walks in which the error bounds are rigorously controlled. This work was supported in part by an AT&T graduate fellowship, a University of
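For contrast with the Monte Carlo approach discussed above, exact enumeration of self-avoiding walks is straightforward for small step counts (a sketch only; the thesis's algorithm targets the long-walk regime where exhaustive enumeration is hopelessly exponential):

```python
def count_saws(n):
    # exact enumeration of n-step self-avoiding walks on the square lattice,
    # starting from the origin; exponential time, practical only for small n
    def extend(path, steps_left):
        if steps_left == 0:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in path:          # self-avoidance constraint
                total += extend(path + [nxt], steps_left - 1)
        return total
    return extend([(0, 0)], n)

print([count_saws(n) for n in range(1, 6)])  # → [4, 12, 36, 100, 284]
```

These are the well-known square-lattice walk counts c_1..c_5; their super-exponential growth is exactly why rigorous approximate counting, as developed in the thesis, is needed for large n.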
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes
Novel Photon-Counting Detectors for Free-Space Communication
Krainak, Michael A.; Yang, Guan; Sun, Xiaoli; Lu, Wei; Merritt, Scott; Beck, Jeff
2016-01-01
We present performance data for novel photon counting detectors for free space optical communication. NASA GSFC is testing the performance of three novel photon counting detectors: 1) a 2x8 mercury cadmium telluride avalanche array made by DRS Inc., 2) a commercial 2880-element silicon avalanche photodiode array, and 3) a prototype resonant cavity silicon avalanche photodiode array. We will present and compare dark count, photon detection efficiency, wavelength response and communication performance data for these detectors. We discuss system wavelength trades and architectures for optimizing overall communication link sensitivity, data rate and cost performance. For the HgCdTe APD array, photon detection efficiencies of greater than 50% were routinely demonstrated across 5 arrays, with one array reaching a maximum PDE of 70%. High resolution pixel-surface spot scans were performed and the junction diameters of the diodes were measured. The junction diameter was decreased from 31 μm to 25 μm, resulting in a 2x increase in e-APD gain from 470 on the 2010 array to 1100 on the array delivered to NASA GSFC. Mean single photon SNRs of over 12 were demonstrated at excess noise factors of 1.2-1.3. The commercial silicon APD array has a fast output with rise times of 300 ps and pulse widths of 600 ps. Received and filtered signals from the entire array are multiplexed onto this single fast output. The prototype resonant cavity silicon APD array is being developed for use at 1 micron wavelength.
Fast radio burst event rate counts - I. Interpreting the observations
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6^{+0.7}_{-1.3}. Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both the normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
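The maximum-likelihood slope estimate used above has a closed form for a pure power law: with N events at fluences F_i above the completeness fluence F_min, the cumulative slope is estimated by α̂ = -N / Σ ln(F_i/F_min). A sketch on synthetic data with a known slope (not the Parkes sample):

```python
import math, random

def alpha_mle(fluences, f_min):
    # maximum-likelihood estimate of the cumulative source-count slope alpha
    # in N(>F) ∝ F^alpha, using only events above the completeness fluence
    sample = [f for f in fluences if f >= f_min]
    return -len(sample) / sum(math.log(f / f_min) for f in sample)

# inverse-transform sampling: if U ~ Uniform(0,1), then F = f_min * U**(1/alpha)
# has P(>F) = (F/f_min)**alpha, i.e. the desired cumulative distribution
random.seed(0)
true_alpha, f_min = -1.5, 2.0
fluences = [f_min * random.random() ** (1 / true_alpha) for _ in range(20000)]
print(alpha_mle(fluences, f_min))  # should recover a value close to -1.5
```

Restricting the sample to fluences above F_min is essential: including incomplete (biased-low) detections below the completeness limit flattens the apparent slope, which is one of the biases the paper identifies in earlier flat-α claims.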
System and method of liquid scintillation counting
International Nuclear Information System (INIS)
Rapkin, E.
1977-01-01
A method of liquid scintillation counting utilizing a combustion step to overcome quenching effects comprises novel features of automatic sequential introduction of samples into a combustion zone and automatic sequential collection and delivery of combustion products into a counting zone. 37 claims, 13 figures
Is It Counting, or Is It Adding?
Eisenhardt, Sara; Fisher, Molly H.; Thomas, Jonathan; Schack, Edna O.; Tassell, Janet; Yoder, Margaret
2014-01-01
The Common Core State Standards for Mathematics (CCSSI 2010) expect second grade students to "fluently add and subtract within 20 using mental strategies" (2.OA.B.2). Most children begin with number word sequences and counting approximations and then develop greater skill with counting. But do all teachers really understand how this…
Current status of liquid scintillation counting
International Nuclear Information System (INIS)
Klingler, G.W.
1981-01-01
Scintillation counting of alpha particles has been used since the turn of the century. The advent of pulse shape discrimination has made this method of detection accurate and reliable. The history, concepts and development of scintillation counting and pulse shape discrimination are discussed. A brief look at the ongoing work in the consolidation of components now used for pulse shape discrimination is included
Lazy reference counting for the Microgrid
Poss, R.; Grelck, C.; Herhut, S.; Scholz, S.-B.
2012-01-01
This paper revisits non-deferred reference counting, a common technique to ensure that potentially shared large heap objects can be reused safely when they are both input and output to computations. Traditionally, thread-safe reference counting exploits implicit memory-based communication of counter
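The core idea of non-deferred counting (counts are updated eagerly, so a uniquely referenced buffer can be recognised and updated in place when it is both input and output) can be sketched in ordinary sequential code; the paper's actual contribution, avoiding memory-based counter communication on the Microgrid, is not reproduced here:

```python
class RefCounted:
    # Minimal non-deferred reference counting for a large heap object.
    # The count is maintained eagerly on every acquire/release, so at any
    # moment a count of 1 proves the holder has the sole reference and the
    # payload may be mutated (reused) in place.
    def __init__(self, payload):
        self.payload = payload
        self.count = 1              # the creator holds the first reference

    def acquire(self):
        self.count += 1
        return self

    def release(self):
        self.count -= 1
        return self.count == 0      # True: last reference gone, storage reusable

def scale_in_place_if_unique(obj, factor):
    # reuse the buffer only when we hold the sole reference; otherwise copy
    if obj.count == 1:
        for i in range(len(obj.payload)):
            obj.payload[i] *= factor
        return obj
    return RefCounted([x * factor for x in obj.payload])

a = RefCounted([1, 2, 3])
b = a.acquire()                               # now shared: an update must copy
print(scale_in_place_if_unique(a, 10).payload, a.payload)  # → [10, 20, 30] [1, 2, 3]
```

Had `a` not been shared, the same call would have scaled `a.payload` in place with no allocation, which is precisely the reuse opportunity the counting scheme exists to detect.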
A Word Count of Modern Arabic Prose.
Landau, Jacob M.
This book presents a word count of Arabic prose based on 60 twentieth-century Egyptian books. The text is divided into an alphabetical list and a word frequency list. This word count is intended as an aid in the: (1) writing of primers and the compilation of graded readers, (2) examination of the vocabulary selection of primers and readers…
2013 Kids Count in Colorado! Community Matters
Colorado Children's Campaign, 2013
2013-01-01
"Kids Count in Colorado!" is an annual publication of the Children's Campaign, providing state and county level data on child well-being factors including child health, education, and economic status. Since its first release 20 years ago, "Kids Count in Colorado!" has become the most trusted source for data and information on…
Errors in abdominal computed tomography
International Nuclear Information System (INIS)
Stephens, S.; Marting, I.; Dixon, A.K.
1989-01-01
Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab
Eutectic cell and nodule count as the quality factors of cast iron
Directory of Open Access Journals (Sweden)
E. Fraś
2008-10-01
Full Text Available In this work, predictions based on a theoretical analysis aimed at elucidating the eutectic cell count or nodule count N were experimentally verified. The experimental work was focused on processing flake graphite and ductile iron under various inoculation conditions in order to achieve various physicochemical states of the experimental melts. In addition, plates of various wall thicknesses, s, were cast and the resultant eutectic cell or nodule counts were established. Moreover, thermal analysis was used to find the degree of maximum undercooling for the graphite eutectic, ΔTm. A relationship was found between the eutectic cell or nodule count and the maximum undercooling ΔTm. In addition it was also found that N can be related to the wall thickness of plate-shaped castings. Finally, the present work provides a rationale for the effect of technological factors such as the melt chemistry, inoculation practice, and holding temperature and time on the resultant cell count or nodule count of cast iron. In particular, good agreement was found between the predictions of the theoretical analysis and the experimental data.
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Laboratory errors and patient safety.
Miligy, Dawlat A
2015-01-01
Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as setting specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. Recording of the encountered laboratory errors during May 2008 and their statistical evaluation (using simple percent distribution) were done in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors were of non-significant implication for patients' health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that
Measurement Model Specification Error in LISREL Structural Equation Models.
Baldwin, Beatrice; Lomax, Richard
This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…
Application of Joint Error Maximal Mutual Compensation to hexapod robots
DEFF Research Database (Denmark)
Veryha, Yauheni; Petersen, Henrik Gordon
2008-01-01
A good practice to ensure high-positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC for positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation ...
Error Immune Logic for Low-Power Probabilistic Computing
Directory of Open Access Journals (Sweden)
Bo Marr
2010-01-01
design for the maximum amount of energy savings for a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given, verifying the ultra-low power, probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.
Information loss for 2 × 2 tables with missing cell counts: binomial case
Eisinga, R.N.
2008-01-01
We formulate likelihood-based ecological inference for 2 × 2 tables with missing cell counts as an incomplete data problem and study Fisher information loss by comparing estimation from complete and incomplete data. In so doing, we consider maximum-likelihood (ML) estimators of probabilities
International Nuclear Information System (INIS)
Oesterle, S.N.; Norman, J.E. Jr.
1980-01-01
Total peripheral blood lymphocytes were evaluated by age and exposure status in the Adult Health Study population during three examination cycles between 1958 and 1972. No radiation effect was observed, but a significant drop in the absolute lymphocyte counts of those aged 70 years and over and a corresponding maximum for persons aged 50 - 59 was observed. (author)
Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'
International Nuclear Information System (INIS)
Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi
1996-01-01
To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k{sub eff}. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k{sub eff}. (author)
Real-time detection and elimination of nonorthogonality error in interference fringe processing
International Nuclear Information System (INIS)
Hu Haijiang; Zhang Fengdeng
2011-01-01
In interference fringe measurement systems, the nonorthogonality error is a main error source that influences the precision and accuracy of the measurement system. The detection and elimination of this error has been an important target. A novel method that only uses zero-crossing detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. This method can be simply realized by means of digital logic devices, because it does not invoke trigonometric functions or inverse trigonometric functions. It can be widely used in the bidirectional subdivision systems of Moiré fringes and other optical instruments.
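The zero-crossing counting on which the method builds can be sketched as follows: at each zero crossing of one quadrature channel, the sign of the other channel gives the direction of motion. The signals below are ideal (perfectly orthogonal); the paper's real-time nonorthogonality detection and correction itself is not reproduced:

```python
import math

def fringe_count(a_samples, b_samples):
    # bidirectional fringe counting from two nominally quadrature signals:
    # at each zero crossing of channel A, the sign of channel B at that
    # instant decides whether to count up or down (half-fringe resolution)
    count = 0
    for i in range(1, len(a_samples)):
        prev_a, cur_a = a_samples[i - 1], a_samples[i]
        if prev_a < 0 <= cur_a:            # rising zero crossing of A
            count += 1 if b_samples[i] > 0 else -1
        elif prev_a >= 0 > cur_a:          # falling zero crossing of A
            count += 1 if b_samples[i] < 0 else -1
    return count

# forward phase sweep, then reverse back: the net count returns to zero
phases = [0.02 * k for k in range(500)] + [10.0 - 0.02 * k for k in range(500)]
a = [math.sin(p) for p in phases]
b = [math.cos(p) for p in phases]
print(fringe_count(a[:500], b[:500]), fringe_count(a, b))  # → 3 0
```

A phase error between the two channels (nonorthogonality) shifts where B changes sign relative to A's crossings, which is exactly the effect the paper's method detects and removes using only this kind of crossing/counting logic.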
Ellis, Sam; Reader, Andrew J
2018-04-26
reconstructions with increased count levels. In tumour regions each method produces subtly different results in terms of preservation of tumour quantification and reconstruction root mean-squared error (RMSE). In particular, in the two-scan simulations, the DE-PML method produced tumour means in close agreement with MLEM reconstructions, while the DTV-PML method produced the lowest errors due to noise reduction within the tumour. Across a range of tumour responses and different numbers of scans, similar results were observed, with DTV-PML producing the lowest errors of the three priors and DE-PML producing the lowest bias. Similar improvements were observed in the reconstructions of the real longitudinal datasets, although imperfect alignment of the two PET images resulted in additional changes in the difference image that affected the performance of the proposed methods. Reconstruction of longitudinal datasets by penalising difference images between pairs of scans from a data series allows for noise reduction in all reconstructed images. An appropriate choice of penalty term and penalty strength allows for this noise reduction to be achieved while maintaining reconstruction performance in regions of change, either in terms of quantification of mean intensity via DE-PML, or in terms of tumour RMSE via DTV-PML. Overall, improving the image quality of longitudinal datasets via simultaneous reconstruction has the potential to improve upon currently used methods, allow dose reduction, or reduce scan time while maintaining image quality at current levels. This article is protected by copyright. All rights reserved.
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
Statistical errors in Monte Carlo estimates of systematic errors
Roe, Byron P.
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
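For a linear toy model of the kind treated in this note, the unisim and multisim error estimates can be compared directly. The following sketch uses invented sensitivity coefficients (it is not taken from the paper); in a linear model both estimates converge to the same quadrature sum:

```python
import random
import math

random.seed(1)

# Hypothetical linear model: the observable shifts linearly with each
# systematic parameter s_i (the coefficients a_i are illustrative only).
a = [0.5, 1.2, 0.8]       # sensitivity of the observable to each systematic
sigma = [1.0, 1.0, 1.0]   # one-standard-deviation sizes of the systematics

def observable(s):
    return sum(ai * si for ai, si in zip(a, s))

# Unisim: one MC run per parameter, each varied by one standard deviation;
# the individual shifts are combined in quadrature.
nominal = observable([0.0] * len(a))
shifts = [observable([sigma[i] if j == i else 0.0 for j in range(len(a))]) - nominal
          for i in range(len(a))]
unisim_error = math.sqrt(sum(d * d for d in shifts))

# Multisim: every run varies all parameters at once, drawn from their
# assumed normal distributions; the spread of the runs is the error.
runs = [observable([random.gauss(0.0, s) for s in sigma]) for _ in range(20000)]
mean = sum(runs) / len(runs)
multisim_error = math.sqrt(sum((y - mean) ** 2 for y in runs) / (len(runs) - 1))

print(unisim_error)    # exact in a linear model: sqrt(sum a_i^2 sigma_i^2)
print(multisim_error)  # approaches the same value as the number of runs grows
```

In the linear regime the two agree; the paper's point concerns how many MC events each method needs once the statistical error of the MC samples competes with the systematic shifts.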
Statistical errors in Monte Carlo estimates of systematic errors
Energy Technology Data Exchange (ETDEWEB)
Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
Statistical errors in Monte Carlo estimates of systematic errors
International Nuclear Information System (INIS)
Roe, Byron P.
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder observing a significant reduction of the logical error probability for χ ≥ 4.
Secure count query on encrypted genomic data.
Hasan, Mohammad Zahidul; Mahdi, Md Safiur Rahman; Sadat, Md Nazmus; Mohammed, Noman
2018-05-01
Human genomic information can yield more effective healthcare by guiding medical decisions. Therefore, genomics research is gaining popularity as it can identify potential correlations between a disease and a certain gene, which improves the safety and efficacy of drug treatment and can also inform more effective prevention strategies [1]. To reduce the sampling error and to increase the statistical accuracy of this type of research project, data from different sources need to be brought together, since a single organization does not necessarily possess the required amount of data. In this case, data sharing among multiple organizations must satisfy strict policies (for instance, HIPAA and PIPEDA) that have been enforced to regulate privacy-sensitive data sharing. Storage and computation on the shared data can be outsourced to a third party cloud service provider, equipped with enormous storage and computation resources. However, outsourcing data to a third party is associated with a potential risk of privacy violation of the participants, whose genomic sequence or clinical profile is used in these studies. In this article, we propose a method for secure sharing and computation on genomic data in a semi-honest cloud server. In particular, there are two main contributions. Firstly, the proposed method can handle biomedical data containing both genotype and phenotype. Secondly, our proposed index tree scheme reduces the computational overhead significantly for executing secure count query operation. In our proposed method, the confidentiality of shared data is ensured through encryption, while making the entire computation process efficient and scalable for cutting-edge biomedical applications. We evaluated our proposed method in terms of efficiency on a database of Single-Nucleotide Polymorphism (SNP) sequences, and experimental results demonstrate that the execution time for a query of 50 SNPs in a database of 50,000 records is approximately 5 s, where each record
Architecture design for soft errors
Mukherjee, Shubu
2008-01-01
This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.
Dopamine reward prediction error coding
Schultz, Wolfram
2016-01-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...
Energy Technology Data Exchange (ETDEWEB)
Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)
2016-06-15
Purpose: To examine both the IQM’s sensitivity and false positive rate under varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An unmodified SBRT liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were each measured multiple times by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
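The threshold-independent ROC analysis used above can be illustrated with the Mann-Whitney formulation of the area under the curve; the per-plan readings below are invented for the sketch and are not IQM measurements:

```python
def roc_auc(scores_neg, scores_pos):
    """Threshold-free ROC AUC: the probability that a randomly chosen
    error-plan reading exceeds a randomly chosen unmodified-plan reading
    (ties count one half), i.e. the normalised Mann-Whitney U statistic."""
    wins = 0.0
    for n in scores_neg:
        for p in scores_pos:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_neg) * len(scores_pos))

# Invented per-segment relative deviations (%): unmodified vs. MLC-error plans.
unmodified = [0.4, -0.7, 0.2, 1.1, -0.3]
with_errors = [2.8, 4.1, -3.5, 5.0, 3.2]

print(roc_auc(unmodified, with_errors))  # → 0.8
```

An AUC of 1.0 means the two plan populations are perfectly separable by some threshold; 0.5 means the readings carry no discriminating information.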
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter Power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
Spatial data modelling and maximum entropy theory
Czech Academy of Sciences Publication Activity Database
Klimešová, Dana; Ocelíková, E.
2005-01-01
Vol. 51, No. 2 (2005), pp. 80-83, ISSN 0139-570X. Institutional research plan: CEZ:AV0Z10750506. Keywords: spatial data classification; distribution function; error distribution. Subject RIV: BD - Theory of Information
Identifying Error in AUV Communication
National Research Council Canada - National Science Library
Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B
2006-01-01
Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...
Human Errors in Decision Making
Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan
2005-01-01
The aim of this paper was to identify human errors in the decision-making process, focusing on the research question: what human errors could potentially cause decision failure during evaluation of the alternatives? Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the evaluation-of-alternatives step. The results o...
Finding beam focus errors automatically
International Nuclear Information System (INIS)
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.
1987-01-01
An automated method for finding beam focus errors uses an optimization program called COMFORT-PLUS. The steps involved in finding the correction factors are outlined, and COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors; another is that the program is not totally automated, since the user must decide a priori where to look for errors
Heuristic errors in clinical reasoning.
Rylander, Melanie; Guerrasio, Jeannette
2016-08-01
Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.
Energy Technology Data Exchange (ETDEWEB)
Shcheslavskiy, V., E-mail: vis@becker-hickl.de; Becker, W. [Becker & Hickl GmbH, Nahmitzer Damm 30, 12277 Berlin (Germany); Morozov, P.; Divochiy, A. [Scontel, Rossolimo St., 5/22-1, Moscow 119021 (Russian Federation); Vakhtomin, Yu. [Scontel, Rossolimo St., 5/22-1, Moscow 119021 (Russian Federation); Moscow State Pedagogical University, 1/1 M. Pirogovskaya St., Moscow 119991 (Russian Federation); Smirnov, K. [Scontel, Rossolimo St., 5/22-1, Moscow 119021 (Russian Federation); Moscow State Pedagogical University, 1/1 M. Pirogovskaya St., Moscow 119991 (Russian Federation); National Research University Higher School of Economics, 20 Myasnitskaya St., Moscow 101000 (Russian Federation)
2016-05-15
Time resolution is one of the main characteristics of the single photon detectors besides quantum efficiency and dark count rate. We demonstrate here an ultrafast time-correlated single photon counting (TCSPC) setup consisting of a newly developed single photon counting board SPC-150NX and a superconducting NbN single photon detector with a sensitive area of 7 × 7 μm. The combination delivers a record instrument response function with a full width at half maximum of 17.8 ps and system quantum efficiency ∼15% at wavelength of 1560 nm. A calculation of the root mean square value of the timing jitter for channels with counts more than 1% of the peak value yielded about 7.6 ps. The setup has also good timing stability of the detector–TCSPC board.
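The root mean square timing jitter quoted above, computed over channels with counts above 1% of the peak, corresponds to a count-weighted standard deviation of the instrument response histogram. A minimal sketch with an invented, idealized histogram (not the measured 17.8-ps response):

```python
def rms_jitter(times_ps, counts, threshold_frac=0.01):
    """Count-weighted RMS width of a timing histogram, restricted to
    channels whose counts exceed a fraction of the peak (default 1%)."""
    peak = max(counts)
    sel = [(t, c) for t, c in zip(times_ps, counts) if c > threshold_frac * peak]
    total = sum(c for _, c in sel)
    mean = sum(t * c for t, c in sel) / total
    var = sum(c * (t - mean) ** 2 for t, c in sel) / total
    return var ** 0.5

# Invented symmetric response: 1-ps channels around a peak at 100 ps.
times = [96, 97, 98, 99, 100, 101, 102, 103, 104]
counts = [3, 40, 400, 1500, 2400, 1500, 400, 40, 3]

print(rms_jitter(times, counts))  # RMS width in ps of the selected channels
```

The 1% cut discards sparsely populated tail channels whose large lever arm would otherwise dominate the RMS with poor statistics.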
Data Analysis & Statistical Methods for Command File Errors
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
A Hybrid Unequal Error Protection / Unequal Error Resilience ...
African Journals Online (AJOL)
The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Digital Counts of Maize Plants by Unmanned Aerial Vehicles (UAVs
Directory of Open Access Journals (Sweden)
Friederike Gnädinger
2017-05-01
Precision phenotyping, especially the use of image analysis, allows researchers to gain information on plant properties and plant health. Aerial image detection with unmanned aerial vehicles (UAVs) provides new opportunities in precision farming and precision phenotyping. Precision farming has created a critical need for spatial data on plant density. The plant number reflects not only the final field emergence but also allows a more precise assessment of the final yield parameters. The aim of this work is to advance UAV use and image analysis as a possible high-throughput phenotyping technique. In this study, four different maize cultivars were planted in plots with different seeding systems (in rows and equidistantly spaced) and different nitrogen fertilization levels (applied at 50, 150 and 250 kg N/ha). The experimental field, encompassing 96 plots, was overflown at a 50-m height with an octocopter equipped with a 10-megapixel camera taking a picture every 5 s. Images were recorded between BBCH 13–15 (BBCH is a scale identifying the phenological development stage of a plant; here, the 3- to 5-leaf stage, when the color of young leaves differs from that of older leaves). Close correlations up to R² = 0.89 were found between in situ and image-based counted plants when adapting a decorrelation stretch contrast enhancement procedure, which enhanced color differences in the images. On average, the error between visually and digitally counted plants was ≤5%. Ground cover, as determined by analyzing green pixels, ranged between 76% and 83% at these stages. However, the correlation between ground cover and digitally counted plants was very low. The presence of weeds and blurry effects on the images represent possible errors in counting plants. In conclusion, the final field emergence of maize can rapidly be assessed and allows more precise assessment of the final yield parameters. The use of UAVs and image processing has the potential to
Error studies for SNS Linac. Part 1: Transverse errors
International Nuclear Information System (INIS)
Crandall, K.R.
1998-01-01
The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)
Energy Technology Data Exchange (ETDEWEB)
Balpardo, C., E-mail: balpardo@cae.cnea.gov.a [Laboratorio de Metrologia de Radioisotopos, CNEA, Buenos Aires (Argentina); Capoulat, M.E.; Rodrigues, D.; Arenillas, P. [Laboratorio de Metrologia de Radioisotopos, CNEA, Buenos Aires (Argentina)
2010-07-15
The nuclide ²⁴¹Am decays by alpha emission to ²³⁷Np. Most of the decays (84.6%) populate the excited level of ²³⁷Np with an energy of 59.54 keV. Digital coincidence counting was applied to standardize a solution of ²⁴¹Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter, and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined solid angle counting taking into account activity inhomogeneity in the active deposit. The results show consistency between the three methods within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods.
An Adaptive Smoother for Counting Measurements
International Nuclear Information System (INIS)
Kondrasovs Vladimir; Coulon Romain; Normand Stephane
2013-06-01
Counting measurements associated with nuclear instruments are tricky to carry out due to the stochastic nature of radioactivity. Counted events have to be processed and filtered in order to display a stable count rate value and to allow monitoring of variations in the measured activity. Smoothers (such as the moving average) are adjusted by a time constant defined as a compromise between stability and response time. A new approach has been developed that improves the response time while maintaining count rate stability. It combines a smoother with a detection filter. A memory of counting data is processed to calculate several count rate estimates using several integration times. These estimates are then sorted in the memory from short to long integration times. A measurement position, in terms of integration time, is then chosen from this memory after a detection test. An inhomogeneity in the Poisson counting process is detected by comparing the current position estimate with the other estimates in the memory, with respect to the associated statistical variance calculated under the homogeneity assumption. The measurement position (historical time) and the decision to forget obsolete data or keep useful data in memory are managed using the detection test result. The proposed smoother is thus an adaptive, learning algorithm that optimizes the response time while maintaining counting stability and converging efficiently to the best count rate estimate after an effective change in activity. The algorithm is also low-recursive and thus easily embedded into DSP electronics based on FPGAs or micro-controllers meeting 'real life' time requirements. (authors)
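One possible reading of the scheme above, simplified to a single z-test against the Poisson variance, can be sketched as follows; window sizes and the detection threshold are illustrative choices, not the authors' exact parameters:

```python
def adaptive_count_rate(counts, max_window=64, z_threshold=3.0):
    """Sketch of an adaptive smoother for Poisson counting data: average
    over the longest recent window that remains statistically consistent
    (z-test, Poisson variance = mean) with the newest sample.
    counts: events per fixed acquisition interval."""
    rates = []
    for i, c in enumerate(counts):
        accepted = 1
        trial = 2
        # Grow the integration window while the averaged rate stays
        # compatible with the current sample under Poisson statistics.
        while trial <= max_window and trial <= i + 1:
            w = counts[i - trial + 1:i + 1]
            mean = sum(w) / len(w)
            sigma = max(mean, 1e-9) ** 0.5  # Poisson: variance equals mean
            if abs(c - mean) > z_threshold * sigma:
                break                        # change detected: keep short memory
            accepted = trial
            trial *= 2
        w = counts[i - accepted + 1:i + 1]
        rates.append(sum(w) / len(w))
    return rates

# Constant activity of ~100 counts/interval, stepping up to ~400.
data = [100, 104, 97, 99, 101, 400, 396, 402, 405, 399]
smoothed = adaptive_count_rate(data)
print(smoothed[4])   # stable region: long window, close to 100
print(smoothed[-1])  # after the step: converges near 400, not dragged down
```

In the stable region the window grows and the noise shrinks; at the activity step the detection test rejects the long windows, so the estimate tracks the new rate almost immediately.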
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, and law-and-order buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.
Radiation Counting System Software Using Visual Basic
International Nuclear Information System (INIS)
Nanda Nagara; Didi Gayani
2009-01-01
A gamma radiation counting system has been created using an interface card paired with a personal computer (PC) and operated by a Visual Basic program. The program is controlled through various menu selections such as "Multi Counting", "Counting and Record" and "View Data". An interface card for data acquisition was built using an AMD9513 component, a programmable counter and timer. The counting system was tested and used in the waste facility at PTNBR, and the results are quite good. (author)
Algorithm for counting large directed loops
Energy Technology Data Exchange (ETDEWEB)
Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)
2008-06-06
We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.
Counts and colors of faint galaxies
International Nuclear Information System (INIS)
Kron, R.G.
1980-01-01
The color distribution of faint galaxies is an observational dimension which has not yet been fully exploited, despite the important constraints obtainable for galaxy evolution and cosmology. Number-magnitude counts alone contain very diluted information about the state of things because galaxies from a wide range in redshift contribute to the counts at each magnitude. The most-frequently-seen type of galaxy depends on the luminosity function and the relative proportions of galaxies of different spectral classes. The addition of color as a measured quantity can thus considerably sharpen the interpretation of galaxy counts since the apparent color depends on the redshift and rest-frame spectrum. (Auth.)
International Nuclear Information System (INIS)
Key, R.M.
1977-01-01
In a recent report, Sarmiento et al. (1976) presented a method for estimating the statistical error associated with ²²²Rn scintillation counting. Because of certain approximations, the method is less accurate than that of an earlier work by Lucas and Woodward (1964). The Sarmiento method and the Lucas method are compared, and the magnitude of the errors incurred using the approximations is determined. For counting times greater than 300 minutes, the disadvantage of the slight inaccuracies of the Sarmiento method is outweighed by the advantage of easier calculation. (Auth.)
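Both methods ultimately rest on Poisson counting statistics, where the 1-sigma error of a net count follows from the gross and background counts. A minimal sketch of that baseline calculation (ignoring the decay and efficiency corrections the papers treat):

```python
import math

def counting_error(gross_counts, background_counts=0.0):
    """Statistical (1-sigma) error of a net count under Poisson statistics:
    net = gross - background, and since gross and background are
    independent Poisson counts, var(net) = gross + background."""
    net = gross_counts - background_counts
    sigma = math.sqrt(gross_counts + background_counts)
    return net, sigma

net, sigma = counting_error(10000, 400)
print(net)                # 9600
print(sigma)              # sqrt(10400) ≈ 102.0
print(sigma / net * 100)  # relative error in percent, ≈ 1.06%
```

The fractional error shrinks as 1/√N, which is why longer counting times (as in the 300-minute comparison above) reduce the relative impact of approximation errors in either method.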
Dual Processing and Diagnostic Errors
Norman, Geoff
2009-01-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…
Barriers to medical error reporting
Directory of Open Access Journals (Sweden)
Jalal Poorolajal
2015-01-01
Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher among men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
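The AUC being corrected is, empirically, the Mann-Whitney probability that a randomly chosen case's biomarker exceeds a randomly chosen control's. A minimal sketch of that uncorrected estimator (with invented biomarker values; the paper's correction method itself is not reproduced here):

```python
def empirical_auc(cases, controls):
    """Empirical AUC as the Mann-Whitney probability that a case
    biomarker exceeds a control biomarker (ties count as 1/2)."""
    wins = 0.0
    for x in cases:
        for y in controls:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# hypothetical biomarker values
auc = empirical_auc(cases=[3.1, 2.8, 4.0], controls=[1.9, 2.8, 2.5])
```

Adding measurement noise to both lists pulls this statistic toward 0.5, which is exactly the bias the correction methods above address.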
Cognitive aspect of diagnostic errors.
Phua, Dong Haur; Tan, Nigel C K
2013-01-01
Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the best-studied and most popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, while all the unobservable substitutions that actually occurred in the evolutionary history are omitted. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
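The substitution count that MP minimizes can be computed for a fixed tree with the classic Fitch small-parsimony pass. A toy sketch (standard Fitch, not the authors' probability-tree model):

```python
def fitch_score(tree, site):
    """Minimum number of substitutions at a single site on a fixed
    rooted binary tree (Fitch small-parsimony). `tree` is a nested
    tuple of leaf names; `site` maps leaf name -> nucleotide."""
    def walk(node):
        if isinstance(node, str):                   # leaf
            return {site[node]}, 0
        left_states, left_cost = walk(node[0])
        right_states, right_cost = walk(node[1])
        common = left_states & right_states
        if common:                                  # agreement: no substitution
            return common, left_cost + right_cost
        return left_states | right_states, left_cost + right_cost + 1

    return walk(tree)[1]

# ((A,B),(C,D)) with observed states T, T, A, G needs 2 substitutions
score = fitch_score((("A", "B"), ("C", "D")),
                    {"A": "T", "B": "T", "C": "A", "D": "G"})
```

This counts only observable changes, which is precisely the criticism of classical MP mentioned above.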
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
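The likelihood-maximization step MXLKID performs can be illustrated in miniature. A hypothetical one-parameter sketch (grid search over a Gaussian log-likelihood for an assumed model output m(θ) = θ²; this is not MXLKID's actual algorithm):

```python
def log_likelihood(theta, data, sigma):
    """Gaussian log-likelihood (up to a constant) for noisy
    measurements of the assumed model output m(theta) = theta**2."""
    model = theta**2
    return -sum((z - model)**2 for z in data) / (2 * sigma**2)

# invented noisy observations of theta**2 with true theta = 2
data = [3.9, 4.2, 4.05, 3.85]

# crude maximization by grid search over theta in [1.0, 3.0]
grid = [k * 0.001 for k in range(1000, 3001)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, data, sigma=0.1))
```

A real identifier would use a gradient-based maximizer, but the principle, picking the parameter that maximizes the likelihood of the noisy data, is the same.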
A study of the effect of measurement error in predictor variables in nondestructive assay
International Nuclear Information System (INIS)
Burr, Tom L.; Knepper, Paula L.
2000-01-01
It is not widely known that ordinary least squares estimates exhibit bias if there are errors in the predictor variables. For example, enrichment measurements are often fit to two predictors: Poisson-distributed count rates in the region of interest and in the background. Both count rates have at least random variation due to counting statistics. Therefore, the parameter estimates will be biased. In this case, the effect of bias is a minor issue because there is almost no interest in the parameters themselves. Instead, the parameters will be used to convert count rates into estimated enrichment. In other cases, this bias source is potentially more important. For example, in tomographic gamma scanning, there is an emission stage which depends on predictors (the 'system matrix') that are estimated with error during the transmission stage. In this paper, we provide background information for the impact and treatment of errors in predictors, present results of candidate methods of compensating for the effect, review some of the nondestructive assay situations where errors in predictors occur, and provide guidance for when errors in predictors should be considered in nondestructive assay.
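The attenuation bias described here is easy to demonstrate by simulation. A sketch with synthetic data (classical measurement error added to the predictor; all numbers invented):

```python
import random

random.seed(42)

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx)**2 for a in x)
    return sxy / sxx

true_slope = 2.0
n = 20000
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [true_slope * x + random.gauss(0.0, 0.1) for x in x_true]

# predictor observed with error of the same variance as the signal:
# classical errors-in-variables attenuates the slope by 1/(1+1) = 0.5
x_obs = [x + random.gauss(0.0, 1.0) for x in x_true]

clean = ols_slope(x_true, y)    # close to 2.0
biased = ols_slope(x_obs, y)    # attenuated toward 1.0
```

The biased slope is pulled toward zero by the factor var(x)/(var(x)+var(error)), which is why "errors in predictors" matters even when the fit looks good.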
A measurement technique for counting processes
International Nuclear Information System (INIS)
Cantoni, V.; Pavia Univ.; De Lotto, I.; Valenziano, F.
1980-01-01
A technique for the estimation of first and second order properties of a stationary counting process is presented here which uses standard instruments for analysis of a continuous stationary random signal. (orig.)
Why calories count: from science to politics
National Research Council Canada - National Science Library
Nestle, Marion; Nesheim, Malden C
2012-01-01
.... They are also hard to understand. In Why Calories Count, Marion Nestle and Malden Nesheim explain in clear and accessible language what calories are and how they work, both biologically and politically...
CoC Housing Inventory Count Reports
Department of Housing and Urban Development — Continuum of Care (CoC) Homeless Assistance Programs Housing Inventory Count Reports are a snapshot of a CoC’s housing inventory, available at the national and state...
White Blood Cell Counts and Malaria
National Research Council Canada - National Science Library
McKenzie, F. E; Prudhomme, Wendy A; Magill, Alan J; Forney, J. R; Permpanich, Barnyen; Lucas, Carmen; Gasser, Jr., Robert A; Wongsrichanalai, Chansuda
2005-01-01
White blood cells (WBCs) were counted in 4697 individuals who presented to outpatient malaria clinics in Maesod, Tak Province, Thailand, and Iquitos, Peru, between 28 May and 28 August 1998 and between 17 May and 9 July 1999...
VSRR Provisional Drug Overdose Death Counts
U.S. Department of Health & Human Services — This data contains provisional counts for drug overdose deaths based on a current flow of mortality data in the National Vital Statistics System. National...
Low white blood cell count and cancer
White blood cells (WBCs) fight infections from bacteria, viruses, fungi, and ...
Count rate effect in proportional counters
International Nuclear Information System (INIS)
Bednarek, B.
1980-01-01
A new concept is presented explaining changes in spectrometric parameters of proportional counters which occur due to varying count rate. The basic feature of this concept is that the gas gain of the counter remains constant over a wide range of count rates and that the observed decrease in pulse amplitude and deterioration of the energy resolution are the results of changes in the shape of the original current pulses generated in the active volume of the counter. In order to confirm the validity of this statement, measurements of the gas amplification factor have been made over a wide count rate range. It is shown that above a certain critical value the gas gain depends on both the operating voltage and the count rate. (author)
Atom counting with accelerator mass spectrometry
International Nuclear Information System (INIS)
Kutschera, Walter
1995-01-01
A brief review of the current status and some recent applications of accelerator mass spectrometry (AMS) are presented. Some connections to resonance ionization mass spectroscopy (RIS) as the alternate atom counting method are discussed
Analysis of counting data: Development of the SATLAS Python package
Gins, W.; de Groote, R. P.; Bissell, M. L.; Granados Buitrago, C.; Ferrer, R.; Lynch, K. M.; Neyens, G.; Sels, S.
2018-01-01
For the analysis of low-statistics counting experiments, a traditional nonlinear least squares minimization routine may not always provide correct parameter and uncertainty estimates due to the assumptions inherent in the algorithm(s). In response to this, a user-friendly Python package (SATLAS) was written to provide an easy interface between the data and a variety of minimization algorithms which are suited for analyzing low- as well as high-statistics data. The advantage of this package is that it allows the user to define their own model function and then compare different minimization routines to determine the optimal parameter values and their respective (correlated) errors. Experimental validation of the different approaches in the package is done through analysis of hyperfine structure data of 203Fr gathered by the CRIS experiment at ISOLDE, CERN.
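The low-count failure of least squares that motivates SATLAS can be shown with a constant-rate toy fit. This standalone sketch (not using the SATLAS API; counts invented) compares the Poisson maximum-likelihood estimate with a Neyman chi-square fit that weights by the observed counts:

```python
# a low-statistics set of spectrum bins
counts = [0, 1, 2, 0, 1, 3, 0, 2, 1, 0]

# Poisson maximum likelihood for a constant rate: the sample mean
lam_mle = sum(counts) / len(counts)

# Neyman chi-square (weights 1/y, with empty bins given weight 1):
# minimized here by a simple grid search
def neyman_chi2(lam):
    return sum((y - lam)**2 / max(y, 1) for y in counts)

lam_chi2 = min((0.001 * k for k in range(1, 5000)), key=neyman_chi2)
# lam_chi2 comes out below lam_mle: down-weighting high bins biases
# the chi-square fit low when counts are small
```

With high statistics the two estimates converge, which is why the bias only bites in low-count work such as the hyperfine spectra analyzed here.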
Preparation of source mounts for 4π counting
International Nuclear Information System (INIS)
Johnson, E.P.
1991-01-01
The 4πβ/γ counter in the ANSTO radioisotope standards laboratory at Lucas Heights constitutes part of the Australian national standard for radioactivity. Sources to be measured in the counter must be mounted on a substrate which is strong enough to withstand careful handling and transport. The substrate must also be electrically conducting to minimise counting errors caused by charging of the source, and it must have very low superficial density so that little or none of the radiation is absorbed. The entire process of fabrication of VYNS films, coating them with gold/palladium and transferring them to source mount rings, as carried out in the radioisotope standards laboratory, is documented. 3 refs., 2 tabs., 6 figs
Internal dosimetry by whole body counting techniques
International Nuclear Information System (INIS)
Sharma, R.C.
1995-01-01
Over decades, whole body counting and bioassay - the two principal methods of internal dosimetry have been most widely used to assess, limit and control the intakes of radioactive materials and the consequent internal doses by the workers in nuclear industry. This paper deals with the whole body counting techniques. The problems inherent in the interpretation of monitoring data and likely future directions of development in the assessments of internal doses by direct methods are outlined. (author). 14 refs., 9 figs., 1 tab
How to count an introduction to combinatorics
Allenby, RBJT
2010-01-01
What's It All About? What Is Combinatorics? Classic Problems What You Need to Know Are You Sitting Comfortably? Permutations and Combinations The Combinatorial Approach Permutations Combinations Applications to Probability Problems The Multinomial Theorem Permutations and Cycles Occupancy Problems Counting the Solutions of Equations New Problems from Old A "Reduction" Theorem for the Stirling Numbers The Inclusion-Exclusion Principle Double Counting Derangements A Formula for the Stirling Numbers Stirling and Catalan Numbers Stirling Numbers Permutations and Stirling Numbers Catalan Numbers Pa
Remote system for counting of nuclear pulses
International Nuclear Information System (INIS)
Nieves V, J.A.; Garcia H, J.M.; Aguilar B, M.A.
1999-01-01
In this work, the remote system for counting of nuclear pulses is described technically; it is an integral part of the project on radiological monitoring in a petroleum distillation tower. The system acquires the count of nuclear particles incident on a nuclear detector, processes this information, and sends it in serial form, using the RS-485 standard, to a remote receiver, which can be a personal computer or any other device capable of interpreting the communication protocol. (Author)
International Nuclear Information System (INIS)
Dongming, L.; Shuhai, J.; Houwen, L.
2016-01-01
In order to routinely evaluate workers' internal exposure due to intake of radionuclides, a whole-body counter (WBC) at the Third Qinshan Nuclear Power Co. Ltd. (TQNPC) is used. Counting would typically occur immediately after a confirmed or suspected inhalation exposure. The counting geometry would differ as a result of the height of the individual being counted, which would result in over- or underestimated intake(s). In this study, Monte Carlo simulation was applied to evaluate the counting efficiency when performing a lung count using the WBC at the TQNPC. In order to validate the simulated efficiencies for lung counting, the WBC was benchmarked for various lung positions using a 137 Cs source. The results show that the simulated efficiencies are fairly consistent with the measured ones for 137 Cs, with a relative error of 0.289%. For a lung organ simulation, the discrepancy between the calibration phantom and the Chinese reference adult person (170 cm) was within 6% for peak energies ranging from 59.5 keV to 2000 keV. The relative errors vary from 4.63% to 8.41% depending on the person's height and photon energy. Therefore, the simulation technique is effective and practical for lung counting, which is difficult to calibrate using a physical phantom. (authors)
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm, polynomial in the number of sampled individuals, for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
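For contrast with the ranked and constrained spaces treated in this paper, the unconstrained, unranked case has a closed form: the number of rooted binary topologies on n labelled tips is the double factorial (2n-3)!!. A sketch:

```python
def num_rooted_binary_trees(n):
    """Number of rooted binary tree topologies on n labelled leaves:
    (2n-3)!! = 1 * 3 * 5 * ... * (2n-3)."""
    count = 1
    for k in range(3, 2 * n - 2, 2):
        count *= k
    return count

# 3 leaves -> 3 topologies, 4 -> 15, 10 -> 34,459,425
sizes = [num_rooted_binary_trees(n) for n in (3, 4, 10)]
```

The super-exponential growth visible even at n = 10 is why these counts matter for normalizing prior distributions over tree space.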
Seed counting system evaluation using arduino microcontroller
Directory of Open Access Journals (Sweden)
Paulo Fernando Escobar Paim
2018-01-01
Full Text Available The development of automated systems has been prominent in the most diverse productive sectors, among them the agricultural sector. These systems aim to optimize activities by increasing operational efficiency and quality of work. In this context, the present work evaluates a prototype developed for seed counting in the laboratory, using an Arduino microcontroller. The seed counting prototype was built using a dosing mechanism commonly used in seeders, an electric motor, an Arduino Uno, a light dependent resistor, and a light emitting diode. To test the prototype, a completely randomized design (CRD) was used in a two-factorial scheme composed of three groups defined according to the number of seeds tested (500, 1000 and 1500) and three speeds of the dosing disc that allowed distribution at 17, 21 and 32 seeds per second, with 40 repetitions, evaluating the performance of the seed counting prototype at different speeds. The bench counter prototype showed moderate variability in the number of seeds counted across the nine tests and high precision in the seed count at the distribution speeds of 17 and 21 seeds per second for up to 1500 seeds tested. Therefore, based on the observed results, the developed prototype presents itself as an excellent tool for counting seeds in the laboratory.
International Nuclear Information System (INIS)
Okura, Yuki; Futamase, Toshifumi
2013-01-01
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid-shift error; the first-order effects cancel on averaging, but the second-order effects do not. We derive formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ∼ 11.7.
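The moments in question are the quadrupole moments from which the ellipticity is formed. A minimal unweighted sketch (no elliptical weight function and no noise correction, i.e. not the E-HOLICs estimator; the pixel grids are invented):

```python
def ellipticity(image):
    """Unweighted quadrupole moments of a pixel grid and the two-component
    ellipticity e = ((Qxx - Qyy)/(Qxx + Qyy), 2*Qxy/(Qxx + Qyy))."""
    total = qx = qy = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            total += val
            qx += val * x
            qy += val * y
    cx, cy = qx / total, qy / total          # flux-weighted centroid
    qxx = qyy = qxy = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            qxx += val * (x - cx)**2
            qyy += val * (y - cy)**2
            qxy += val * (x - cx) * (y - cy)
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom

# a circularly symmetric blob is round; doubling the x-axis wings stretches it
e1, e2 = ellipticity([[0, 1, 0], [1, 4, 1], [0, 1, 0]])
e1s, _ = ellipticity([[0, 1, 0], [2, 4, 2], [0, 1, 0]])   # e1s > 0
```

Adding Poisson sky noise to such an image perturbs both the centroid and the moments, which is the second-order effect the paper's correction formulae remove.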
Energy Technology Data Exchange (ETDEWEB)
Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others
2013-09-01
We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications that make it suitable for treatment by the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolving the redundancy are probably the most important. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy.
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Development of an automated asbestos counting software based on fluorescence microscopy.
Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio
2015-01-01
An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for the samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in accuracy of fiber identification, the developed software could already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
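At its core, automated fiber counting reduces to labelling connected components in a segmented image. A toy sketch on a hand-made binary mask (the real software's segmentation and fiber-shape filtering are not modelled here):

```python
def count_fibers(mask):
    """Count connected components (4-connectivity) in a binary mask,
    a toy stand-in for counting segmented fluorescent fibers."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                     # new component found
                stack = [(r, c)]
                seen[r][c] = True
                while stack:                   # flood-fill the component
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1]]
n_fibers = count_fibers(mask)
```

A production system would additionally filter components by length-to-width ratio to separate fibers from particles, which is where the low-concentration accuracy issues mentioned above arise.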
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
The maximum entropy method of moments and Bayesian probability theory
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
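The functional form that both the method of moments and its Bayesian extension build on is the standard maximum entropy density under m moment constraints, restated here for reference:

```latex
p(x \mid \boldsymbol{\lambda})
  = \frac{1}{Z(\boldsymbol{\lambda})}
    \exp\!\Bigl(-\sum_{i=1}^{m} \lambda_i f_i(x)\Bigr),
\qquad
Z(\boldsymbol{\lambda})
  = \int \exp\!\Bigl(-\sum_{i=1}^{m} \lambda_i f_i(x)\Bigr)\, dx,
```

with each Lagrange multiplier λ_i fixed by the moment constraint ∫ f_i(x) p(x) dx = μ_i. In the Bayesian treatment described in this abstract, the λ_i acquire posterior distributions rather than point values, which is what makes error bars on the estimated density possible.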
Negative binomial mixed models for analyzing microbiome count data.
Zhang, Xinyan; Mallick, Himel; Tang, Zaixiang; Zhang, Lei; Cui, Xiangqin; Benson, Andrew K; Yi, Nengjun
2017-01-03
Recent advances in next-generation sequencing (NGS) technology enable researchers to collect a large volume of metagenomic sequencing data. These data provide valuable resources for investigating interactions between the microbiome and host environmental/clinical factors. In addition to the well-known properties of microbiome count measurements, for example, varied total sequence reads across samples, over-dispersion and zero-inflation, microbiome studies usually collect samples with hierarchical structures, which introduce correlation among the samples and thus further complicate the analysis and interpretation of microbiome count data. In this article, we propose negative binomial mixed models (NBMMs) for detecting the association between the microbiome and host environmental/clinical factors for correlated microbiome count data. Although they do not deal with zero-inflation, the proposed mixed-effects models account for correlation among the samples by incorporating random effects into the commonly used fixed-effects negative binomial model, and can efficiently handle over-dispersion and varying total reads. We have developed a flexible and efficient IWLS (Iterative Weighted Least Squares) algorithm to fit the proposed NBMMs by taking advantage of the standard procedure for fitting linear mixed models. We evaluate and demonstrate the proposed method via extensive simulation studies and an application to mouse gut microbiome data. The results show that the proposed method has desirable properties and outperforms the previously used methods in terms of both empirical power and Type I error. The method has been incorporated into the freely available R package BhGLM ( http://www.ssg.uab.edu/bhglm/ and http://github.com/abbyyan3/BhGLM ), providing a useful tool for analyzing microbiome data.
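The over-dispersion that the NBMMs handle enters through the NB2 variance function. A small sketch of the NB2 probability mass function and its mean-variance relation (standard-library Python, not the R/BhGLM implementation; the parameter values are invented):

```python
import math

def neg_binomial_pmf(y, mu, theta):
    """NB2 pmf with mean mu and dispersion theta:
    Var(Y) = mu + mu**2 / theta (Poisson recovered as theta -> infinity)."""
    log_p = (math.lgamma(y + theta) - math.lgamma(theta) - math.lgamma(y + 1)
             + theta * math.log(theta / (theta + mu))
             + y * math.log(mu / (theta + mu)))
    return math.exp(log_p)

mu, theta = 4.0, 2.0
variance = mu + mu**2 / theta                # 12.0, versus 4.0 for a Poisson
total = sum(neg_binomial_pmf(y, mu, theta) for y in range(200))
```

The extra mu²/theta term is what lets the model absorb the over-dispersion typical of microbiome counts; the mixed-model part then adds random effects on top of this likelihood.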
Prediction of in vivo background in phoswich lung count spectra
International Nuclear Information System (INIS)
Richards, N.W.
1999-01-01
Phoswich scintillation counters are used to detect actinides deposited in the lungs. The resulting spectra, however, contain Compton background from the decay of 40 K, which occurs naturally in the striated muscle tissue of the body. To determine the counts due to actinides in a lung count spectrum, the counts due to 40 K scatter must first be subtracted out. The 40 K background in the phoswich NaI(Tl) spectrum was predicted from an energy region of interest called the monitor region, which lies above the 238 Pu and 241 Am regions where the 238 Pu and 241 Am photopeaks occur. Empirical models were developed to predict the backgrounds in the 238 Pu and 241 Am regions by testing multiple linear and nonlinear regression models. The initial multiple regression models contain a monitor region variable as well as the variables gender, (weight/height) α , and interaction terms. Data were collected from 64 male and 63 female subjects with no internal exposure. For the 238 Pu region, the only significant predictor was found to be the monitor region. For the 241 Am region, the monitor region was found to have the greatest effect on prediction, while gender was significant only when weight/height was included in a model. Gender-specific models were thus developed. The empirical models for the 241 Am region that contain weight/height were shown to have the best coefficients of determination (R 2 ) and the lowest mean squares for error (MSE).
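The monitor-region prediction is, in its simplest form, a regression of region-of-interest background on monitor counts. A sketch with hypothetical count data (these are not the paper's fitted gender-specific models or coefficients):

```python
def fit_line(x, y):
    """Ordinary least squares fit of y = a + b*x, the simplest form
    of a monitor-region background predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx)**2 for xi in x))
    a = my - b * mx
    return a, b

# invented monitor-region counts vs Pu-region background counts
monitor = [100.0, 150.0, 200.0, 250.0]
pu_bkg = [55.0, 78.0, 102.0, 125.0]
a, b = fit_line(monitor, pu_bkg)

# predicted Pu-region background for a new subject's monitor count
predicted = a + b * 180.0
```

The paper's models extend this with gender and (weight/height) terms, but the subtraction logic is the same: predict the 40 K-driven background, then remove it from the region of interest.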
Measuring worst-case errors in a robot workcell
International Nuclear Information System (INIS)
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future testing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Human errors in NPP operations
International Nuclear Information System (INIS)
Sheng Jufang
1993-01-01
Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, workplace factors, and communication and training practices are the primary root causes, while omission, transposition and quantitative mistakes are the most frequent error modes. Recommendations for domestic research on human performance problems in NPPs are suggested
Linear network error correction coding
Guang, Xuan
2014-01-01
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an
Spent fuel bundle counter sequence error manual - RAPPS (200 MW) NGS
International Nuclear Information System (INIS)
Nicholson, L.E.
1992-01-01
The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However, if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpreting the various sequence error messages that can occur and suggests the probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included, however, where experience has indicated that several faults can occur simultaneously
Spent fuel bundle counter sequence error manual - KANUPP (125 MW) NGS
International Nuclear Information System (INIS)
Nicholson, L.E.
1992-01-01
The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However, if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message may contain adequate information to determine the cause of the message. This manual provides a guide to interpreting the various sequence error messages that can occur and suggests the probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included, however, where experience has indicated that several faults can occur simultaneously
Droplet-counting Microtitration System for Precise On-site Analysis.
Kawakubo, Susumu; Omori, Taichi; Suzuki, Yasutada; Ueta, Ikuo
2018-01-01
A new microtitration system based on the counting of titrant droplets has been developed for precise on-site analysis. The dropping rate was controlled by inserting a capillary tube as a flow resistance in a laboratory-made micropipette. The error of titration was 3% in a simulated titration with 20 droplets. The pre-addition of a titrant was proposed for precise titration within an error of 0.5%. The analytical performances were evaluated for chelate titration, redox titration and acid-base titration.
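The reported gain from pre-addition can be motivated with a back-of-the-envelope model (our illustrative assumption, not the authors' stated analysis): the counting uncertainty is on the order of half a droplet, so its relative size shrinks when a known bulk of titrant is pre-added and only the remainder is counted droplet by droplet.

```python
# Hedged sketch of why pre-adding titrant reduces relative error.
# The +/- half-droplet figure is an assumed quantization uncertainty.

def relative_error(pre_added_drops, counted_drops, half_drop=0.5):
    """Relative titration error when a +/- half-droplet counting
    uncertainty applies to a total delivery of
    (pre_added + counted) droplet equivalents."""
    return half_drop / (pre_added_drops + counted_drops)

# Pure droplet titration with 20 droplets: 2.5% error,
# consistent with the ~3% reported above.
print(relative_error(0, 20))    # 0.025
# Pre-add the equivalent of 80 droplets, count only the last 20: 0.5%.
print(relative_error(80, 20))   # 0.005
```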
Photon counting and fluctuation of molecular movement
International Nuclear Information System (INIS)
Inohara, Koichi
1978-01-01
The direct measurement of the fluctuation of molecular motions, which provides useful information on molecular movement, was conducted by introducing the photon counting method. Photon counting makes it possible to treat a molecular system consisting of a small number of molecules, as is done for a radioisotope in the detection of a small number of atoms, which is significant in biological systems. The method is based on counting the number of photons of a definite polarization emitted in a definite time interval from fluorescent molecules excited by pulsed light, which are bound to marked large molecules found in a definite spatial region. Using the probability of finding a number of molecules oriented in a definite direction in the definite spatial region, the probability of counting a number of photons in a definite time interval can be calculated. Thus the measurable count rate of photons can be related to the fluctuation of molecular movement. The measurement was carried out under the condition that the probability of the simultaneous arrival of more than two photons at a detector is less than 1/100. As experimental results, the resolving power of the photon-counting apparatus and the frequency distribution of the number of photons of a definite polarization counted for 1 nanosecond are shown. In solution, the variance of the number of molecules, 500 on average, is 1200, as estimated from the experimental data by assuming a normal distribution. This departure from the Poisson distribution means that a certain correlation does exist in molecular movement. In solid solution, no significant deviation was observed. The correlation existing in molecular movement can be expressed in terms of the fluctuation of the number of molecules. (Nakai, Y.)
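The Poisson-departure test implied by this abstract (variance 1200 against a mean of 500) amounts to computing a variance-to-mean ratio, often called the Fano factor; for an ideal Poisson process it equals 1. The sketch below uses invented count data, not the paper's measurements.

```python
# Variance-to-mean (Fano factor) sketch with invented counts.
from statistics import mean, pvariance

def fano_factor(counts):
    """Variance-to-mean ratio; equals 1 for an ideal Poisson process,
    above 1 for super-Poissonian (correlated) fluctuations."""
    return pvariance(counts) / mean(counts)

# Both invented sets have mean 500; the first is spread far wider than
# Poisson statistics allow, the second far tighter (for contrast).
correlated = [440, 560, 450, 550, 460, 540, 470, 530]
tight = [499, 501, 500, 500, 498, 502, 500, 500]
print(fano_factor(correlated), fano_factor(tight))
```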
Fluorescence decay data analysis correcting for detector pulse pile-up at very high count rates
Patting, Matthias; Reisch, Paja; Sackrow, Marcus; Dowler, Rhys; Koenig, Marcelle; Wahl, Michael
2018-03-01
Using time-correlated single photon counting for the purpose of fluorescence lifetime measurements is usually limited in speed due to pile-up. With modern instrumentation, this limitation can be lifted significantly, but some artifacts due to frequent merging of closely spaced detector pulses (detector pulse pile-up) remain an issue to be addressed. We propose a data analysis method correcting for this type of artifact and the resulting systematic errors. It physically models the photon losses due to detector pulse pile-up and incorporates the loss in the decay fit model employed to obtain fluorescence lifetimes and relative amplitudes of the decay components. Comparison of results with and without this correction shows a significant reduction of systematic errors at count rates approaching the excitation rate. This allows quantitatively accurate fluorescence lifetime imaging at very high frame rates.
Noun Countability; Count Nouns and Non-count Nouns, What are the Syntactic Differences Between them?
Directory of Open Access Journals (Sweden)
Azhar A. Alkazwini
2016-11-01
Full Text Available Words that function as the subjects of verbs, objects of verbs or prepositions, and which can have a plural form and possessive ending, are known as nouns. They are described as referring to persons, places, things, states, or qualities, and might also be used as attributive modifiers. In this paper, classes and subclasses of nouns are presented; then noun countability, branching into count and non-count nouns, is discussed. A number of examples illustrating differences between count and non-count nouns are presented, including determiner-head co-occurrence restrictions of number and subject-verb agreement, together with some exceptions to the agreement rule. Also illustrated are the lexically inherent number in nouns and how inherently plural nouns are classified in terms of (+/-) count. This research discusses the partitive construction of count and non-count nouns and nouns as attributive modifiers, and concludes that there are syntactic differences between count and non-count nouns in the English language.
Benjamin Thompson - Count Rumford: Count Rumford on the nature of heat
Brown, Sanborn C
1967-01-01
Men of Physics: Benjamin Thompson - Count Rumford: Count Rumford on the Nature of Heat covers the significant contributions of Count Rumford in the fields of physics. Count Rumford was born with the name Benjamin Thompson on March 23, 1753, in Woburn, Massachusetts. This book is composed of two parts encompassing 11 chapters, and begins with a presentation of Benjamin Thompson's biography and his interest in physics, particularly as an advocate of an "anti-caloric" theory of heat. The subsequent chapters are devoted to his many discoveries that profoundly affected the physical thought
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
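The bias that regression calibration corrects can be illustrated with the classical attenuation formula for a linear model with additive measurement error; this is a textbook sketch with invented numbers, not the study's Cox-model analysis.

```python
# Classical attenuation under additive measurement error W = X + U:
# the naive slope is the true slope times the reliability ratio
# lambda = var(X) / (var(X) + var(U)). Numbers are invented.

def attenuation(true_slope, var_x, var_u):
    """Return the naive (attenuated) slope and the reliability ratio."""
    lam = var_x / (var_x + var_u)
    return true_slope * lam, lam

naive, lam = attenuation(true_slope=1.0, var_x=4.0, var_u=1.0)
corrected = naive / lam          # regression-calibration style fix
print(naive, corrected)          # 0.8 1.0

# Averaging two replicate measurements halves var(U), shrinking the
# bias even before any correction, as the abstract's replicate data do:
naive2, _ = attenuation(1.0, 4.0, 0.5)
print(round(naive2, 3))          # 0.889
```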
Error field considerations for BPX
International Nuclear Information System (INIS)
LaHaye, R.J.
1992-01-01
Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors
The uncorrected refractive error challenge
Directory of Open Access Journals (Sweden)
Kovin Naidoo
2016-11-01
Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million, and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.
Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
Comprehensive Error Rate Testing (CERT)
U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...
Numerical optimization with computational errors
Zaslavski, Alexander J
2016-01-01
This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...
Dual processing and diagnostic errors.
Norman, Geoff
2009-09-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these examples remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reductions in error rates.
Blood Count Tests: MedlinePlus Health Topic
Error analysis of the phase-shifting technique when applied to shadow moire
International Nuclear Information System (INIS)
Han, Changwoon; Han, Bongtae
2006-01-01
An exact solution for the intensity distribution of shadow moire fringes produced by a broad spectrum light is presented. A mathematical study quantifies errors in fractional fringe orders determined by the phase-shifting technique, and its validity is corroborated experimentally. The errors vary cyclically as the distance between the reference grating and the specimen increases. The amplitude of the maximum error is approximately 0.017 fringe, which defines the theoretical limit of resolution enhancement offered by the phase-shifting technique
Schomaker, Michael; Hogger, Sara; Johnson, Leigh F; Hoffmann, Christopher J; Bärnighausen, Till; Heumann, Christian
2015-09-01
Both CD4 count and viral load in HIV-infected persons are measured with error. There is no clear guidance on how to deal with this measurement error in the presence of missing data. We used multiple overimputation, a method recently developed in the political sciences, to account for both measurement error and missing data in CD4 count and viral load measurements from four South African cohorts of a Southern African HIV cohort collaboration. Our knowledge about the measurement error of ln CD4 and log10 viral load is part of an imputation model that imputes both missing and mismeasured data. In an illustrative example, we estimate the association of CD4 count and viral load with the hazard of death among patients on highly active antiretroviral therapy by means of a Cox model. Simulation studies evaluate the extent to which multiple overimputation is able to reduce bias in survival analyses. Multiple overimputation emphasizes more strongly the influence of having high baseline CD4 counts compared to both a complete case analysis and multiple imputation (hazard ratio for >200 cells/mm³ vs. <25 cells/mm³: 0.21 [95% confidence interval: 0.18, 0.24] vs. 0.38 [0.29, 0.48], and 0.29 [0.25, 0.34], respectively). Similar results are obtained when varying assumptions about measurement error, when using p-splines, and when evaluating time-updated CD4 count in a longitudinal analysis. The estimates of the association with viral load are slightly more attenuated when using multiple imputation instead of multiple overimputation. Our simulation studies suggest that multiple overimputation is able to reduce bias and mean squared error in survival analyses. Multiple overimputation, which can be used with existing software, offers a convenient approach to account for both missing and mismeasured data in HIV research.
Inherent Error in Asynchronous Digital Flight Controls.
1980-02-01
operation will be eliminated. If T* is close to T, the inherent error (eA) is a small value. Then the deficiency of the basic model, which is de... ...indicate the channel failure. To reduce this deficiency, the new model computes a tolerance value equal to the maximum steady-state sample covariance of the
Error correcting coding for OTN
DEFF Research Database (Denmark)
Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.
2010-01-01
Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error correcting BCH is the best choice for the component code in such systems.
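A full three-error-correcting BCH decoder is too long to show here, so the sketch below illustrates the hard-decision component-code idea with the much simpler single-error-correcting Hamming(7,4) code instead (a deliberate substitution, not the code the paper recommends). In a product code, rows and columns would each be decoded this way.

```python
# Hard-decision syndrome decoding for Hamming(7,4). The parity-check
# matrix columns are the binary representations of 1..7, so a nonzero
# syndrome directly indexes the flipped bit position.

H = [((n >> 2) & 1, (n >> 1) & 1, n & 1) for n in range(1, 8)]

def syndrome(word):
    """XOR of the H columns selected by the set bits of the word."""
    s = [0, 0, 0]
    for bit, col in zip(word, H):
        if bit:
            s = [a ^ b for a, b in zip(s, col)]
    return s

def correct(word):
    """Flip back at most one errored bit indicated by the syndrome."""
    s = syndrome(word)
    pos = (s[0] << 2) | (s[1] << 1) | s[2]   # 0 means no error detected
    word = list(word)
    if pos:
        word[pos - 1] ^= 1
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]     # the all-zero codeword
received = [0, 0, 1, 0, 0, 0, 0]     # one bit flipped in transit
print(correct(received) == codeword)  # True
```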
Negligence, genuine error, and litigation
Sohn DH
2013-01-01
David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort syst...
Eliminating US hospital medical errors.
Kumar, Sameer; Steinebach, Marc
2008-01-01
Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to an increase in higher fixed costs, especially in the shorter time frame. This paper focuses on the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.
Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
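The nature of such approximation errors can be seen in the simplest case: first-order (Taylor) variance propagation through an AND gate with two independent inputs drops the product-of-variances term that the exact moment formula keeps. The numbers below are invented, not taken from the paper's sample trees.

```python
# Exact vs. first-order variance of the product X*Y of two independent
# random basic-event probabilities (an AND gate). Invented numbers.

def exact_var_product(mx, vx, my, vy):
    """Var(X*Y) for independent X, Y, from E[(XY)^2] - E[XY]^2."""
    return (vx + mx**2) * (vy + my**2) - mx**2 * my**2

def first_order_var_product(mx, vx, my, vy):
    """First-order Taylor propagation around the means."""
    return my**2 * vx + mx**2 * vy

mx, vx = 1e-3, 4e-7    # invented mean/variance of one failure probability
my, vy = 2e-3, 9e-7    # ... and of the other
exact = exact_var_product(mx, vx, my, vy)
approx = first_order_var_product(mx, vx, my, vy)
print(exact - approx)  # the neglected term, equal to vx * vy
```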
Identification of cotton properties to improve yarn count quality by using regression analysis
International Nuclear Information System (INIS)
Amin, M.; Ullah, M.; Akbar, A.
2014-01-01
Identification of raw material characteristics contributing to yarn count variation was studied by using statistical techniques. Regression analysis is used to meet the objective. Stepwise regression is used for model selection, and coefficient of determination and mean squared error (MSE) criteria are used to identify the contributing factors of cotton properties for yarn count. Statistical assumptions of normality, autocorrelation and multicollinearity are evaluated by using probability plots, the Durbin-Watson test and the variance inflation factor (VIF), and then model fitting is carried out. It is found that invisible (INV), nepness (Nep), grayness (RD), cotton trash (TR) and uniformity index (UI) are the main contributing cotton properties for yarn count variation. The results are also verified by Pareto chart. (author)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of cells can occur, leading to discordance in spot counts. As a solution to the counting error from overlapping spots, in this study a Gaussian Mixture Model (GMM) based classification method is proposed. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features of this classification method. Using a Random Forest classifier, the results show that the proposed method is able to detect closely overlapping spots which cannot be separated by existing image segmentation based spot detection methods. The experimental results show that by the proposed method we can obtain a significant improvement in spot counting accuracy.
A constant velocity Moessbauer spectrometer free of long-term instrumental drifts in the count rate
International Nuclear Information System (INIS)
Sarma, P.R.; Sharma, A.K.; Tripathi, K.C.
1979-01-01
Two new control circuits to be used with a constant velocity Moessbauer spectrometer with a loudspeaker drive are described. The waveforms generated in the circuits are of the staircase type instead of the usual square waveform, so that in each oscillation the source remains stationary for a fraction of the time period. The gamma rays counted during this period are monitored along with the positive- and negative-velocity counts and are used to correct any fluctuation in the count rate by feeding these pulses into the timer. The associated logic circuits are described and the statistical errors involved in the circuits have been computed. (auth.)
Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class
Novitasari, N.; Lukito, A.; Ekawati, R.
2018-01-01
A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems that often lead to errors. The errors can be analyzed for where they occur and their type. This research is qualitative descriptive and aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class in solving fraction problems. The subject of this research is one seventh-grade slow learner student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semi-structured interviews. The collected data were analyzed by Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors: concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by slow learners.
DEFF Research Database (Denmark)
Sunde, Peter; Jessen, Lonnie
2013-01-01
observers with respect to their ability to detect and estimate distance to realistic animal silhouettes at different distances. Detection probabilities were higher for observers experienced in spotlighting mammals than for inexperienced observers, higher for observers with a hunting background compared with non-hunters, and decreased as a function of age, but were independent of sex or educational background. If observer-specific detection probabilities were applied to real counting routes, point count estimates from inexperienced observers without a hunting background would only be 43 % (95 % CI, 39
[Medical errors: inevitable but preventable].
Giard, R W
2001-10-27
Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.
Quantum error correction for beginners
International Nuclear Information System (INIS)
Devitt, Simon J; Nemoto, Kae; Munro, William J
2013-01-01
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
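The standard entry point to this material is the three-qubit bit-flip repetition code. The classical simulation below (majority vote instead of syndrome measurement, so only an analogy to the quantum procedure, which never reads the data qubits directly) shows why any single flipped bit is recoverable.

```python
# Classical analogue of the three-qubit bit-flip code: triplicate the
# logical bit, apply at most one flip, recover by majority vote.

def encode(bit):
    """Triplicate the logical bit."""
    return [bit, bit, bit]

def flip(word, position):
    """Apply a bit-flip error at one position."""
    word = list(word)
    word[position] ^= 1
    return word

def decode(word):
    """Majority vote recovers the logical bit after at most one flip."""
    return 1 if sum(word) >= 2 else 0

# Every single-bit error on either logical value is corrected:
ok = all(decode(flip(encode(b), p)) == b for b in (0, 1) for p in range(3))
print(ok)   # True
```

Two simultaneous flips defeat this code; real QEC stacks such codes (and phase-flip protection) to push the failure probability down further.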
Hubbeling, Dieneke
2016-09-01
This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.
Energy Technology Data Exchange (ETDEWEB)
Rozhdestvenskyy, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-08-09
This work iterates on the first demonstration of a solid-state neutron multiplicity counting system developed at Lawrence Livermore National Laboratory by using commercial off-the-shelf detectors. The system was demonstrated to determine the mass of a californium-252 neutron source within 20% error, requiring only a one-hour measurement time with 20 cm² of active detector area.
Accommodating error analysis in comparison and clustering of molecular fingerprints.
Salamon, H; Segal, M R; Ponce de Leon, A; Small, P M
1998-01-01
Molecular epidemiologic studies of infectious diseases rely on pathogen genotype comparisons, which usually yield patterns comprising sets of DNA fragments (DNA fingerprints). We use a highly developed genotyping system, IS6110-based restriction fragment length polymorphism analysis of Mycobacterium tuberculosis, to develop a computational method that automates comparison of large numbers of fingerprints. Because error in fragment length measurements is proportional to fragment length and is positively correlated for fragments within a lane, an align-and-count method that compensates for relative scaling of lanes reliably counts matching fragments between lanes. Results of a two-step method we developed to cluster identical fingerprints agree closely with 5 years of computer-assisted visual matching among 1,335 M. tuberculosis fingerprints. Fully documented and validated methods of automated comparison and clustering will greatly expand the scope of molecular epidemiology.
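The align-and-count idea described above can be sketched in a few lines: because measurement error is proportional to fragment length and correlated within a lane, matching uses a length-proportional tolerance, and a small grid of candidate scale factors compensates for lane-to-lane scaling. This is an illustrative reconstruction, not the authors' implementation; the function names and the 2% tolerance are assumptions.

```python
def count_matches(lane_a, lane_b, scale=1.0, tol=0.02):
    """Count fragments in lane_b matching a fragment in lane_a after rescaling.
    The tolerance grows with fragment length, mirroring proportional error."""
    a = sorted(lane_a)
    used, matched = set(), 0
    for frag in lane_b:
        f = frag * scale
        # nearest not-yet-matched fragment in lane_a
        best = min((i for i in range(len(a)) if i not in used),
                   key=lambda i: abs(a[i] - f), default=None)
        if best is not None and abs(a[best] - f) <= tol * a[best]:
            used.add(best)
            matched += 1
    return matched

def align_and_count(lane_a, lane_b, tol=0.02):
    """Search a grid of relative scale factors (0.90..1.10) and keep the best
    match count, compensating for correlated within-lane scaling error."""
    scales = [0.90 + 0.005 * k for k in range(41)]
    return max((count_matches(lane_a, lane_b, s, tol), s) for s in scales)
```

With a systematic 3% scaling between lanes, naive matching at scale 1.0 finds nothing, while the aligned search recovers every fragment pair.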
Protecting count queries in study design.
Vinterbo, Staal A; Sarwate, Anand D; Boxwala, Aziz A
2012-01-01
Today's clinical research institutions provide tools for researchers to query their data warehouses for counts of patients. To protect patient privacy, counts are perturbed before reporting; this compromises their utility for increased privacy. The goal of this study is to extend current query answer systems to guarantee a quantifiable level of privacy and allow users to tailor perturbations to maximize the usefulness according to their needs. A perturbation mechanism was designed in which users are given options with respect to scale and direction of the perturbation. The mechanism translates the true count, user preferences, and a privacy level within administrator-specified bounds into a probability distribution from which the perturbed count is drawn. Users can significantly impact the scale and direction of the count perturbation and can receive more accurate final cohort estimates. Strong and semantically meaningful differential privacy is guaranteed, providing for a unified privacy accounting system that can support role-based trust levels. This study provides an open source web-enabled tool to investigate visually and numerically the interaction between system parameters, including required privacy level and user preference settings. Quantifying privacy allows system administrators to provide users with a privacy budget and to monitor its expenditure, enabling users to control the inevitable loss of utility. While current measures of privacy are conservative, this system can take advantage of future advances in privacy measurement. The system provides new ways of trading off privacy and utility that are not provided in current study design systems.
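A minimal sketch of the kind of mechanism described, assuming the standard Laplace mechanism for a count query (sensitivity 1): the user supplies a privacy level that is clamped to administrator-specified bounds, plus a data-independent shift expressing a direction preference (e.g. preferring overcounts), which is safe post-processing. This is not the paper's exact mechanism; names and the bounds are illustrative.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw Laplace(0, scale) noise via the inverse CDF."""
    u = rng.random() - 0.5
    while u == -0.5:  # avoid log(0) on the measure-zero boundary
        u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturbed_count(true_count, epsilon, shift=0.0, eps_bounds=(0.1, 1.0)):
    """epsilon-differentially-private count (sensitivity 1). The user-chosen,
    data-independent shift biases the reported direction; clamping epsilon to
    administrator bounds supports a privacy budget. Rounding and clipping to
    non-negative integers is post-processing and keeps the DP guarantee."""
    lo, hi = eps_bounds
    eps = min(max(epsilon, lo), hi)
    noisy = true_count + shift + laplace_noise(1.0 / eps)
    return max(0, round(noisy))
```

Charging `eps` against a per-user budget is what makes the cumulative privacy loss quantifiable and monitorable, as the abstract describes.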
Delta count-rate monitoring system
International Nuclear Information System (INIS)
Van Etten, D.; Olsen, W.A.
1985-01-01
A need for a more effective way to rapidly search for gamma-ray contamination over large areas led to the design and construction of a very sensitive gamma detection system. The delta count-rate monitoring system was installed in a four-wheel-drive van instrumented for environmental surveillance and accident response. The system consists of four main sections: (1) two scintillation detectors, (2) high-voltage power supply, amplifier, and single-channel analyzer, (3) delta count-rate monitor, and (4) count-rate meter and recorder. The van's 6.5-kW generator powers the standard nuclear instrument modular design system. The two detectors are mounted in the rear corners of the van and can be run singly or jointly. A solid-state bar-graph count-rate meter mounted on the dashboard can be read easily by both the driver and passenger. A solid-state strip chart recorder shows trends and provides a permanent record of the data. An audible alarm is sounded at the delta monitor and at the dashboard count-rate meter if a detected radiation level exceeds the set background level by a predetermined amount.
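The delta-alarm logic, counts exceeding a set background by a predetermined amount, can be sketched under a standard Poisson-statistics assumption (the actual instrument's threshold rule is not specified; the k-sigma criterion here is an assumption).

```python
import math

def delta_alarm(counts, background_mean, k=4.0):
    """Trip the alarm when the gross count in an interval exceeds the set
    background by k standard deviations (Poisson: sigma = sqrt(mean))."""
    threshold = background_mean + k * math.sqrt(background_mean)
    return counts > threshold
```

For a background of 400 counts per interval and k = 4, the threshold is 480 counts, so an interval with 500 counts sounds the alarm while 430 does not.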
Photon-Counting Arrays for Time-Resolved Imaging
Directory of Open Access Journals (Sweden)
I. Michel Antolovic
2016-06-01
The paper presents a camera comprising 512 × 128 pixels capable of single-photon detection and gating with a maximum frame rate of 156 kfps. The photon capture is performed through a gated single-photon avalanche diode that generates a digital pulse upon photon detection and through a digital one-bit counter. Gray levels are obtained through multiple counting and accumulation, while time-resolved imaging is achieved through a 4-ns gating window controlled with subnanosecond accuracy by a field-programmable gate array. The sensor, which is equipped with microlenses to enhance its effective fill factor, was electro-optically characterized in terms of sensitivity and uniformity. Several examples of capture of fast events are shown to demonstrate the suitability of the approach.
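The "gray levels through multiple counting and accumulation" step can be illustrated with a toy pixel model (my sketch, not the paper's processing): a one-bit counter records at most one detection per gated frame, so the hit probability saturates as 1 − exp(−λ), and the mean photon rate per gate is recovered by inverting that saturation after accumulating many binary frames.

```python
import math
import random

def accumulate_binary_frames(mean_photons_per_gate, frames, rng):
    """Simulate a one-bit SPAD pixel: each gated frame records at most one
    detection, so a frame reads '1' with probability 1 - exp(-lambda)."""
    p = 1.0 - math.exp(-mean_photons_per_gate)
    return sum(rng.random() < p for _ in range(frames))

def estimate_intensity(ones, frames):
    """Invert the single-hit saturation to recover the mean photon rate
    per gate from the accumulated one-bit counts."""
    if ones >= frames:
        raise ValueError("pixel saturated: every frame registered a hit")
    return -math.log(1.0 - ones / frames)
```

Accumulating more frames tightens the estimate, which is the trade-off between gray-level depth and effective frame rate in such sensors.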
Analysis of dental caries using generalized linear and count regression models
Directory of Open Access Journals (Sweden)
Javali M. Phil
2013-11-01
Generalized linear models (GLMs) are generalizations of linear regression models that allow regression models to be fitted to response data following a general exponential family, in all the sciences and especially the medical and dental sciences. They are a flexible and widely used class of models that can accommodate many types of response variable. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data-generating processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present a framework for evaluating the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set in which the count data may exhibit evidence of many zeros and overdispersion. Estimation of the model parameters by the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
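The ZIP mixture described above has a simple likelihood: a zero arises either from the inflation process (probability π) or from the Poisson component, while positive counts come from the Poisson component alone. A minimal stdlib-only sketch (a crude grid-search MLE, standing in for the proper Newton-type fitting a statistics package would use) makes the structure concrete:

```python
import math

def zip_loglik(counts, pi, lam):
    """Log-likelihood of the zero-inflated Poisson mixture:
    P(0) = pi + (1 - pi) e^{-lam};  P(y>0) = (1 - pi) Poisson(y; lam)."""
    ll = 0.0
    for y in counts:
        if y == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += (math.log(1 - pi) - lam
                   + y * math.log(lam) - math.lgamma(y + 1))
    return ll

def fit_zip(counts, grid=50):
    """Crude maximum-likelihood fit by grid search over (pi, lam).
    Illustrative only; real software maximizes the likelihood numerically."""
    best = None
    for i in range(1, grid):
        pi = i / grid * 0.99          # pi in (0, 0.99)
        for j in range(1, grid * 4):
            lam = j / grid            # lam in (0, 4)
            ll = zip_loglik(counts, pi, lam)
            if best is None or ll > best[0]:
                best = (ll, pi, lam)
    return best[1], best[2]
```

On a toy sample with 50 structural zeros out of 100 observations and positive counts averaging about 2.2, the fit recovers roughly π ≈ 0.4 and λ ≈ 1.9, since a Poisson with that mean still produces some "sampling" zeros of its own; that separation of zero sources is exactly what distinguishes ZIP from a plain Poisson GLM.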