WorldWideScience

Sample records for maximum counting error

  1. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
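
    A short, hedged illustration of the class of methods this record compares against: the standard maximum-likelihood expectation-maximisation (Richardson-Lucy) iteration for Poisson-distributed image data. This is not the author's iterative maximum-entropy algorithm or its Bayesian variant; the point-spread function, iteration count and synthetic data below are illustrative assumptions only.

```python
# Minimal sketch: MLEM (Richardson-Lucy) deconvolution for low-count Poisson images.
# Shown only to illustrate the comparison class named in the abstract; the PSF and
# the synthetic test image are assumptions, not the thesis's system model.
import numpy as np
from scipy.signal import fftconvolve

def mlem_deconvolve(observed, psf, n_iter=50):
    """Standard MLEM iteration with total-count preservation applied at the end."""
    psf = psf / psf.sum()                        # normalise the system PSF
    psf_mirror = psf[::-1, ::-1]                 # adjoint of the blurring operator
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        predicted = fftconvolve(estimate, psf, mode="same") + 1e-12
        ratio = observed / predicted             # Poisson data-fidelity ratio
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    estimate *= observed.sum() / estimate.sum()  # preserve total counts
    return estimate

# usage on synthetic low-count data
rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 5.0
x, y = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
blurred = np.clip(fftconvolve(truth, psf / psf.sum(), mode="same"), 0, None)
observed = rng.poisson(blurred)
restored = mlem_deconvolve(observed, psf)
```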

  2. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and that they are not strictly comparable, due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance, but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
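
    As a concrete reference point for the accounting methods discussed above, the sketch below counts OCR errors with a plain Levenshtein (edit) distance under unit weights; as the record notes, different weights or treatment of suspect markers would yield different counts. The example strings are hypothetical.

```python
# Minimal sketch of edit-distance (Levenshtein) OCR error accounting, assuming unit
# weights for substitutions, insertions and deletions; alternative weightings or
# handling of suspect markers would change the resulting error count.
def ocr_error_count(reference: str, ocr_output: str) -> int:
    m, n = len(reference), len(ocr_output)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                              # deletions
    for j in range(n + 1):
        dist[0][j] = j                              # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == ocr_output[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[m][n]

# OCR accuracy as a ratio of error count to text size (hypothetical strings)
ref, out = "maximum counting error", "rnaximum counting error"
errors = ocr_error_count(ref, out)
accuracy = 1 - errors / len(ref)
```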

  3. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
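
    The sketch below illustrates the general form of maximum-likelihood time-of-arrival estimation from photon-counting data modelled as an inhomogeneous Poisson process. The Gaussian pulse shape, rates and grid search are illustrative assumptions, not the authors' signal model or estimator implementation.

```python
# Minimal sketch of ML time-of-arrival estimation from photon arrival times.
# Intensity model (assumed): lambda(t) = background + signal * Gaussian(t - tau).
import numpy as np

def log_likelihood(tau, arrivals, signal_rate, bg_rate, pulse_width):
    # The integral of lambda over the observation window is nearly independent of
    # tau when the pulse lies well inside the window, so it is dropped here.
    lam = bg_rate + signal_rate * np.exp(-0.5 * ((arrivals - tau) / pulse_width) ** 2)
    return np.sum(np.log(lam))

def ml_toa(arrivals, window, signal_rate=50.0, bg_rate=1.0, pulse_width=1.0):
    taus = np.linspace(window[0], window[1], 2000)      # coarse grid search
    scores = [log_likelihood(t, arrivals, signal_rate, bg_rate, pulse_width)
              for t in taus]
    return taus[int(np.argmax(scores))]

# usage: photon timestamps (ns) observed over a 0-100 ns window
rng = np.random.default_rng(0)
arrivals = np.concatenate([rng.normal(42.0, 1.0, size=30),     # signal photons
                           rng.uniform(0.0, 100.0, size=10)])  # background photons
tau_hat = ml_toa(arrivals, window=(0.0, 100.0))
```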

  4. Peak-counts blood flow model-errors and limitations

    International Nuclear Information System (INIS)

    Mullani, N.A.; Marani, S.K.; Ekas, R.D.; Gould, K.L.

    1984-01-01

    The peak-counts model has several advantages, but its use may be limited because the venous egress may not be negligible at the time of peak counts. Consequently, blood flow measurements by the peak-counts model will depend on the bolus size, bolus duration, and the minimum transit time of the bolus through the region of interest. The effect of bolus size on the measurement of extraction fraction and blood flow was evaluated by injecting 1 to 30 ml of rubidium chloride into the femoral vein of a dog and measuring the myocardial activity with a beta probe over the heart. Regional blood flow measurements were not found to vary with bolus sizes up to 30 ml. The effect of bolus duration was studied by injecting a 10 cc bolus of tracer at different speeds into the femoral vein of a dog. All intravenous injections undergo a broadening of the bolus duration due to the transit time of the tracer through the lungs and the heart. This transit time was found to range from 4 to 6 seconds FWHM and dominates the duration of the bolus to the myocardium for injections of up to 3 seconds. A computer simulation has been carried out in which the different parameters of delay time, extraction fraction, and bolus duration can be changed to assess the errors in the peak-counts model. The results of the simulations show that the error will be greatest for short transit time delays and for low extraction fractions.

  5. Errors associated with moose-hunter counts of occupied beaver Castor fiber lodges in Norway

    OpenAIRE

    Parker, Howard; Rosell, Frank; Gustavsen, Per Øyvind

    2002-01-01

    In Norway, Sweden and Finland moose Alces alces hunting teams are often employed to survey occupied beaver (Castor fiber and C. canadensis) lodges while hunting. Results may be used to estimate population density or trend, or for issuing harvest permits. Despite the method's increasing popularity, the errors involved have never been identified. In this study we 1) compare hunting-team counts of occupied lodges with total counts, 2) identify the sources of error between counts and 3) evaluate ...

  6. Correcting for particle counting bias error in turbulent flow

    Science.gov (United States)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even with an ideal seeding device generating particles that exactly follow the flow, a major source of error remains: particle counting bias, wherein the probability of measuring a velocity is a function of that velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation was constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
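
    For illustration, one commonly proposed correction for this counting bias weights each velocity realisation by the inverse of its magnitude; it is only one of the contested schemes the record refers to, not the method validated by the simulation.

```python
# Minimal sketch of inverse-velocity (residence-time) weighting, one of several
# proposed corrections for particle-counting bias in laser-anemometer data.
# The sample values are illustrative assumptions.
import numpy as np

def bias_corrected_mean(velocities):
    v = np.asarray(velocities, dtype=float)
    w = 1.0 / np.abs(v)               # inverse-velocity weights
    return np.sum(w * v) / np.sum(w)  # weighted mean velocity

# a naive arithmetic mean over-weights fast particles, which arrive more often
samples = np.array([1.0, 1.2, 3.5, 4.0, 0.8])
naive_mean = samples.mean()
corrected_mean = bias_corrected_mean(samples)
```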

  7. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio... ...is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID tag cardinality estimation, maximum likelihood, detection error...

  8. Error in laboratory report data for platelet count assessment in patients suspicious for dengue: a note from observation

    Directory of Open Access Journals (Sweden)

    Somsri Wiwanitkit

    2016-08-01

    Full Text Available Dengue is a common tropical infection that is still a global health threat. An important laboratory parameter for the management of dengue is the platelet count, which is a useful test for diagnosing and following up dengue. However, errors in laboratory reports can occur. This study is a retrospective analysis of laboratory report data for complete blood counts in cases of suspected dengue in a medical center within a 1-month period during the outbreak season, in October 2015. In the studied period, there were 184 requests for complete blood counts for cases of suspected dengue. In those 184 laboratory report records, errors were seen in 12 reports (6.5%). This study demonstrates that there is a considerably high rate of post-analytical errors in laboratory reports. Interestingly, the platelet count in those erroneous reports can be unreliable, ineffective or problematic when used for the management of patients with suspected dengue.

  9. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    Science.gov (United States)

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
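
    A hedged sketch of the kind of Poisson correction referred to above: if responding follows a Poisson process, an interval of duration d is scored with probability 1 − exp(−rate × d), so a corrected rate can be back-calculated from the scored proportion. The exact transformation used in the study is assumed to be of this form, not quoted from it.

```python
# Minimal sketch of a Poisson-corrected partial-interval estimate of count rate.
# Interval duration and counts below are illustrative assumptions.
import math

def uncorrected_rate(scored_intervals, total_intervals, interval_duration):
    # treats "at least one event" as exactly one event -> biased low
    return scored_intervals / (total_intervals * interval_duration)

def poisson_corrected_rate(scored_intervals, total_intervals, interval_duration):
    p = scored_intervals / total_intervals
    if p >= 1.0:
        raise ValueError("all intervals scored: corrected rate is unbounded")
    return -math.log(1.0 - p) / interval_duration

# usage: 40 of 60 ten-second intervals contained the behaviour
print(uncorrected_rate(40, 60, 10.0))        # events per second, biased low
print(poisson_corrected_rate(40, 60, 10.0))  # Poisson-corrected estimate
```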

  10. TU-FG-209-03: Exploring the Maximum Count Rate Capabilities of Photon Counting Arrays Based On Polycrystalline Silicon

    Energy Technology Data Exchange (ETDEWEB)

    Liang, A K; Koniczek, M; Antonuk, L E; El-Mohri, Y; Zhao, Q [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation tolerant, monolithic large area (e.g., ∼43×43 cm²) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to that of crystalline silicon detectors. Further improvement through detailed circuit
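
    The "percentage of counts lost" figure of merit can be illustrated with the textbook dead-time models below, which relate input to output count rates for a per-event dead time; the dead-time value is an assumed placeholder, not a simulated pixel circuit.

```python
# Minimal sketch of count losses from dead time / pile-up: classic paralyzable and
# non-paralyzable models relating the true (input) rate n to the observed rate m.
# The dead-time value is an illustrative assumption.
import math

def observed_rate_paralyzable(n, tau):
    return n * math.exp(-n * tau)

def observed_rate_nonparalyzable(n, tau):
    return n / (1.0 + n * tau)

tau = 500e-9                      # assumed 500 ns effective dead time per event
for n in (1e5, 1e6, 5e6):         # input counts per second per pixel
    m = observed_rate_paralyzable(n, tau)
    lost = 100.0 * (1.0 - m / n)  # percentage of counts lost
    print(f"input {n:.0e} cps -> output {m:.3e} cps, {lost:.1f}% lost")
```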

  11. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI (Tl) scintillation camera, and to suggest a new quality control method based on these effects. We produced point sources of 99mTcO4- with activities of 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images at 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source on 5 gamma cameras (Infinia 2, Forte 2, Argus 1). In the first experiment, there were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq when analysed at 10 to 60 seconds/frame in 10-second steps (p>0.05), but the average count rates were significantly lower for sources above 111 MBq at 60 seconds/frame (p<0.01). In the second analysis, linear regression of the count rates of the 5 gamma cameras acquired over 90 minutes showed that the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, while its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. We found no abnormal fluctuation in the χ² test of the count rates (p>0.02), and Levene's F-test showed homogeneity of variance among the gamma cameras (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, calculation of the T1/2 error for gradient changes from -0.25% to +0.25% showed that the error increases as T1/2 becomes longer or the gradient becomes higher. When the fourth camera, which has the highest gradient, was evaluated from the above result, no T1/2 error was seen within 60 minutes at that value. In conclusion, it is necessary for scintillation gamma cameras in the medical field to be managed rigorously for the quality of radiation measurement. Especially, we found a

  12. Range walk error correction and modeling on Pseudo-random photon counting system

    Science.gov (United States)

    Shen, Shanshan; Chen, Qian; He, Weiji

    2017-08-01

    Signal-to-noise ratio and depth accuracy are modeled for a pseudo-random ranging system with two random processes. The theoretical results developed herein, which capture the effects of code length and signal energy fluctuation, are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), a longer code length is proven to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to show that a longer code length can bring better range accuracy. Combining the SNR model and the CRLB model, it is shown that the range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramer-Rao lower bound on range accuracy is shown to converge to previously published theories, and the Gauss range walk model is introduced for range accuracy. Experimental tests also converge to the boundary model presented in this paper. It has been proven that the depth error caused by fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. The depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.

  13. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in

  14. Determining random counts in liquid scintillation counting

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1979-01-01

    During measurements involving coincidence counting techniques, errors can arise due to the detection of chance or random coincidences in the multiple detectors used. A method and the electronic circuits necessary are here described for eliminating this source of error in liquid scintillation detectors used in coincidence counting. (UK)
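
    For context, the sketch below applies the textbook chance-coincidence estimate for a two-detector coincidence counter with resolving time τ; the rates used are illustrative assumptions and the circuit described in the record is not reproduced here.

```python
# Minimal sketch of the standard chance-coincidence correction: with singles rates
# N1 and N2 and resolving time tau, random coincidences occur at roughly
# 2*tau*N1*N2 and are subtracted from the gross coincidence rate.
def random_coincidence_rate(n1, n2, resolving_time):
    return 2.0 * resolving_time * n1 * n2

gross_coincidence_rate = 120.0   # counts per second (assumed)
n1, n2 = 5.0e3, 4.0e3            # singles rates in the two detectors (cps, assumed)
tau = 20e-9                      # coincidence resolving time (s, assumed)
true_rate = gross_coincidence_rate - random_coincidence_rate(n1, n2, tau)
```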

  15. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

    Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low-radioactivity samples for background. (orig.)
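
    A minimal numerical sketch in the spirit of this record: a posterior over a non-negative signal given Poisson gross counts and a known background mean, which avoids negative net estimates. The flat prior and the known background are simplifying assumptions, not the paper's exact formulation.

```python
# Minimal sketch: Bayesian estimate of a non-negative net signal from a counting
# measurement. Posterior over s >= 0 is proportional to a flat prior times the
# Poisson likelihood of the gross counts given s plus a known background mean.
import numpy as np
from scipy.stats import poisson

def positive_signal_posterior(gross_counts, background_mean, s_grid):
    like = poisson.pmf(gross_counts, s_grid + background_mean)
    return like / like.sum()                      # normalised over the s >= 0 grid

s_grid = np.linspace(0.0, 50.0, 2001)
post = positive_signal_posterior(gross_counts=8, background_mean=10.0, s_grid=s_grid)
posterior_mean = float(np.sum(s_grid * post))     # stays >= 0 even when gross < background
```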

  16. Accurate measurement of peripheral blood mononuclear cell concentration using image cytometry to eliminate RBC-induced counting error.

    Science.gov (United States)

    Chan, Leo Li-Ying; Laverty, Daniel J; Smith, Tim; Nejad, Parham; Hei, Hillary; Gandhi, Roopali; Kuksin, Dmitry; Qiu, Jean

    2013-02-28

    Peripheral blood mononuclear cells (PBMCs) have been widely researched in the fields of immunology, infectious disease, oncology, transplantation, hematological malignancy, and vaccine development. Specifically, in immunology research, PBMCs have been utilized to monitor concentration, viability, proliferation, and cytokine production from immune cells, which are critical for both clinical trials and biomedical research. The viability and concentration of isolated PBMCs are traditionally measured by manual counting with trypan blue (TB) using a hemacytometer. One of the common issues of PBMC isolation is red blood cell (RBC) contamination. The RBC contamination can be dependent on the donor sample and/or technical skill level of the operator. RBC contamination in a PBMC sample can introduce error to the measured concentration, which can pass down to future experimental assays performed on these cells. To resolve this issue, RBC lysing protocol can be used to eliminate potential error caused by RBC contamination. In the recent years, a rapid fluorescence-based image cytometry system has been utilized for bright-field and fluorescence imaging analysis of cellular characteristics (Nexcelom Bioscience LLC, Lawrence, MA). The Cellometer image cytometry system has demonstrated the capability of automated concentration and viability detection in disposable counting chambers of unpurified mouse splenocytes and PBMCs stained with acridine orange (AO) and propidium iodide (PI) under fluorescence detection. In this work, we demonstrate the ability of Cellometer image cytometry system to accurately measure PBMC concentration, despite RBC contamination, by comparison of five different total PBMC counting methods: (1) manual counting of trypan blue-stained PBMCs in hemacytometer, (2) manual counting of PBMCs in bright-field images, (3) manual counting of acetic acid lysing of RBCs with TB-stained PBMCs, (4) automated counting of acetic acid lysing of RBCs with PI-stained PBMCs

  17. RCT: Module 2.03, Counting Errors and Statistics, Course 8768

    Energy Technology Data Exchange (ETDEWEB)

    Hillmer, Kurt T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-01

    Radiological sample analysis involves the observation of a random process that may or may not occur and an estimation of the amount of radioactive material present based on that observation. Across the country, radiological control personnel are using the activity measurements to make decisions that may affect the health and safety of workers at those facilities and their surrounding environments. This course will present an overview of measurement processes, a statistical evaluation of both measurements and equipment performance, and some actions to take to minimize the sources of error in count room operations. This course will prepare the student with the skills necessary for radiological control technician (RCT) qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and by providing in-the-field skills.

  18. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  19. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  20. A Trial-and-Error Method with Autonomous Vehicle-to-Infrastructure Traffic Counts for Cordon-Based Congestion Pricing

    Directory of Open Access Journals (Sweden)

    Zhiyuan Liu

    2017-01-01

    Full Text Available This study proposes a practical trial-and-error method to solve the optimal toll design problem of cordon-based pricing, where only the traffic counts autonomously collected on the entry links of the pricing cordon are needed. With the fast development and adoption of vehicle-to-infrastructure (V2I facilities, it is very convenient to autonomously collect these data. Two practical properties of the cordon-based pricing are further considered in this article: the toll charge on each entry of one pricing cordon is identical; the total inbound flow to one cordon should be restricted in order to maintain the traffic conditions within the cordon area. Then, the stochastic user equilibrium (SUE with asymmetric link travel time functions is used to assess each feasible toll pattern. Based on a variational inequality (VI model for the optimal toll pattern, this study proposes a theoretically convergent trial-and-error method for the addressed problem, where only traffic counts data are needed. Finally, the proposed method is verified based on a numerical network example.

  1. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.
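
    The elementary Poisson relations such a report covers can be summarised in a few lines: the standard deviation of N counts is √N, so the relative counting error is 1/√N and the counts required for a target precision follow directly. The numbers below are illustrative.

```python
# Minimal sketch of Poisson counting precision.
import math

def relative_counting_error(counts):
    return 1.0 / math.sqrt(counts)          # 1-sigma relative error

def counts_needed(target_relative_error):
    return math.ceil(1.0 / target_relative_error ** 2)

print(relative_counting_error(10_000))      # 0.01 -> 1% for 10,000 counts
print(counts_needed(0.005))                 # 40,000 counts for 0.5% precision
```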

  2. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiments. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. 11 refs., 6 figs., 8 tabs. (Author)

  3. Radiation counting statistics

    International Nuclear Information System (INIS)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H.

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs

  4. A counting-card circuit based on PCI bus

    International Nuclear Information System (INIS)

    Shi Jing; Li Yong; Chinese Academy of Sciences, Lanzhou; Su Hong; Dong Chengfu; Li Xiaogang; Ma Xiaoli

    2004-01-01

    A counting-card circuit based on the PCI bus, recently developed by us for use in advanced personal computers, is briefly introduced in this paper. The maximum count capacity of the counting card is 10⁹ − 1, ranging from 0 to 999 999 999; the maximum counting time that can be set in one cycle is 1 × 10⁶ s; and the maximum counting rate is 20 MHz for positive input. (authors)

  5. The effect of volume and quenching on estimation of counting efficiencies in liquid scintillation counting

    International Nuclear Information System (INIS)

    Knoche, H.W.; Parkhurst, A.M.; Tam, S.W.

    1979-01-01

    The effect of volume on the liquid scintillation counting performance of ¹⁴C samples has been investigated. A decrease in counting efficiency was observed for samples with volumes below about 6 ml and those above about 18 ml when unquenched samples were assayed. Two quench-correction methods, sample channels ratio and external standard channels ratio, and three different liquid scintillation counters were used in an investigation to determine the magnitude of the error in predicting counting efficiencies when small-volume samples (2 ml) with different levels of quenching were assayed. The 2 ml samples exhibited slightly greater standard deviations of the difference between predicted and determined counting efficiencies than did 15 ml samples. Nevertheless, the magnitude of the errors indicates that if the sample channels ratio method of quench correction is employed, 2 ml samples may be counted in conventional counting vials with little loss in counting precision. (author)

  6. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)

  7. Deep 3 GHz number counts from a P(D) fluctuation analysis

    Science.gov (United States)

    Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.

    2014-05-01

    Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ∼1.2 μJy beam⁻¹, and the radio background temperature is ∼14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.

  8. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
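
    For reference, the sketch below shows the standard propagation of uncertainty for a net count rate formed from independent gross and blank counts (var(N) = N for Poisson counts); it reproduces the textbook result such corrections rest on rather than the paper's detection-limit derivation.

```python
# Minimal sketch of uncertainty propagation for a net count rate from independent
# gross and blank measurements with Poisson counting statistics.
import math

def net_rate_and_sigma(gross_counts, gross_time, blank_counts, blank_time):
    net_rate = gross_counts / gross_time - blank_counts / blank_time
    sigma = math.sqrt(gross_counts / gross_time**2 + blank_counts / blank_time**2)
    return net_rate, sigma

# usage (assumed values): 480 counts in 600 s on the sample, 300 counts in 600 s blank
rate, sigma = net_rate_and_sigma(480, 600.0, 300, 600.0)
```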

  10. BMRC: A Bitmap-Based Maximum Range Counting Approach for Temporal Data in Sensor Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Bin Cao

    2017-09-01

    Full Text Available Due to the rapid development of the Internet of Things (IoT, many feasible deployments of sensor monitoring networks have been made to capture the events in physical world, such as human diseases, weather disasters and traffic accidents, which generate large-scale temporal data. Generally, the certain time interval that results in the highest incidence of a severe event has significance for society. For example, there exists an interval that covers the maximum number of people who have the same unusual symptoms, and knowing this interval can help doctors to locate the reason behind this phenomenon. As far as we know, there is no approach available for solving this problem efficiently. In this paper, we propose the Bitmap-based Maximum Range Counting (BMRC approach for temporal data generated in sensor monitoring networks. Since sensor nodes can update their temporal data at high frequency, we present a scalable strategy to support the real-time insert and delete operations. The experimental results show that the BMRC outperforms the baseline algorithm in terms of efficiency.

  11. Correction for decay during counting in gamma spectrometry

    International Nuclear Information System (INIS)

    Nir-El, Y.

    2013-01-01

    A basic result in gamma spectrometry is the count rate of a relevant peak. Correction for decay during counting, expressing the count rate at the beginning of the measurement, can be done by a multiplicative factor that is derived from integrating the count rate over time. The counting time substituted into this factor must be the live time; using the real time instead is an error that underestimates the count rate by approximately the dead time (DT), in percentage terms. This underestimation of the count rate is corroborated by the measurement of a nuclide with a high DT. The present methodology is not applicable in systems that include a zero-DT correction function. (authors)
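
    The multiplicative factor described above follows from integrating R(t) = R0·exp(−λt) over the counting time: C = R0(1 − e^(−λT))/λ, so the start-of-measurement rate is (C/T)·λT/(1 − e^(−λT)) with T taken as the live time. The sketch below applies this relation with assumed illustrative values.

```python
# Minimal sketch of the decay-during-counting correction, using the LIVE time;
# using real time instead biases the result by roughly the dead-time fraction.
import math

def decay_correction_factor(half_life_s, live_time_s):
    lam = math.log(2.0) / half_life_s
    return lam * live_time_s / (1.0 - math.exp(-lam * live_time_s))

def initial_count_rate(total_counts, live_time_s, half_life_s):
    return (total_counts / live_time_s) * decay_correction_factor(half_life_s, live_time_s)

# usage (assumed): a 6-hour half-life nuclide counted for a 30-minute live time
rate0 = initial_count_rate(total_counts=1.2e5, live_time_s=1800.0, half_life_s=6 * 3600.0)
```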

  12. Effect of error in crack length measurement on maximum load fracture toughness of Zr-2.5Nb pressure tube material

    International Nuclear Information System (INIS)

    Bind, A.K.; Sunil, Saurav; Singh, R.N.; Chakravartty, J.K.

    2016-03-01

    Recently it was found that maximum load toughness (J max ) for Zr-2.5Nb pressure tube material was practically unaffected by error in Δ a . To check the sensitivity of the J max to error in Δ a measurement, the J max was calculated assuming no crack growth up to the maximum load (P max ) for as received and hydrogen charged Zr-2.5Nb pressure tube material. For load up to the P max , the J values calculated assuming no crack growth (J NC ) were slightly higher than that calculated based on Δ a measured using DCPD technique (JDCPD). In general, error in the J calculation found to be increased exponentially with Δ a . The error in J max calculation was increased with an increase in Δ a and a decrease in J max . Based on deformation theory of J, an analytic criterion was developed to check the insensitivity of the J max to error in Δ a . There was very good linear relation was found between the J max calculated based on Δ a measured using DCPD technique and the J max calculated assuming no crack growth. This relation will be very useful to calculate J max without measuring the crack growth during fracture test especially for irradiated material. (author)

  13. Road safety performance measures and AADT uncertainty from short-term counts.

    Science.gov (United States)

    Milligan, Craig; Montufar, Jeannette; Regehr, Jonathan; Ghanney, Bartholomew

    2016-12-01

    The objective of this paper is to enable better risk analysis of road safety performance measures by creating the first knowledge base on the uncertainty surrounding annual average daily traffic (AADT) estimates when the estimates are derived by expanding short-term counts with the individual permanent counter method. Many road safety performance measures and performance models use AADT as an input. While there is an awareness that this input suffers from uncertainty, the uncertainty is not well known or accounted for. The paper samples data from a set of 69 permanent automatic traffic recorders in Manitoba, Canada, to simulate almost 2 million short-term counts over a five-year period. These short-term counts are expanded to AADT estimates by transferring temporal information from a directly linked nearby permanent count control station, and the resulting AADT values are compared to a known reference AADT to compute errors. The impacts of five factors on AADT error are considered: length of short-term count, number of short-term counts, use of weekday versus weekend counts, distance from a count to its expansion control station, and the AADT at the count site. The mean absolute transfer error for expanded AADT estimates is 6.7%, and this value varied by traffic pattern group from 5% to 10.5%. Reference percentiles of the error distribution show that almost all errors are between -20% and +30%. Error decreases substantially by using a 48-h count instead of a 24-h count, and only slightly by using two counts instead of one. Weekday counts are superior to weekend counts, especially if the count is only 24 h. Mean absolute transfer error increases with distance to the control station (elasticity 0.121, p=0.001), and increases with AADT (elasticity 0.857). These findings support risk analysis of road safety performance measures that use AADT as inputs. Analytical frameworks for such analysis exist but are infrequently used in road safety because the evidence base on AADT uncertainty is not well developed. Copyright
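
    A hedged sketch of the individual permanent-counter expansion evaluated in this paper: a short-term count is scaled by the ratio of the control station's AADT to its traffic over the matching period, and the transfer error is the relative deviation from the reference AADT. The factor form and the numbers below are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of expanding a short-term traffic count to an AADT estimate via a
# linked permanent control station, plus the absolute transfer error metric.
def expand_short_count(short_count, control_same_period_count, control_aadt, count_days):
    daily_short = short_count / count_days
    daily_control = control_same_period_count / count_days
    return daily_short * (control_aadt / daily_control)   # seasonal/temporal expansion

def absolute_transfer_error(aadt_estimate, aadt_reference):
    return abs(aadt_estimate - aadt_reference) / aadt_reference

# usage (assumed values): a 48-h count of 9,400 vehicles; the control station
# recorded 21,000 vehicles over the same 48 h and has a known AADT of 11,500
est = expand_short_count(9_400, 21_000, 11_500, count_days=2.0)
err = absolute_transfer_error(est, aadt_reference=5_300)
```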

  14. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Energy Technology Data Exchange (ETDEWEB)

    Kaganovich, Igor D.; Massidda, Scottt; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex

    2012-06-21

    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the

  15. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  16. TasselNet: counting maize tassels in the wild via local counts regression network.

    Science.gov (United States)

    Lu, Hao; Cao, Zhiguo; Xiao, Yang; Zhuang, Bohan; Shen, Chunhua

    2017-01-01

    performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.

  17. TasselNet: counting maize tassels in the wild via local counts regression network

    Directory of Open Access Journals (Sweden)

    Hao Lu

    2017-11-01

    margins and achieves the overall best counting performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. Conclusions TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.

  18. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently, many promising approaches for determining an upper bound on the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, the uniform prior density distribution employed provides no information at all, reflecting a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to study a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
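
    The holdout Bayesian credibility interval discussed above has a closed form: with k errors in n test examples and a Beta(a, b) prior, the posterior is Beta(a + k, b + n − k). The sketch contrasts the uniform Beta(1, 1) prior with an informative prior whose parameters merely stand in for a maximum-entropy-derived prior and are purely illustrative.

```python
# Minimal sketch of a Bayesian credibility interval for a classifier error rate
# from a holdout test, under a Beta prior (uniform vs. an assumed informative one).
from scipy.stats import beta

def credibility_interval(k_errors, n_test, a=1.0, b=1.0, level=0.95):
    post = beta(a + k_errors, b + n_test - k_errors)   # Beta posterior
    lo = (1.0 - level) / 2.0
    return post.ppf(lo), post.ppf(1.0 - lo)

print(credibility_interval(5, 40))                  # uniform prior: wide interval
print(credibility_interval(5, 40, a=2.0, b=14.0))   # illustrative informative prior: tighter
```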

  19. Compton suppression gamma-counting: The effect of count rate

    Science.gov (United States)

    Millard, H.T.

    1984-01-01

    Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhance the signal-to-background ratios for gamma photopeaks that are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so that quantitative data can be obtained at higher count rates. © 1984.

  20. Low-priced, time-saving, reliable and stable LR-115 counting system

    International Nuclear Information System (INIS)

    Tchorz-Trzeciakiewicz, D.E.

    2015-01-01

    Alpha particles leave etch tracks when they hit the surface of an LR-115 detector. The density of these tracks is used to measure radon concentration. Counting these tracks by eye is a tedious and time-consuming procedure and may introduce counting errors, whereas most available automatic and semiautomatic counting systems are expensive or complex. An uncomplicated, robust, reliable and stable counting system using software freely available on the Internet, such as Digimizer™ and PhotoScape, was developed and is proposed here. The effectiveness of the proposed procedure was evaluated by comparing the number of tracks counted by the software with the number of tracks counted manually for 223 detectors. The percentage error for each analysed detector was obtained as the difference between the automatic and manual counts divided by the manual count. For more than 97% of the detectors, the percentage errors fell between −3% and 3%. - Highlights: • A semiautomatic, uncomplicated procedure is proposed to count the number of alpha tracks. • Software freely available on the Internet is used as an alpha track counting system for LR-115. • LR-115 detectors are used to measure radon concentration and radon exhalation rate

  1. Accuracy and precision in activation analysis: counting

    International Nuclear Information System (INIS)

    Becker, D.A.

    1974-01-01

    Accuracy and precision in activation analysis were investigated with regard to the counting of induced radioactivity. The various parameters discussed include configuration, positioning, density, homogeneity, intensity, radioisotopic purity, peak integration, and nuclear constants. Experimental results are presented for many of these parameters. The results obtained indicate that counting errors often contribute significantly to the inaccuracy and imprecision of analyses. The magnitudes of these errors range from less than 1 percent to 10 percent or more in many cases.

  2. Sources and magnitude of sampling error in redd counts for bull trout

    Science.gov (United States)

    Jason B. Dunham; Bruce Rieman

    2001-01-01

    Monitoring of salmonid populations often involves annual redd counts, but the validity of this method has seldom been evaluated. We conducted redd counts of bull trout Salvelinus confluentus in two streams in northern Idaho to address four issues: (1) relationships between adult escapements and redd counts; (2) interobserver variability in redd...

  3. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for the state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.

  4. Tests for detecting overdispersion in models with measurement error in covariates.

    Science.gov (United States)

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke

    2014-01-01

    Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is the construction of a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each data point, and aim at designing algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been designed effectively in a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for high-efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analysed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and the transformed space, and found that OptimalPLR is superior in processing efficiency in practice. Extensive empirical results support and demonstrate the effectiveness and efficiency of our proposed algorithms.
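
    The flavour of error-bounded PLR under the L∞ criterion can be conveyed with a simple greedy segmentation: keep extending the current segment while some fitted line stays within the bound at every point, then close the segment. The sketch below is only an illustrative heuristic and is not the OptimalPLR or GreedyPLR algorithm described in the article:

```python
import numpy as np

def greedy_plr(t, y, eps):
    """Greedy L-infinity-bounded piecewise linear segmentation (illustrative only)."""
    segments, start = [], 0
    n = len(t)
    while start < n - 1:
        end = start + 1
        best = np.polyfit(t[start:end + 1], y[start:end + 1], 1)
        while end + 1 < n:
            cand = np.polyfit(t[start:end + 2], y[start:end + 2], 1)
            resid = y[start:end + 2] - np.polyval(cand, t[start:end + 2])
            if np.max(np.abs(resid)) > eps:   # bound violated: close the segment
                break
            best, end = cand, end + 1
        segments.append((t[start], t[end], best))
        start = end
    return segments

t = np.linspace(0, 10, 200)
y = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(len(greedy_plr(t, y, eps=0.1)), "segments")
```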

  6. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing

    2014-04-04

    Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is the construction of a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each data point, and aim at designing algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been designed effectively in a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for high-efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analysed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and the transformed space, and found that OptimalPLR is superior in processing efficiency in practice. Extensive empirical results support and demonstrate the effectiveness and efficiency of our proposed algorithms.

  7. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α } with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
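
    A common way to realise the white plus power-law noise model in the time domain is to filter white noise with the standard fractional-integration impulse response (h_0 = 1, h_k = h_{k-1}(k - 1 + α/2)/k) and then add an independent white component, so that the noise processes are added rather than combined in quadrature. The sketch below is a generic illustration of that construction with assumed amplitudes, not the author's code:

```python
import numpy as np

def power_law_noise(n, alpha, rng):
    """Filter white noise to obtain 1/f^alpha noise (Kasdin-style recursion)."""
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    w = rng.standard_normal(n)
    return np.convolve(h, w)[:n]

rng = np.random.default_rng(1)
n, alpha = 2048, 1.5            # assumed length and spectral index
sigma_pl, sigma_wh = 1.0, 0.5   # assumed amplitudes of the two noise processes
series = sigma_pl * power_law_noise(n, alpha, rng) + sigma_wh * rng.standard_normal(n)
print(series[:5])
```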

  8. The effect of event shape on centroiding in photon counting detectors

    International Nuclear Information System (INIS)

    Kawakami, Hajime; Bone, David; Fordham, John; Michel, Raul

    1994-01-01

    High-resolution, CCD-readout photon counting detectors employ simple centroiding algorithms to define the spatial position of each detected event. The accuracy of centroiding depends strongly on a number of parameters, including the profile, energy and width of the intensified event. In this paper, we analyse how the characteristics of an intensified event change as the input count rate increases, and the consequent effect on centroiding. The changes in these parameters are applied in particular to the MIC photon counting detector developed at UCL for ground- and space-based astronomical applications. This detector has a maximum format of 3072x2304 pixels, permitting its use in the highest resolution applications. Individual events, at light levels from 5 to 1000k events/s over the detector area, were analysed. It was found that both the asymmetry and the width of the event profiles were strongly dependent upon the energy of the intensified event. The variation in profile then affected the centroiding accuracy, leading to loss of resolution. These inaccuracies have been quantified for two different 3-pixel CCD centroiding algorithms and one 2-pixel algorithm. The results show that a maximum error of less than 0.05 CCD pixel occurs with the 3-pixel algorithms and 0.1 CCD pixel with the 2-pixel algorithm. An improvement is proposed by utilising straight-pore MCPs in the intensifier and a 70 μm air gap in front of the CCD. (orig.)
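
    The centroiding step itself is a short calculation: the event position is estimated from the intensity-weighted mean of the pixels around the local maximum. A minimal one-dimensional sketch of 3-pixel and 2-pixel centre-of-gravity algorithms follows; it is illustrative only and not the MIC detector's actual implementation:

```python
import numpy as np

def centroid_3pixel(profile, peak):
    """Centre of gravity over the peak pixel and its two neighbours."""
    idx = np.array([peak - 1, peak, peak + 1])
    w = profile[idx].astype(float)
    return np.sum(idx * w) / np.sum(w)

def centroid_2pixel(profile, peak):
    """Centre of gravity over the peak pixel and its brighter neighbour."""
    nb = peak - 1 if profile[peak - 1] > profile[peak + 1] else peak + 1
    idx = np.array([peak, nb])
    w = profile[idx].astype(float)
    return np.sum(idx * w) / np.sum(w)

# Hypothetical event profile sampled on CCD pixels (asymmetric, as at high event energy).
event = np.array([2.0, 10.0, 48.0, 60.0, 30.0, 5.0])
peak = int(np.argmax(event))
print(centroid_3pixel(event, peak), centroid_2pixel(event, peak))
```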

  9. Box-counting dimension revisited: presenting an efficient method of minimising quantisation error and an assessment of the self-similarity of structural root systems

    Directory of Open Access Journals (Sweden)

    Martin eBouda

    2016-02-01

    Full Text Available Fractal dimension (FD), estimated by box-counting, is a metric used to characterise plant anatomical complexity or space-filling characteristics for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantisation error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterise the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitised in 3D and subjected to box-counts. A pattern search algorithm was used to minimise QE by optimising grid placement, and its efficiency was compared to the brute-force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE due to both grid position and orientation was a significant source of error in FD estimates, but pattern search provided an efficient means of minimising it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute-force method. Our representations of coarse root system digitisations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did
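
    The core of the box-counting procedure is to count occupied grid boxes at several scales and fit the slope of log N against log(1/s); quantisation error can be reduced by trying several grid placements at each scale and keeping the minimum count. The sketch below uses a simple random grid-offset search on a 2-D point set rather than the 3-D pattern-search optimisation described in the article:

```python
import numpy as np

def box_count(points, size, offset):
    """Number of boxes of a given size occupied by the (shifted) point set."""
    idx = np.floor((points + offset) / size).astype(int)
    return len({tuple(row) for row in idx})

def fractal_dimension(points, sizes, n_offsets=8, seed=0):
    rng = np.random.default_rng(seed)
    counts = []
    for s in sizes:
        # Minimum over random grid translations approximates the QE-minimising count.
        trials = [box_count(points, s, rng.uniform(0, s, points.shape[1]))
                  for _ in range(n_offsets)]
        counts.append(min(trials))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Hypothetical 2-D point cloud standing in for a digitised root system.
pts = np.random.default_rng(2).random((5000, 2))
print(fractal_dimension(pts, sizes=[0.5, 0.25, 0.125, 0.0625, 0.03125]))
```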

  10. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Problems and precision of the alpha scintillation radon counting system

    International Nuclear Information System (INIS)

    Lucas, H.F.; Markuu, F.

    1985-01-01

    Variations in efficiency as large as 3% have been found for radon scintillation counting systems in which the photomultiplier tubes are sensitive to the thermoluminescent photons emitted by the scintillator after exposure to light or for which the resolution has deteriorated. The additional standard deviation caused by counting a radon chamber on multiple counting systems has been evaluated and the effect, if present, did not exceed about 0.1%. The chambers have been calibrated for the measurement of radon in air, and the standard deviation was equal to statistical counting error combined with a systematic error of 1.1%. 3 references, 2 figures, 2 tables

  12. A new stratification of mourning dove call-count routes

    Science.gov (United States)

    Blankenship, L.H.; Humphrey, A.B.; MacDonald, D.

    1971-01-01

    The mourning dove (Zenaidura macroura) call-count survey is a nationwide audio-census of breeding mourning doves. Recent analyses of the call-count routes have utilized a stratification based upon physiographic regions of the United States. An analysis of 5 years of call-count data, based upon stratification using potential natural vegetation, has demonstrated that this new stratification results in strata with greater homogeneity than the physiographic strata, provides lower error variance, and hence generates greater precision in the analysis without an increase in call-count routes. Error variance was reduced approximately 30 percent for the contiguous United States. This indicates that future analyses based upon the new stratification will result in an increased ability to detect significant year-to-year changes.

  13. Application of neutron multiplicity counting to waste assay

    Energy Technology Data Exchange (ETDEWEB)

    Pickrell, M.M.; Ensslin, N. [Los Alamos National Lab., NM (United States); Sharpe, T.J. [North Carolina State Univ., Raleigh, NC (United States)

    1997-11-01

    This paper describes the use of a new figure of merit code that calculates both bias and precision for coincidence and multiplicity counting, and determines the optimum regions for each in waste assay applications. A "tunable multiplicity" approach is developed that uses a combination of coincidence and multiplicity counting to minimize the total assay error. An example is shown where multiplicity analysis is used to solve for mass, alpha, and multiplication, and tunable multiplicity is shown to work well. The approach provides a method for selecting coincidence, multiplicity, or tunable multiplicity counting to give the best assay with the lowest total error over a broad spectrum of assay conditions. 9 refs., 6 figs.

  14. Accuracy in activation analysis: count rate effects

    International Nuclear Information System (INIS)

    Lindstrom, R.M.; Fleming, R.F.

    1980-01-01

    The accuracy inherent in activation analysis is ultimately limited by the uncertainty of counting statistics. When careful attention is paid to detail, several workers have shown that all systematic errors can be reduced to an insignificant fraction of the total uncertainty, even when the statistical limit is well below one percent. A matter of particular importance is the reduction of errors due to high counting rate. The loss of counts due to random coincidence (pulse pileup) in the amplifier and to digitization time in the ADC may be treated as a series combination of extending and non-extending dead times, respectively. The two effects are experimentally distinct. Live timer circuits in commercial multi-channel analyzers compensate properly for ADC dead time for long-lived sources, but not for pileup. Several satisfactory solutions are available, including pileup rejection and dead time correction circuits, loss-free ADCs, and computed corrections in a calibrated system. These methods are sufficiently reliable and well understood that a decaying source can be measured routinely with acceptably small errors at a dead time as high as 20 percent
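
    The two loss mechanisms correspond to the classic dead-time models: for a non-extending (non-paralysable) dead time τ the true rate follows from the observed rate in closed form, while an extending (paralysable) dead time obeys m = n·exp(−nτ) and must be inverted numerically. The sketch below applies the standard textbook corrections to hypothetical rates; it is not the authors' calibration procedure:

```python
import numpy as np
from scipy.optimize import brentq

def true_rate_nonextending(m, tau):
    """Invert m = n / (1 + n*tau): non-paralysable (non-extending) dead time."""
    return m / (1.0 - m * tau)

def true_rate_extending(m, tau):
    """Numerically invert m = n * exp(-n*tau): paralysable (extending) dead time."""
    # Search below the rate at which the observed rate peaks (n = 1/tau).
    return brentq(lambda n: n * np.exp(-n * tau) - m, 0.0, 1.0 / tau)

m = 50_000.0      # observed counts per second (hypothetical)
tau = 4e-6        # dead time in seconds (hypothetical)
print(true_rate_nonextending(m, tau), true_rate_extending(m, tau))
```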

  15. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
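
    The one-step-late MAP idea can be sketched compactly: each image follows the usual EM update, but the sensitivity term in the denominator is augmented with the derivative of the joint penalty evaluated at the current estimates, here a quadratic penalty on voxel-wise differences between two longitudinal images. The following is a toy 1-D illustration with an assumed random system matrix, not the MAP-SLR implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_bins = 32, 64
A = rng.random((n_bins, n_vox))                 # hypothetical system matrix
truth1 = np.ones(n_vox); truth1[10:15] = 5.0    # baseline image with a "tumour"
truth2 = truth1.copy(); truth2[10:15] = 3.0     # follow-up: reduced activity
y1 = rng.poisson(A @ truth1)
y2 = rng.poisson(A @ truth2)

beta = 0.05
lam1 = np.ones(n_vox)
lam2 = np.ones(n_vox)
sens = A.sum(axis=0)

for _ in range(100):
    # One-step-late MAP-EM: the penalty gradient uses the *current* estimates.
    grad1 = beta * (lam1 - lam2)
    grad2 = beta * (lam2 - lam1)
    lam1 = lam1 / np.maximum(sens + grad1, 1e-9) * (A.T @ (y1 / (A @ lam1)))
    lam2 = lam2 / np.maximum(sens + grad2, 1e-9) * (A.T @ (y2 / (A @ lam2)))

print(lam1[10:15].round(2), lam2[10:15].round(2))
```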

  16. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    Science.gov (United States)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.

  17. Total lymphocyte count as a substitute to cd4 count in management of hiv infected individuals in resource limited society

    International Nuclear Information System (INIS)

    Daud, M.Y.; Qazi, R.A.

    2015-01-01

    Pakistan is a resource-limited society and the gold standard parameters to monitor HIV disease activity are very costly. The objective of the study was to evaluate total lymphocyte count (TLC) as a surrogate for CD4 count to monitor disease activity in HIV/AIDS in a resource-limited society. Methods: This cross-sectional study was carried out at the HIV/AIDS treatment centre, Pakistan Institute of Medical Sciences (PIMS), Islamabad. A total of seven hundred and seventy four (774) HIV-positive patients were enrolled in this study, and their CD4 count and total lymphocyte count were checked to find any correlation between the two by using the Spearman rank correlation coefficient. Results: The mean CD4 count was 434.30 ± 269.23, with a minimum CD4 count of 9.00 and a maximum of 1974.00. The mean total lymphocyte count (TLC) was 6764.0052 ± 2364.02, with a minimum TLC of 1200.00 and a maximum of 20200.00. Using Pearson's correlation (r), there was a significant and positive correlation between TLC and CD4 count (r2 = 0.127, p = 0.000) at the 0.01 level. Conclusion: Our study showed a significant positive correlation between CD4 count and total lymphocyte count (TLC), so TLC can be used as a marker of disease activity in HIV-infected patients. (author)

  18. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    Science.gov (United States)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  19. Regional compensation for statistical maximum likelihood reconstruction error of PET image pixels

    International Nuclear Information System (INIS)

    Forma, J; Ruotsalainen, U; Niemi, J A

    2013-01-01

    In positron emission tomography (PET), there is increasing interest in studying not only the regional mean tracer concentration, but also its variation arising from local differences in physiology, i.e. the tissue heterogeneity. However, in reconstructed images this physiological variation is shadowed by a large reconstruction error, which is caused by noisy data and the inversion of the tomographic problem. We present a new procedure which can quantify the error variation in regional reconstructed values for a given PET measurement and reveal the remaining tissue heterogeneity. The error quantification is made by creating and reconstructing noise realizations of virtual sinograms which are statistically similar to the measured sinogram. Tests with physical phantom data show that characterization of the error variation and of the true heterogeneity is possible, despite the existing model error when a real measurement is considered. (paper)

  20. Automation of Sample Transfer and Counting on Fast Neutron ActivationSystem

    International Nuclear Information System (INIS)

    Dewita; Budi-Santoso; Darsono

    2000-01-01

    The automation of sample transfer and counting concerns the process of transferring the sample to the activation and counting positions, which had previously been done manually by switch and was developed here into automatically programmed logic instructions. The development consisted of constructing the electronics hardware and software for that communication. Transfer time measurement is in seconds and was done automatically with an error of 1.6 ms. The counting and activation times are set by the user in seconds and minutes; the execution error in minutes was 8.2 ms. This developed system will make it possible to measure short half-life elements and to run cyclic activation processes. (author)

  1. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application in real experimental data is often hindered by slow computation of likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement on computational speed and is applicable to arbitrarily large number of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.

  2. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.

  3. A New Method for Calculating Counts in Cells

    Science.gov (United States)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed as measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm, which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.

  4. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.

    Science.gov (United States)

    Klumpp, John; Brandl, Alexander

    2015-03-01

    A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility under an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
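
    The time-interval idea can be illustrated with a sequential log-likelihood ratio: under a Poisson background the waiting times between counts are exponential with the background rate, and each observed interval updates the evidence for an elevated rate. The sketch below is a generic SPRT-style illustration with assumed rates, not the detection system proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
r_bg, r_src = 5.0, 8.0   # hypothetical background and background-plus-source rates (cps)

# Simulate inter-arrival times from a weak source present on top of background.
intervals = rng.exponential(1.0 / r_src, size=200)

# Log-likelihood ratio of "elevated rate" vs "background only" for exponential intervals.
llr = np.cumsum(np.log(r_src / r_bg) - (r_src - r_bg) * intervals)

# Wald-style decision thresholds for chosen Type I/II error rates.
alpha_err, beta_err = 0.01, 0.01
upper, lower = np.log((1 - beta_err) / alpha_err), np.log(beta_err / (1 - alpha_err))
crossing = np.argmax(llr > upper) if np.any(llr > upper) else None
print("alarm after count:", crossing)
```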

  5. Implementation of a nonlinear filter for online nuclear counting

    International Nuclear Information System (INIS)

    Coulon, R.; Dumazert, J.; Kondrasovs, V.; Normand, S.

    2016-01-01

    Nuclear counting is a challenging task for nuclear instrumentation because of the stochastic nature of radioactivity. Event counts have to be processed and filtered to determine a stable count rate value and to monitor variations in the measured quantity. An innovative approach to nuclear counting is presented in this study, improving response time while maintaining count rate stability. Some nonlinear filters providing a local maximum likelihood estimation of the signal have recently been developed; these have been tested and compared with conventional linear filters. The nonlinear filter thus developed shows significant performance in terms of response time and measurement precision. The filter can also be easily embedded in digital signal processor (DSP) electronics based on field-programmable gate arrays (FPGA) or microcontrollers, compatible with real-time requirements. - Highlights: • An efficient approach based on nonlinear filtering has been implemented. • The hypothesis test provides a local maximum likelihood estimation of the count rate. • The filter ensures an optimal compromise between precision and response time.

  6. Regression Models For Multivariate Count Data.

    Science.gov (United States)

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

  7. Quality control methods in accelerometer data processing: identifying extreme counts.

    Directory of Open Access Journals (Sweden)

    Carly Rich

    Full Text Available Accelerometers are designed to measure plausible human activity; however, extremely high count values (EHCV) have been recorded in large-scale studies. Using population data, we develop methodological principles for establishing an EHCV threshold, propose a threshold to define EHCV in the ActiGraph GT1M, determine occurrences of EHCV in a large-scale study, identify device-specific error values, and investigate the influence of varying EHCV thresholds on daily vigorous PA (VPA). We estimated quantiles to analyse the distribution of all accelerometer positive count values obtained from 9005 seven-year-old children participating in the UK Millennium Cohort Study. A threshold to identify EHCV was derived by differentiating the quantile function. Data were screened for device-specific error count values and EHCV, and a sensitivity analysis was conducted to compare daily VPA estimates using three approaches to accounting for EHCV. Using our proposed threshold of ≥ 11,715 counts/minute to identify EHCV, we found that only 0.7% of all non-zero counts measured in MCS children were EHCV; in 99.7% of these children, EHCV comprised < 1% of total non-zero counts. Only 11 MCS children (0.12% of the sample) returned accelerometers that contained negative counts; out of 237 such values, 211 counts were equal to -32,768 in one child. The medians of daily minutes spent in VPA obtained without excluding EHCV, and when using a higher threshold (≥19,442 counts/minute), were, respectively, 6.2% and 4.6% higher than when using our threshold (6.5 minutes; p<0.0001). Quality control processes should be undertaken during accelerometer fieldwork and prior to analysing data to identify monitors recording error values and EHCV. The proposed threshold will improve the validity of VPA estimates in children's studies using the ActiGraph GT1M by ensuring only plausible data are analysed. These methods can be applied to define appropriate EHCV thresholds for different accelerometer models.

  8. Practical application of the theory of errors in measurement

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made

  9. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

    The purpose of this study is to provide guidance for selecting an error distribution by analyzing the influence of the assumed statistical distribution of bioassay measurement errors on the intake estimation. For this purpose, intakes were estimated using the maximum likelihood method for cases in which the error distribution is normal or lognormal, and the estimated intakes for the two distributions were compared. According to the results of this study, when measurement results for lung retention are somewhat greater than the limit of detection, the distribution type has a negligible influence on the results. For measurement results for the daily excretion rate, however, the intakes obtained assuming a lognormal distribution were 10% higher than those obtained assuming a normal distribution. In view of these facts, when the uncertainty is governed by counting statistics, the distribution type has no influence on the intake estimation, whereas when other uncertainty components are predominant, it is clearly desirable to estimate the intake assuming a lognormal distribution

  10. Error Characterization and Mitigation for 16Nm MLC NAND Flash Memory Under Total Ionizing Dose Effect

    Science.gov (United States)

    Li, Yue (Inventor); Bruck, Jehoshua (Inventor)

    2018-01-01

    A data device includes a memory having a plurality of memory cells configured to store data values in accordance with a predetermined rank modulation scheme that is optional and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.

  11. Evaluating remotely sensed plant count accuracy with differing unmanned aircraft system altitudes, physical canopy separations, and ground covers

    Science.gov (United States)

    Leiva, Josue Nahun; Robbins, James; Saraswat, Dharmendra; She, Ying; Ehsani, Reza

    2017-07-01

    This study evaluated the effect of flight altitude and canopy separation of container-grown Fire Chief™ arborvitae (Thuja occidentalis L.) on counting accuracy. Images were taken at 6, 12, and 22 m above the ground using unmanned aircraft systems. Plants were spaced to achieve three canopy separation treatments: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. Plants were placed on two different ground covers: black fabric and gravel. A counting algorithm was trained using Feature Analyst®. Total counting error, false positives, and unidentified plants were reported for the images analyzed. In general, total counting error was smaller when plants were fully separated. The effect of ground cover on counting accuracy varied with the counting algorithm. Total counting error for plants placed on gravel (-8) was larger than for those on black fabric (-2); however, false positive counts were similar for black fabric (6) and gravel (6). Nevertheless, output images of plants placed on gravel did not show a negative effect of the ground cover but were impacted by differences in image spatial resolution.

  12. Every photon counts: improving low, mid, and high-spatial frequency errors on astronomical optics and materials with MRF

    Science.gov (United States)

    Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul

    2016-07-01

    Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5-2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS while using careful metrology setups. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce `perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. Furthermore, a novel MRF

  13. Development of a stained cell nuclei counting system

    Science.gov (United States)

    Timilsina, Niranjan; Moffatt, Christopher; Okada, Kazunori

    2011-03-01

    This paper presents a novel cell counting system which exploits the Fast Radial Symmetry Transformation (FRST) algorithm [1]. The driving force behind our system is research on neurogenesis in the intact nervous system of Manduca sexta, or the Tobacco Hornworm, which was being studied to assess the impact of age, food and environment on neurogenesis. The varying thickness of the intact nervous system in this species often yields images with an inhomogeneous background and inconsistencies such as varying illumination, variable contrast, and irregular cell size. For automated counting, such inhomogeneity and inconsistencies must be addressed, which no existing work has done successfully. Thus, our goal is to devise a new cell counting algorithm for images with a non-uniform background. Our solution adapts FRST: a computer vision algorithm designed to detect points of interest in circular regions such as human eyes. This algorithm enhances the occurrences of the stained-cell nuclei in 2D digital images and negates the problems caused by their inhomogeneity. Besides FRST, our algorithm employs standard image processing methods, such as mathematical morphology and connected component analysis. We have evaluated the developed cell counting system with fourteen digital images of the Tobacco Hornworm's nervous system collected for this study, with ground-truth cell counts by biology experts. Experimental results show that our system has a minimum error of 1.41% and a mean error of 16.68%, which is at least forty-four percent better than the algorithm without FRST.

  14. A study of the effect of measurement error in predictor variables in nondestructive assay

    International Nuclear Information System (INIS)

    Burr, Tom L.; Knepper, Paula L.

    2000-01-01

    It is not widely known that ordinary least squares estimates exhibit bias if there are errors in the predictor variables. For example, enrichment measurements are often fit to two predictors: Poisson-distributed count rates in the region of interest and in the background. Both count rates have at least random variation due to counting statistics. Therefore, the parameter estimates will be biased. In this case, the effect of bias is a minor issue because there is almost no interest in the parameters themselves. Instead, the parameters will be used to convert count rates into estimated enrichment. In other cases, this bias source is potentially more important. For example, in tomographic gamma scanning, there is an emission stage which depends on predictors (the 'system matrix') that are estimated with error during the transmission stage. In this paper, we provide background information for the impact and treatment of errors in predictors, present results of candidate methods of compensating for the effect, review some of the nondestructive assay situations where errors in predictors occurs, and provide guidance for when errors in predictors should be considered in nondestructive assay
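
    The bias in question is the classical attenuation of a regression slope when a predictor is noisy: the expected OLS slope is the true slope multiplied by the reliability ratio var(x_true)/(var(x_true) + var(error)). A small simulation makes this concrete; it is a generic illustration, not the enrichment or tomographic gamma scanning data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, true_slope = 5000, 2.0
sigma_x_true, sigma_x_err, sigma_y_err = 1.0, 0.7, 0.3

x_true = rng.normal(0.0, sigma_x_true, n)
x_obs = x_true + rng.normal(0.0, sigma_x_err, n)      # predictor measured with error
y = true_slope * x_true + rng.normal(0.0, sigma_y_err, n)

ols_slope = np.polyfit(x_obs, y, 1)[0]
reliability = sigma_x_true**2 / (sigma_x_true**2 + sigma_x_err**2)
print(f"OLS slope {ols_slope:.3f} vs attenuation prediction {true_slope * reliability:.3f}")
```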

  15. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of the maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The result of the second-order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the power generator output. The simulation results, calculated using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).

  16. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
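
    The information loss from dichotomising a count is easy to see under a Poisson model: the probability of a positive count is 1 − exp(−μ), so the exact binary model has a complementary log-log rather than a logistic link, and regression on the indicator discards the magnitude of the counts. A short simulation of this relationship (generic, not the caries trial data):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
x = rng.normal(size=n)
mu = np.exp(-0.5 + 0.8 * x)            # Poisson mean under a log link
y = rng.poisson(mu)
z = (y > 0).astype(int)                # dichotomised outcome

# Compare the empirical positive fraction with the exact P(Y>0) = 1 - exp(-mu)
# within a few covariate bins.
bins = np.quantile(x, [0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (x >= lo) & (x <= hi)
    print(f"x in [{lo:+.2f},{hi:+.2f}] empirical {z[sel].mean():.3f} "
          f"theory {np.mean(1 - np.exp(-mu[sel])):.3f}")
```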

  17. Relationship of blood and milk cell counts with mastitic pathogens in Murrah buffaloes

    Directory of Open Access Journals (Sweden)

    C. Singh

    2010-02-01

    Full Text Available The present study was undertaken to see the effect of mastitic pathogens on the blood and milk counts of Murrah buffaloes. Milk and blood samples were collected from 9 mastitic Murrah buffaloes. The total leucocyte counts (TLC) and differential leucocyte counts (DLC) in blood were within the normal range, and there was a non-significant change in blood counts irrespective of the different mastitic pathogens. Normal milk quarter samples had significantly (P<0.01) lower somatic cell counts (SCC). Lymphocytes were significantly higher in normal milk samples, whereas infected samples had a significant increase (P<0.01) in milk neutrophils. S. aureus infected buffaloes had the maximum milk SCC, followed by E. coli and S. agalactiae. The influx of neutrophils into the buffalo mammary gland was maximum for S. agalactiae, followed by E. coli and S. aureus. The study indicated that the level of mastitis had no effect on blood counts, but it influenced the milk SCC of normal quarters.

  18. Eutectic cell and nodule count as the quality factors of cast iron

    Directory of Open Access Journals (Sweden)

    E. Fraś

    2008-10-01

    Full Text Available In this work the predictions based on a theoretical analysis aimed at elucidating the eutectic cell count or nodule count N were experimentally verified. The experimental work focused on processing flake graphite and ductile iron under various inoculation conditions in order to achieve various physicochemical states of the experimental melts. In addition, plates of various wall thicknesses, s, were cast and the resultant eutectic cell or nodule counts were established. Moreover, thermal analysis was used to determine the degree of maximum undercooling for the graphite eutectic, Tm. A relationship was found between the eutectic cell or nodule count and the maximum undercooling Tm. It was also found that N can be related to the wall thickness of plate-shaped castings. Finally, the present work provides a rationale for the effect of technological factors such as the melt chemistry, inoculation practice, and holding temperature and time on the resultant cell count or nodule count of cast iron. In particular, good agreement was found between the predictions of the theoretical analysis and the experimental data.

  19. ELLIPTICAL WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. III. THE EFFECT OF RANDOM COUNT NOISE ON IMAGE MOMENTS IN WEAK LENSING ANALYSIS

    International Nuclear Information System (INIS)

    Okura, Yuki; Futamase, Toshifumi

    2013-01-01

    This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise makes additional moments and centroid shift error, and those first-order effects are canceled in averaging, but the second-order effects are not canceled. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ∼ 11.7.

  20. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
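
    The time-domain machinery referred to here reduces to solving a Toeplitz system built from the auto-correlation of the source-component trace, with the cross-correlation against the radial trace as the right-hand side. The sketch below performs that damped least-squares step with SciPy's Levinson-based Toeplitz solver on synthetic traces; it is a simplified time-domain deconvolution, not the full maximum entropy scheme of the paper:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import correlate

def time_domain_deconvolution(radial, vertical, nlags=100, damping=0.01):
    """Estimate a receiver-function-like filter h from (R_zz + damping) h = r_rz."""
    full_auto = correlate(vertical, vertical, mode="full")
    full_cross = correlate(radial, vertical, mode="full")
    mid = len(vertical) - 1
    r_zz = full_auto[mid:mid + nlags].copy()
    r_rz = full_cross[mid:mid + nlags]
    r_zz[0] *= (1.0 + damping)               # water-level style damping
    return solve_toeplitz(r_zz, r_rz)        # Levinson-type solve of the Toeplitz system

# Hypothetical traces: vertical = source wavelet train, radial = delayed, scaled copies.
rng = np.random.default_rng(7)
vertical = rng.standard_normal(1024)
h_true = np.zeros(100); h_true[[0, 30, 60]] = [1.0, 0.5, 0.3]
radial = np.convolve(vertical, h_true)[:1024]
h_est = time_domain_deconvolution(radial, vertical)
print(np.round(h_est[[0, 30, 60]], 2))
```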

  1. Design Study of an Incinerator Ash Conveyor Counting System - 13323

    International Nuclear Information System (INIS)

    Jaederstroem, Henrik; Bronson, Frazier

    2013-01-01

    A design study has been performed for a system that should measure the Cs-137 activity in ash from an incinerator. Radioactive ash, expected to contain both Cs-134 and Cs-137, will be transported on a conveyor belt at 0.1 m/s. The objective of the counting system is to determine the Cs-137 activity and direct the ash to the correct stream after a diverter. The decision levels range from 8000 to 400000 Bq/kg and the decision error should be as low as possible. The decision error depends on the total measurement uncertainty, which in turn depends on the counting statistics and the uncertainty in the efficiency of the geometry. For the low-activity decision it is necessary to know the efficiency in order to determine whether the signal from the Cs-137 is above the minimum detectable activity and whether it generates enough counts to reach the desired precision. For the higher-activity decision the uncertainty of the efficiency needs to be understood to minimize decision errors. The total efficiency of the detector is needed to determine whether the detector will be able to operate at the count rate corresponding to the highest expected activity. The design study presented in this paper describes how the objectives of the monitoring system were established, how the choice of detector was made and how ISOCS (In Situ Object Counting System) mathematical modeling was used to calculate the efficiency. The ISOCS uncertainty estimator (IUE) was used to determine which parameters of the ash are important to know accurately in order to minimize the uncertainty of the efficiency. The examined parameters include the height of the ash on the conveyor belt, the matrix composition and density, and the relative efficiency of the detector. (authors)

  2. Design Study of an Incinerator Ash Conveyor Counting System - 13323

    Energy Technology Data Exchange (ETDEWEB)

    Jaederstroem, Henrik; Bronson, Frazier [Canberra Industries Inc., 800 Research Parkway Meriden CT 06450 (United States)

    2013-07-01

    A design study has been performed for a system that should measure the Cs-137 activity in ash from an incinerator. Radioactive ash, expected to contain both Cs-134 and Cs-137, will be transported on a conveyor belt at 0.1 m/s. The objective of the counting system is to determine the Cs-137 activity and direct the ash to the correct stream after a diverter. The decision levels range from 8000 to 400000 Bq/kg and the decision error should be as low as possible. The decision error depends on the total measurement uncertainty, which in turn depends on the counting statistics and the uncertainty in the efficiency of the geometry. For the low-activity decision it is necessary to know the efficiency in order to determine whether the signal from the Cs-137 is above the minimum detectable activity and whether it generates enough counts to reach the desired precision. For the higher-activity decision the uncertainty of the efficiency needs to be understood to minimize decision errors. The total efficiency of the detector is needed to determine whether the detector will be able to operate at the count rate corresponding to the highest expected activity. The design study presented in this paper describes how the objectives of the monitoring system were established, how the choice of detector was made and how ISOCS (In Situ Object Counting System) mathematical modeling was used to calculate the efficiency. The ISOCS uncertainty estimator (IUE) was used to determine which parameters of the ash are important to know accurately in order to minimize the uncertainty of the efficiency. The examined parameters include the height of the ash on the conveyor belt, the matrix composition and density, and the relative efficiency of the detector. (authors)

  3. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy.

    Science.gov (United States)

    Cohen, E A K; Ober, R J

    2013-12-15

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood model and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed-form solutions to the estimators and derive their distributions. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data.
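
    The closed-form estimators for the scalar-covariance case are not reproduced in this record, but the weighting idea can be illustrated with a minimal numpy sketch: each control point is weighted by the inverse of its localization variance when solving for an affine transform. The simulated geometry and noise levels are assumptions, and the sketch ignores the errors-in-variables aspect (noise in the reference-channel points) that the full treatment handles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated control points (CPs) in the reference channel (assumed layout)
n = 50
x = rng.uniform(0, 100, size=(n, 2))

# Assumed true affine transform: y = A x + t
A_true = np.array([[1.01, 0.02], [-0.01, 0.99]])
t_true = np.array([5.0, -3.0])

# Heteroscedastic localization noise: covariance of CP i is sigma_i^2 * I
sigma = rng.uniform(0.05, 0.5, size=n)
y = x @ A_true.T + t_true + rng.normal(scale=sigma[:, None], size=(n, 2))

# Weighted (generalized) least squares for the affine parameters.
# Design row per point: [x1, x2, 1]; weight 1/sigma_i^2.
X = np.hstack([x, np.ones((n, 1))])
W = 1.0 / sigma**2
XtWX = X.T @ (W[:, None] * X)
XtWy = X.T @ (W[:, None] * y)
beta = np.linalg.solve(XtWX, XtWy)   # 3x2: linear part and translation

A_hat, t_hat = beta[:2].T, beta[2]
print("A_hat:\n", A_hat)
print("t_hat:", t_hat)
```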

  4. Atmospheric mold spore counts in relation to meteorological parameters

    Science.gov (United States)

    Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.

    Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated with some of the weather variables, and a model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The model for Alternaria and Epicoccum incorporated the annual seasonal cycle. Fungal spore counts can be modeled by time series analysis and related to meteorological parameters controlling for seasonality; this modeling can provide estimates of exposure to fungal aeroallergens.
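
    A regression with ARMA errors of the kind described (here done with SAS PROC ARIMA) can be sketched with statsmodels' SARIMAX, which fits a regression on exogenous variables with an ARMA error structure; the simulated weather series, coefficients and ARMA orders below are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n = 365

# Hypothetical daily weather covariates
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
humidity = 50 + rng.normal(0, 10, n)
exog = pd.DataFrame({"temp": temp, "humidity": humidity})

# Hypothetical log spore counts with an ARMA(1,1) error structure
e = np.zeros(n)
eps = rng.normal(0, 0.3, n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + eps[t] + 0.2 * eps[t - 1]
log_counts = 2.0 + 0.05 * temp - 0.01 * humidity + e

# Regression on weather variables with ARMA(1,1) errors
model = SARIMAX(log_counts, exog=exog, order=(1, 0, 1))
res = model.fit(disp=False)
print(res.summary().tables[1])
```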

  5. Maximum Entropy and Theory Construction: A Reply to Favretti

    Directory of Open Access Journals (Sweden)

    John Harte

    2018-04-01

    Full Text Available In the maximum entropy theory of ecology (METE, the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE’s. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.

  6. Automated vehicle counting using image processing and machine learning

    Science.gov (United States)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via the use of a camera at the count site recording video of the traffic, with counting being performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure would utilize a Raspberry Pi micro-computer to detect when a car is in a lane, and generate an accurate count of vehicle movements. The method utilized in this paper would use background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids fatigue issues that are encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
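
    A minimal sketch of the background-subtraction stage using OpenCV is given below; the video filename, counting-line position and area threshold are assumptions, and the machine-learning classification step described in the paper is omitted in favour of a crude line-crossing test.

```python
import cv2

# Hypothetical input video; any fixed-camera traffic clip would do.
cap = cv2.VideoCapture("traffic.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=50, detectShadows=True)

count_line_y = 300   # virtual counting line in pixels (assumption)
min_area = 1500      # minimum blob area treated as a vehicle (assumption)
vehicle_count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Drop shadow pixels (marked 127 by MOG2) and clean up noise
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cy = y + h // 2
        # Naive test: count a blob whose centre is near the counting line.
        # Without frame-to-frame tracking this can double-count a vehicle.
        if abs(cy - count_line_y) < 5:
            vehicle_count += 1

print("vehicles counted:", vehicle_count)
cap.release()
```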

  7. Simultaneous Treatment of Missing Data and Measurement Error in HIV Research Using Multiple Overimputation.

    Science.gov (United States)

    Schomaker, Michael; Hogger, Sara; Johnson, Leigh F; Hoffmann, Christopher J; Bärnighausen, Till; Heumann, Christian

    2015-09-01

    Both CD4 count and viral load in HIV-infected persons are measured with error. There is no clear guidance on how to deal with this measurement error in the presence of missing data. We used multiple overimputation, a method recently developed in the political sciences, to account for both measurement error and missing data in CD4 count and viral load measurements from four South African cohorts of a Southern African HIV cohort collaboration. Our knowledge about the measurement error of ln CD4 and log10 viral load is part of an imputation model that imputes both missing and mismeasured data. In an illustrative example, we estimate the association of CD4 count and viral load with the hazard of death among patients on highly active antiretroviral therapy by means of a Cox model. Simulation studies evaluate the extent to which multiple overimputation is able to reduce bias in survival analyses. Multiple overimputation emphasizes more strongly the influence of having high baseline CD4 counts compared to both a complete case analysis and multiple imputation (hazard ratio for >200 cells/mm³ vs. <25 cells/mm³: 0.21 [95% confidence interval: 0.18, 0.24] vs. 0.38 [0.29, 0.48], and 0.29 [0.25, 0.34], respectively). Similar results are obtained when varying assumptions about measurement error, when using p-splines, and when evaluating time-updated CD4 count in a longitudinal analysis. The estimates of the association with viral load are slightly more attenuated when using multiple imputation instead of multiple overimputation. Our simulation studies suggest that multiple overimputation is able to reduce bias and mean squared error in survival analyses. Multiple overimputation, which can be used with existing software, offers a convenient approach to account for both missing and mismeasured data in HIV research.

  8. Limit of sensitivity of low-background counting equipment

    International Nuclear Information System (INIS)

    Homann, S.G.

    1991-01-01

    The Hazards Control Department's Radiological Measurements Laboratory (RML) analyzes many types of sample media in support of the Laboratory's health and safety program. The Department has determined that the equation for the minimum limit of sensitivity, MDC(α,β) = 2.71 + 3.29·(r_b·t_s)^1/2, is also adequate for RML counting systems with very-low-background levels. This paper reviews the normal distribution case and addresses the special case of determining the limit of sensitivity of a counting system when the background count rate is well known and small. In the latter case, we must use an exact test procedure based on the binomial distribution. However, the error in using the normal distribution for calculating a detection system's limit of sensitivity is not significant even as the total observed number of counts approaches or equals zero. 2 refs., 4 figs
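
    A worked example of the quoted limit-of-sensitivity formula, with an assumed background rate and counting time rather than values from the RML systems:

```python
import math

def detection_limit_counts(r_b, t_s):
    """Currie-type detection limit in counts for 5% false-positive and
    5% false-negative rates: L_D = 2.71 + 3.29 * sqrt(r_b * t_s),
    where r_b * t_s is the expected background count."""
    return 2.71 + 3.29 * math.sqrt(r_b * t_s)

# Hypothetical low-background counter: 0.01 cps background, 10-minute count
r_b = 0.01    # background count rate, counts per second (assumption)
t_s = 600.0   # sample counting time, seconds (assumption)
L_D = detection_limit_counts(r_b, t_s)
print(f"detection limit: {L_D:.2f} counts above background "
      f"({L_D / t_s * 1000:.1f} mcps)")
```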

  9. Validation of an automated colony counting system for group A Streptococcus.

    Science.gov (United States)

    Frost, H R; Tsoi, S K; Baker, C A; Laho, D; Sanderson-Smith, M L; Steer, A C; Smeesters, P R

    2016-02-08

    The practice of counting bacterial colony forming units on agar plates has long been used as a method to estimate the concentration of live bacteria in culture. However, due to the laborious and potentially error prone nature of this measurement technique, an alternative method is desirable. Recent technologic advancements have facilitated the development of automated colony counting systems, which reduce errors introduced during the manual counting process and recording of information. An additional benefit is the significant reduction in time taken to analyse colony counting data. Whilst automated counting procedures have been validated for a number of microorganisms, the process has not been successful for all bacteria due to the requirement for a relatively high contrast between bacterial colonies and growth medium. The purpose of this study was to validate an automated counting system for use with group A Streptococcus (GAS). Twenty-one different GAS strains, representative of major emm-types, were selected for assessment. In order to introduce the required contrast for automated counting, 2,3,5-triphenyl-2H-tetrazolium chloride (TTC) dye was added to Todd-Hewitt broth with yeast extract (THY) agar. Growth on THY agar with TTC was compared with growth on blood agar and THY agar to ensure the dye was not detrimental to bacterial growth. Automated colony counts using a ProtoCOL 3 instrument were compared with manual counting to confirm accuracy over the stages of the growth cycle (latent, mid-log and stationary phases) and in a number of different assays. The average percentage differences between plating and counting methods were analysed using the Bland-Altman method. A percentage difference of ±10 % was determined as the cut-off for a critical difference between plating and counting methods. All strains measured had an average difference of less than 10 % when plated on THY agar with TTC. This consistency was also observed over all phases of the growth
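
    The Bland-Altman comparison on percentage differences can be sketched as follows; the paired counts are simulated under an assumed ~4% scatter between methods rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired counts: manual vs automated for the same plates
manual = rng.integers(30, 300, size=40).astype(float)
automated = manual * rng.normal(1.0, 0.04, size=40)   # ~4% scatter (assumption)

# Bland-Altman analysis on percentage differences
mean_pair = (manual + automated) / 2
pct_diff = 100.0 * (automated - manual) / mean_pair
bias = pct_diff.mean()
loa = 1.96 * pct_diff.std(ddof=1)   # half-width of 95% limits of agreement

print(f"mean percentage difference: {bias:+.2f}%")
print(f"95% limits of agreement: {bias - loa:+.2f}% to {bias + loa:+.2f}%")
print("within the ±10% criterion:", abs(bias) < 10)
```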

  10. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  11. Measures to handle accidental contamination of persons with standard counting devices available in a department of nuclear medicine

    International Nuclear Information System (INIS)

    Aiginger, H.; Lauer, D.; Unfried, E.; Koenig, F.; Ogris, E.

    1998-01-01

    The assets and shortcomings of a well-type NaI detector and a Ge detector were examined using the Marinelli geometry, test tube geometry, and beaker geometry. Plots of the efficiency vs. energy (efficiency calibration), recorded time vs. true time (dead-time effects), and maximum activity vs. energy are reproduced. A high counting efficiency is typical of the scintillation detector in the well for the test tube geometry, particularly in the low energy range. For energies higher than 100 keV, the counting efficiency decreases because of the increasing penetration of the detector bulk by high-energy photons. For the germanium detector, the highest counting efficiency was achieved in the beaker geometry. A linear relationship exists between the calculated and measured counts at the beginning of the recorded curve for both systems. For the well-type detector the maximum detectable count rate was about 30 kcps, the linearity of the plot of the true count rate was guaranteed up to 10 kcps. Dead time correction was to be made at higher count rates. For the germanium detector the maximum detectable count rate was only about 8 kcps due to the longer dead time, the linear segment, however, was longer than for the scintillation detector. It is concluded that although the maximum detectable count rate of the germanium detector is lower, higher true activities can be detected with it owing to the lower detection efficiency. The well-type scintillation detector is advantageous for the test tube geometry. (P.A.)

  12. Estimation of equivalent dose and its uncertainty in the OSL SAR protocol when count numbers do not follow a Poisson distribution

    International Nuclear Information System (INIS)

    Bluszcz, Andrzej; Adamiec, Grzegorz; Heer, Aleksandra J.

    2015-01-01

    The current work focuses on the estimation of equivalent dose and its uncertainty using the single aliquot regenerative protocol in optically stimulated luminescence measurements. The authors show that the count numbers recorded with the use of photomultiplier tubes are well described by negative binomial distributions, different ones for background counts and photon induced counts. This fact is then exploited in pseudo-random count number generation and simulations of D_e determination assuming a saturating exponential growth. A least squares fitting procedure is applied using different types of weights to determine whether the obtained D_e's and their error estimates are unbiased and accurate. A weighting procedure is suggested that leads to almost unbiased D_e estimates. It is also shown that the assumption of Poisson distribution in D_e estimation may lead to severe underestimation of the D_e error. - Highlights: • Detailed analysis of statistics of count numbers in luminescence readers. • Generation of realistically scattered pseudo-random numbers of counts in luminescence measurements. • A practical guide for stringent analysis of D_e values and errors assessment.
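
    The negative-binomial character of the counts can be reproduced with numpy by choosing the distribution parameters from a target mean and overdispersion factor (variance-to-mean ratio); the means and overdispersion values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def nb_counts(mean, overdispersion, size):
    """Draw counts with Var = overdispersion * mean (> mean), i.e. from a
    negative binomial rather than a Poisson distribution."""
    f = overdispersion
    p = 1.0 / f
    n = mean / (f - 1.0)
    return rng.negative_binomial(n, p, size=size)

# Hypothetical OSL signal and background channels (values are assumptions)
signal = nb_counts(mean=2000.0, overdispersion=1.6, size=10000)
background = nb_counts(mean=150.0, overdispersion=1.2, size=10000)

net = signal - background
print("net counts: mean %.1f, var %.1f" % (net.mean(), net.var()))
# Under a pure Poisson assumption the variance of the net signal would be
# roughly mean(signal) + mean(background); the extra spread seen here is
# what leads to underestimated D_e errors if it is ignored.
print("Poisson-only variance estimate:", signal.mean() + background.mean())
```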

  13. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and aim of this paper lie in analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving them proper feedback.

  14. Droplet-counting Microtitration System for Precise On-site Analysis.

    Science.gov (United States)

    Kawakubo, Susumu; Omori, Taichi; Suzuki, Yasutada; Ueta, Ikuo

    2018-01-01

    A new microtitration system based on the counting of titrant droplets has been developed for precise on-site analysis. The dropping rate was controlled by inserting a capillary tube as a flow resistance in a laboratory-made micropipette. The error of titration was 3% in a simulated titration with 20 droplets. The pre-addition of a titrant was proposed for precise titration within an error of 0.5%. The analytical performances were evaluated for chelate titration, redox titration and acid-base titration.

  15. Preverbal and verbal counting and computation.

    Science.gov (United States)

    Gallistel, C R; Gelman, R

    1992-08-01

    We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.

  16. EFFECT OF MEASUREMENT ERRORS ON PREDICTED COSMOLOGICAL CONSTRAINTS FROM SHEAR PEAK STATISTICS WITH LARGE SYNOPTIC SURVEY TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others

    2013-09-01

    We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  17. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
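
    Of the approaches mentioned, regression calibration with replicate measurements is the simplest to sketch: the error-prone covariate is replaced by its best linear predictor given the replicate mean before fitting the Cox model. The simulated data and variance components below are assumptions, not values from the depression study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: true log(total REM) counts and 2 error-prone replicates
n, k = 200, 2
x_true = rng.normal(5.0, 0.6, size=n)                   # unobserved true covariate
w = x_true[:, None] + rng.normal(0, 0.4, size=(n, k))   # replicate measurements

w_bar = w.mean(axis=1)
s2_within = np.mean(w.var(axis=1, ddof=1))   # estimates the measurement-error variance
s2_between = w_bar.var(ddof=1)
s2_x = s2_between - s2_within / k            # estimates the true-covariate variance

# Regression calibration: replace w_bar by its best linear predictor of x
lam = s2_x / (s2_x + s2_within / k)
x_calibrated = w_bar.mean() + lam * (w_bar - w_bar.mean())

print(f"estimated reliability (lambda): {lam:.2f}")
# x_calibrated would then be used as the covariate in the Cox model
# in place of the error-prone measurement.
```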

  18. Error and corrections with scintigraphic measurement of gastric emptying of solid foods

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, J.H.; Van Deventer, G.; Graham, L.S.; Thomson, J.; Thomasson, D.

    1983-03-01

    Previous methods for correction of depth used geometric means of simultaneously obtained anterior and posterior counts. The present study compares this method with a new one that uses computations of depth based on peak-to-scatter (P:S) ratios. Six normal volunteers were fed a meal of beef stew, water, and chicken liver that had been labeled in vivo with both In-113m and Tc-99m. Gastric emptying was followed at short intervals with anterior counts of peak and scattered radiation for each nuclide, as well as posteriorly collected peak counts from the gastric ROI. Depth of the nuclides was estimated by the P:S method as well as the older method. Both gave similar results. Errors from septal penetration or scatter proved to be a significantly larger problem than errors from changes in depth.
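
    The geometric-mean (conjugate-view) depth correction referred to above amounts to combining simultaneous anterior and posterior counts as sqrt(A x P), which is, to first order, independent of source depth in a uniform attenuator. A small sketch with made-up count values:

```python
import numpy as np

# Hypothetical anterior/posterior counts from the gastric ROI over time
anterior = np.array([52000, 46000, 39000, 31000, 24000], dtype=float)
posterior = np.array([48000, 45000, 41000, 35000, 29000], dtype=float)

# Conjugate-view (geometric mean) correction for source depth
geometric_mean = np.sqrt(anterior * posterior)
retention = 100.0 * geometric_mean / geometric_mean[0]
print("percent retention:", np.round(retention, 1))
```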

  19. Relationship of milking rate to somatic cell count.

    Science.gov (United States)

    Brown, C A; Rischette, S J; Schultz, L H

    1986-03-01

    Information on milking rate, monthly bucket somatic cell counts, mastitis treatment, and milk production was obtained from 284 lactations of Holstein cows separated into three lactation groups. Significant correlations between somatic cell count (linear score) and other parameters included production in lactation 1 (-.185), production in lactation 2 (-.267), and percent 2-min milk in lactation 2 (.251). Somatic cell count tended to increase with maximum milking rate in all lactations, but correlations were not statistically significant. Twenty-nine percent of cows with milking rate measurements were treated for clinical mastitis. Treated cows in each lactation group produced less milk than untreated cows. In the second and third lactation groups, treated cows had a shorter total milking time and a higher percent 2-min milk than untreated cows, but differences were not statistically significant. Overall, the data support the concept that faster milking cows tend to have higher cell counts and more mastitis treatments, particularly beyond first lactation. However, the magnitude of the relationship was small.

  20. Practical, Reliable Error Bars in Quantum Tomography

    OpenAIRE

    Faist, Philippe; Renner, Renato

    2015-01-01

    Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of...

  1. Maximum entropy reconstructions for crystallographic imaging; Cristallographie et reconstruction d'images par maximum d'entropie

    Energy Technology Data Exchange (ETDEWEB)

    Papoular, R

    1997-07-01

    The Fourier Transform is of central importance to Crystallography since it allows the visualization in real space of three-dimensional scattering densities pertaining to physical systems from diffraction data (powder or single-crystal diffraction, using X-rays, neutrons or electrons). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by Least-Squares techniques (e.g., the Rietveld method in the case of Powder Diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in Astronomy, Radioastronomy and Medical Imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates prior knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to Crystallography are first presented. The method is then illustrated by a detailed example specific to Neutron Diffraction: the search for protons in solids. (author). 17 refs.

  2. Monitoring Milk Somatic Cell Counts

    Directory of Open Access Journals (Sweden)

    Gheorghe Şteţca

    2014-11-01

    Full Text Available The presence of somatic cells in milk is a widely disputed issue in the milk production sector. The somatic cell count in raw milk is a marker for specific cow diseases such as mastitis or swollen udder. A high level of somatic cells causes physical and chemical changes to milk composition and nutritional value, as well as to milk products. Mastitic milk is also not fit for human consumption due to its contribution to the spreading of certain diseases and to food poisoning. In view of these effects, EU regulations set the maximum admissible somatic cell count in raw milk at 400000 cells/mL starting with 2014. This study was carried out in order to examine raw milk samples provided by small farms, industrial-type farms and milk processing units. There are several ways to count somatic cells in milk, but the accepted reference method is the microscopic method described in SR EN ISO 13366-1/2008. In general, the samples registered values within the admissible limit. Periodic monitoring of the somatic cell count avoids certain technological process issues and helps ensure consumer health.

  3. Monte Carlo simulation of lung counting efficiency using a whole-body counter at a nuclear power plant

    International Nuclear Information System (INIS)

    Dongming, L.; Shuhai, J.; Houwen, L.

    2016-01-01

    In order to routinely evaluate workers' internal exposure due to intake of radionuclides, a whole-body counter (WBC) at the Third Qinshan Nuclear Power Co. Ltd. (TQNPC) is used. Counting would typically occur immediately after a confirmed or suspected inhalation exposure. The counting geometry would differ as a result of the height of the individual being counted, which would result in over- or underestimated intake(s). In this study, Monte Carlo simulation was applied to evaluate the counting efficiency when performing a lung count using the WBC at the TQNPC. In order to validate the simulated efficiencies for lung counting, the WBC was benchmarked for various lung positions using a 137 Cs source. The results show that the simulated efficiencies are fairly consistent with the measured ones for 137 Cs, with a relative error of 0.289%. For a lung organ simulation, the discrepancy between the calibration phantom and the Chinese reference adult person (170 cm) was within 6% for peak energies ranging from 59.5 keV to 2000 keV. The relative errors vary from 4.63% to 8.41% depending on the person's height and photon energy. Therefore, the simulation technique is effective and practical for lung counting, which is difficult to calibrate using a physical phantom. (authors)

  4. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    International Nuclear Information System (INIS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-01-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)

  5. A method to correct the inherent flaw of the asynchronization direct counting circuit

    International Nuclear Information System (INIS)

    Wang Renfei; Liu Congzhan; Jin Yongjie; Zhang Zhi; Li Yanguo

    2003-01-01

    Crosstalk, which results from the randomness of the timing signal, is an inherent flaw of the asynchronization direct counting circuit and always exists between two adjacent channels. In order to reduce the counting error caused by the crosstalk, the authors propose an effective method to correct this flaw after analysing the mechanism of the crosstalk

  6. Atom-counting in High Resolution Electron Microscopy:TEM or STEM - That's the question.

    Science.gov (United States)

    Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S

    2017-03-01

    In this work, a recently developed quantitative approach based on the principles of detection theory is used in order to determine the possibilities and limitations of High Resolution Scanning Transmission Electron Microscopy (HR STEM) and HR TEM for atom-counting. So far, HR STEM has been shown to be an appropriate imaging mode to count the number of atoms in a projected atomic column. Recently, it has been demonstrated that HR TEM, when using negative spherical aberration imaging, is suitable for atom-counting as well. The capabilities of both imaging techniques are investigated and compared using the probability of error as a criterion. It is shown that for the same incoming electron dose, HR STEM outperforms HR TEM under common practice standards, i.e. when the decision is based on the probability function of the peak intensities in HR TEM and of the scattering cross-sections in HR STEM. If the atom-counting decision is based on the joint probability function of the image pixel values, the dependence of all image pixel intensities as a function of thickness should be known accurately. Under this assumption, the probability of error may decrease significantly for atom-counting in HR TEM and may, in theory, become lower as compared to HR STEM under the predicted optimal experimental settings. However, the commonly used standard for atom-counting in HR STEM leads to a high performance and has been shown to work in practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. An Economical Fast Discriminator for Nuclear Pulse Counting

    International Nuclear Information System (INIS)

    Issarachai, Opas; Punnachaiya, Suvit

    2009-07-01

    Full text: This research work aimed to develop a low-cost fast discriminator with high capability for discriminating nanosecond nuclear pulses. The fast discriminator can be used in association with a fast photon counting system. The design consisted of an ultra-fast voltage comparator based on the ADCMP601 integrated circuit, a monostable multivibrator whose output pulse width is controlled by the propagation delay of logic gates, and a fast-response buffer amplifier. Test results for pulse-height discrimination of 0-5 V nuclear pulses with 20 ns (FWHM) pulse width showed that the correlation coefficient (R²) between discrimination level and pulse height was 0.998, while pulse rates of more than 10 MHz could be counted. The 30 ns logic output pulse was highly stable and could be smoothly driven into a low-impedance 50 Ω load. For pulse signal transmission to the counter, it was also found that termination of the reflected signal must be considered because it may cause pulse counting errors

  8. Some target assay uncertainties for passive neutron coincidence counting

    International Nuclear Information System (INIS)

    Ensslin, N.; Langner, D.G.; Menlove, H.O.; Miller, M.C.; Russo, P.A.

    1990-01-01

    This paper provides some target assay uncertainties for passive neutron coincidence counting of plutonium metal, oxide, mixed oxide, and scrap and waste. The target values are based in part on past user experience and in part on the estimated results from new coincidence counting techniques that are under development. The paper summarizes assay error sources and the new coincidence techniques, and recommends the technique that is likely to yield the lowest assay uncertainty for a given material type. These target assay uncertainties are intended to be useful for NDA instrument selection and assay variance propagation studies for both new and existing facilities. 14 refs., 3 tabs

  9. Standardization of Ga-68 by coincidence measurements, liquid scintillation counting and 4πγ counting.

    Science.gov (United States)

    Roteta, Miguel; Peyres, Virginia; Rodríguez Barquero, Leonor; García-Toraño, Eduardo; Arenillas, Pablo; Balpardo, Christian; Rodrígues, Darío; Llovera, Roberto

    2012-09-01

    The radionuclide (68)Ga is one of the few positron emitters that can be prepared in-house without the use of a cyclotron. It disintegrates to the ground state of (68)Zn partially by positron emission (89.1%) with a maximum energy of 1899.1 keV, and partially by electron capture (10.9%). This nuclide has been standardized in the frame of a cooperation project between the Radionuclide Metrology laboratories from CIEMAT (Spain) and CNEA (Argentina). Measurements involved several techniques: 4πβ-γ coincidences, integral gamma counting and Liquid Scintillation Counting using the triple to double coincidence ratio and the CIEMAT/NIST methods. Given the short half-life of the radionuclide assayed, a direct comparison between results from both laboratories was excluded and a comparison of experimental efficiencies of similar NaI detectors was used instead. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Estimating the standard deviation for 222Rn scintillation counting - a note concerning the paper by Sarmiento et al

    International Nuclear Information System (INIS)

    Key, R.M.

    1977-01-01

    In a recent report Sarmiento et al. (1976) presented a method for estimating the statistical error associated with 222Rn scintillation counting. Because of certain approximations, the method is less accurate than that of an earlier work by Lucas and Woodward (1964). The Sarmiento method and the Lucas method are compared, and the magnitude of errors incurred using the approximations is determined. For counting times greater than 300 minutes, the disadvantage of the slight inaccuracies of the Sarmiento method is outweighed by the advantage of easier calculation. (Auth.)

  11. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    International Nuclear Information System (INIS)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ∼20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers
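
    A minimal version of such a Poisson maximum-likelihood fit, here for an assumed Gaussian peak on a flat background at a few counts per bin, can be written with scipy; the model and starting values are illustrative and not the authors' diagnostic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Hypothetical low-count spectrum: Gaussian line on a flat background
x = np.linspace(-5, 5, 60)

def model(x, amp, mu, sig, bkg):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2) + bkg

true = (8.0, 0.3, 1.0, 1.5)          # few counts per bin on purpose
y = rng.poisson(model(x, *true))

# Poisson negative log-likelihood (dropping the data-only log(y!) term)
def nll(p):
    m = model(x, *p)
    if np.any(m <= 0):
        return np.inf
    return np.sum(m - y * np.log(m))

fit = minimize(nll, x0=(5.0, 0.0, 1.5, 1.0), method="Nelder-Mead")
print("ML estimates (amp, mu, sig, bkg):", np.round(fit.x, 2))

# A chi-square fit would weight residuals by 1/y; bins with y = 0 (common at
# this signal level) make those weights ill-defined, which is one place where
# chi-square minimization breaks down.
```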

  12. Comparison of approximate formulas for decision levels and detection limits for paired counting with the exact results

    International Nuclear Information System (INIS)

    Potter, W.E.

    2005-01-01

    The exact probability density function for paired counting can be expressed in terms of modified Bessel functions of integral order when the expected blank count is known. Exact decision levels and detection limits can be computed in a straightforward manner. For many applications perturbing half-integer corrections to Gaussian distributions yields satisfactory results for decision levels. When there is concern about the uncertainty for the expected value of the blank count, a way to bound the errors of both types using confidence intervals for the expected blank count is discussed. (author)
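
    When the expected blank is known, the net (gross minus blank) count under the null hypothesis follows a Skellam distribution, whose pmf is built from modified Bessel functions of integral order; scipy exposes it directly, which allows an exact decision level to be compared with a Gaussian approximation carrying a half-integer correction. The blank value, the α level and the particular form of the continuity correction below are assumptions for illustration.

```python
from scipy.stats import skellam, norm

# Hypothetical well-known blank: expected blank count b over the counting time
b = 4.0
alpha = 0.05

# With no activity present, (gross - blank) counts follow Skellam(b, b).
exact_Lc = skellam.isf(alpha, b, b)          # exact decision level (net counts)

# Gaussian approximation with a half-integer (continuity) correction
gauss_Lc = norm.isf(alpha) * (2 * b) ** 0.5 + 0.5

print(f"exact decision level:      {exact_Lc:.1f} net counts")
print(f"Gaussian + 0.5 correction: {gauss_Lc:.1f} net counts")
```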

  13. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system

    International Nuclear Information System (INIS)

    Kiyko, V V; Kislov, V I; Ofitserov, E N

    2015-01-01

    In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, in the case where the Hartmann sensor is used as a wavefront sensor, the required number of actuators in the absence of the noise component in the sensor error turns out to be 1.5 – 2.5 times less than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)

  14. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system

    Energy Technology Data Exchange (ETDEWEB)

    Kiyko, V V; Kislov, V I; Ofitserov, E N [A M Prokhorov General Physics Institute, Russian Academy of Sciences, Moscow (Russian Federation)

    2015-08-31

    In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, in the case where the Hartmann sensor is used as a wavefront sensor, the required number of actuators in the absence of the noise component in the sensor error turns out to be 1.5 – 2.5 times less than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)

  15. Development of an automated asbestos counting software based on fluorescence microscopy.

    Science.gov (United States)

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software could already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.

  16. A pulse stacking method of particle counting applied to position sensitive detection

    International Nuclear Information System (INIS)

    Basilier, E.

    1976-03-01

    A position sensitive particle counting system is described. A cyclic readout imaging device serves as an intermediate information buffer. Pulses are allowed to stack in the imager at very high counting rates. Imager noise is completely discriminated to provide very wide dynamic range. The system has been applied to a detector using cascaded microchannel plates. Pulse height spread produced by the plates causes some loss of information. The loss is comparable to the input loss of the plates. The improvement in maximum counting rate is several hundred times over previous systems that do not permit pulse stacking. (Auth.)

  17. Spent fuel bundle counter sequence error manual - BRUCE NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpret the various sequence error messages that can occur and suggests probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included however where experience has indicated that several faults can occur simultaneously

  18. Spent fuel bundle counter sequence error manual - DARLINGTON NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpret the various sequence error messages that can occur and suggests probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included however where experience has indicated that several faults can occur simultaneously

  19. Reduction of weighing errors caused by tritium decay heating

    International Nuclear Information System (INIS)

    Shaw, J.F.

    1978-01-01

    The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%

  20. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Purpose: Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical and results in variation of the measured pixel size; an Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, K = 1 - (0.5)^(D/R) of the total PS counts lay in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic settings (128*128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as Rp increased
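
    A sketch of the ROI-restricted count-centroid described above (hottest pixel as ROI centre, diameter D = 6R) is given below; the simulated point source position, background level and FWHM are assumptions.

```python
import numpy as np

def count_centroid(image, fwhm):
    """Count-centroid of a point source inside a circular ROI of diameter
    6*FWHM centred on the hottest pixel."""
    cy, cx = np.unravel_index(np.argmax(image), image.shape)
    radius = 3.0 * fwhm                      # D = 6 * R, with R the FWHM
    yy, xx = np.indices(image.shape)
    roi = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    counts = np.where(roi, image, 0.0)
    total = counts.sum()
    return (xx * counts).sum() / total, (yy * counts).sum() / total

# Hypothetical 128x128 acquisition: Gaussian point source plus flat background
rng = np.random.default_rng(6)
yy, xx = np.indices((128, 128))
ps = 200.0 * np.exp(-0.5 * (((xx - 70.3) ** 2 + (yy - 40.7) ** 2) / 2.0 ** 2))
image = rng.poisson(ps + 0.5).astype(float)   # background of 0.5 counts/pixel

print("count-centroid (x, y):", count_centroid(image, fwhm=2.355 * 2.0))
```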

  1. Software design of automatic counting system for nuclear track based on mathematical morphology algorithm

    International Nuclear Information System (INIS)

    Pan Yi; Mao Wanchong

    2010-01-01

    The measurement of nuclear track parameters occupies an important position in the field of nuclear technology. However, the traditional manual counting method has many limitations. In recent years, DSP and digital image processing technology have been applied in the nuclear field more and more. In order to reduce the errors of visual measurement inherent in manual counting, an automatic counting system for nuclear tracks based on the DM642 real-time image processing platform is introduced in this article. The system is able to effectively remove interference from the background and noise points, as well as automatically extract nuclear track points by using a mathematical morphology algorithm. (authors)
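
    A simplified, software-only sketch of the morphology-based counting idea (threshold, opening/closing, connected-component count) is shown below using OpenCV on a synthetic track image; it illustrates the technique only and is not the DM642 implementation.

```python
import numpy as np
import cv2

# Synthetic track image: bright elliptical pits on a noisy background (assumption)
rng = np.random.default_rng(7)
img = np.clip(rng.normal(30, 8, (512, 512)), 0, 255).astype(np.uint8)
for _ in range(40):
    center = tuple(int(v) for v in rng.integers(20, 492, size=2))
    cv2.ellipse(img, center, (6, 4), int(rng.integers(0, 180)), 0, 360, 255, -1)

# Threshold, then clean up with mathematical morphology: opening removes
# isolated noise points, closing fills small holes inside tracks.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)

# Each remaining connected component is counted as one track; overlapping
# tracks merge into one component in this simple version.
n_labels, _ = cv2.connectedComponents(clean)
print("tracks counted:", n_labels - 1)   # label 0 is the background
```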

  2. Correction of the counting up number by dead time in detector systems for radiograph images

    International Nuclear Information System (INIS)

    Cerdeira E, A.; Cicuttin, A.; Cerdeira, A.; Estrada, M.; Luca, A. de

    2002-01-01

    The effect of dead time in a particle-counting detection system and the contribution of this error to the final image resolution are analysed. A statistical criterion is given for the optimization of electronic parameters, such as the dead time and the counting memory, which helps in implementing these systems with the minimum characteristics necessary to satisfy the resolution requirements. (Author)
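
    The dead-time counting-loss correction that underlies this kind of analysis is commonly written, for a non-paralyzable detector, as n = m / (1 - m·τ); a small worked sketch with an assumed dead time follows.

```python
def true_rate_nonparalyzable(measured_cps, dead_time_s):
    """Correct an observed count rate for counting losses under the usual
    non-paralyzable dead-time model: n = m / (1 - m * tau)."""
    loss_fraction = measured_cps * dead_time_s
    if loss_fraction >= 1.0:
        raise ValueError("measured rate is not compatible with this dead time")
    return measured_cps / (1.0 - loss_fraction)

# Hypothetical detector: 2 microsecond dead time per accepted event
tau = 2e-6
for m in (1e4, 1e5, 3e5):
    n = true_rate_nonparalyzable(m, tau)
    print(f"measured {m:8.0f} cps -> true {n:8.0f} cps "
          f"({100 * (n - m) / n:.1f}% of events lost)")
```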

  3. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  4. 'Intelligent' approach to radioimmunoassay sample counting employing a microprocessor controlled sample counter

    International Nuclear Information System (INIS)

    Ekins, R.P.; Sufi, S.; Malan, P.G.

    1977-01-01

    The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore particularly imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. The majority of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in the precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. It is the objective of this presentation to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology makes possible, may often enable savings in counter usage of the order of 5-10 fold to be made. (orig.) [de
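
    The stopping criterion implied here, counting only until the Poisson counting error is small relative to the sample-preparation error, can be sketched as follows; the 5% preparation CV and the 0.5 ratio are assumptions for illustration.

```python
import math

def counts_needed(exp_cv, ratio=0.5):
    """Counts at which the Poisson counting CV (1/sqrt(N)) has fallen to
    `ratio` times the non-counting (sample preparation) CV, beyond which
    further counting barely improves the total error."""
    return math.ceil(1.0 / (ratio * exp_cv) ** 2)

exp_cv = 0.05          # 5% preparation error (assumption)
n = counts_needed(exp_cv)
total_cv = math.sqrt(exp_cv**2 + 1.0 / n)
print(f"stop after ~{n} counts; total CV = {100 * total_cv:.1f}% "
      f"(vs a {100 * exp_cv:.1f}% floor from preparation alone)")
```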

  5. On the calculation of errors and choice of the parameters of radioisotope following level meters

    International Nuclear Information System (INIS)

    Kalinin, O.V.; Matveev, V.S.; Khatskevich, M.V.

    1979-01-01

    A method for calculating the errors of radioisotope following level meters is considered, taking into account the nonlinearity of the system control units. The statistical method of analysis of linear control systems and the approximate method of statistical linearization of nonlinear systems are used in calculating the error of a following level meter. Calculation of a nonlinear system by the method of statistical linearization comprises approximation of a nonlinear characteristic by a linearized dependence on the basis of a certain criterion. Dispersion calculations of the output coordinate of a measuring converter are given for different cases of the system input signal. Dependences of the fluctuation error on the system parameters for level meters with proportional and relay control have been plotted on the basis of the given methods. It is stated that the fluctuation error in both cases depends on the time constant of the count-rate meter. The minimum error of the level meter decreases with the growth of the operating counting rate and with the increase of the insensitivity zone width. It is also noted that the parameters of the following level meter should be chosen according to the requirements for measurement error, device reliability and the time of reading fixation

  6. Automatic counting of microglial cell activation and its applications

    Directory of Open Access Journals (Sweden)

    Beatriz I Gallego

    2016-01-01

    Full Text Available Glaucoma is a multifactorial optic neuropathy characterized by the damage and death of retinal ganglion cells. This disease results in vision loss and blindness. Any vision loss resulting from the disease cannot be restored, and there is currently no available cure for glaucoma; however, early detection and treatment could offer neuronal protection and avoid later serious damage to visual function. A full understanding of the etiology of the disease will still require many scientific efforts. Glial activation has been observed in glaucoma, with microglial proliferation being a hallmark of this neurodegenerative disease. A typical project studying these cellular changes involved in glaucoma often needs thousands of images - from several animals - covering different layers and regions of the retina. The gold standard for evaluating them is the manual count. This method requires a large amount of time from specialized personnel; it is a tedious process and prone to human error. We present here a new method to count microglial cells by using a computer algorithm. It counts in one hour the same number of images that a researcher counts in four weeks, with no loss of reliability.

  7. Population size estimation in Yellowstone wolves with error-prone noninvasive microsatellite genotypes.

    Science.gov (United States)

    Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas

    2003-07-01

    Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.

  8. 235U Determination using In-Beam Delayed Neutron Counting Technique at the NRU Reactor

    Energy Technology Data Exchange (ETDEWEB)

Andrews, M. T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bentoumi, G. [Canadian Nuclear Labs., Chalk River, ON (Canada); Corcoran, E. C. [Royal Military College of Canada, Kingston, ON (Canada); Dimayuga, I. [Canadian Nuclear Labs., Chalk River, ON (Canada); Kelly, D. G. [Royal Military College of Canada, Kingston, ON (Canada); Li, L. [Canadian Nuclear Labs., Chalk River, ON (Canada); Sur, B. [Canadian Nuclear Labs., Chalk River, ON (Canada); Rogge, R. B. [Canadian Nuclear Labs., Chalk River, ON (Canada)

    2015-11-17

    This paper describes a collaborative effort that saw the Royal Military College of Canada (RMC)’s delayed neutron and gamma counting apparatus transported to Canadian Nuclear Laboratories (CNL) for use in the neutron beamline at the National Research Universal (NRU) reactor. Samples containing mg quantities of fissile material were re-interrogated, and their delayed neutron emissions measured. This collaboration offers significant advantages to previous delayed neutron research at both CNL and RMC. This paper details the determination of 235U content in enriched uranium via the assay of in-beam delayed neutron magnitudes and temporal behavior. 235U mass was determined with an average absolute error of ± 2.7 %. This error is lower than that obtained at RMCC for the assay of 235U content in aqueous solutions (3.6 %) using delayed neutron counting. Delayed neutron counting has been demonstrated to be a rapid, accurate, and precise method for special nuclear material detection and identification.

  9. A rotation-symmetric, position-sensitive annular detector for maximum counting rates

    International Nuclear Information System (INIS)

    Igel, S.

    1993-12-01

The Germanium Wall is a semiconductor detector system containing up to four annular position-sensitive ΔE-detectors made of high-purity germanium (HPGe), planned to complement the BIG KARL spectrometer in COSY experiments. The first diode of the system, the Quirl-detector, has a two-dimensional position-sensitive structure defined by 200 Archimedes' spirals on each side with opposite orientation. In this way about 40000 pixels are defined. Since each spiral element detects almost the same number of events in an experiment, the whole system can be optimized for maximum counting rates. This paper describes a test setup for a first prototype of the Quirl-detector and the results of test measurements with an α-source. The detector current and the electrical separation of the spiral elements were measured. The splitting of signals across several adjacent elements, due to the spread of charge carriers produced by an incident ionizing particle, was investigated in detail and found to be twice as high as expected from calculations. Its influence on energy and position resolution is discussed. Electronic crosstalk via the signal wires and the influence of noise from the magnetic spectrometer have been tested under experimental conditions. Additionally, vacuum feedthroughs based on printed Kapton foils pressed between Viton seals were fabricated and tested successfully with respect to their vacuum and thermal properties. (orig.)

  10. Error prevention at a radon measurement service laboratory

    International Nuclear Information System (INIS)

    Cohen, B.L.; Cohen, F.

    1989-01-01

    This article describes the steps taken at a high volume counting laboratory to avoid human, instrument, and computer errors. The laboratory analyzes diffusion barrier charcoal adsorption canisters which have been used to test homes and commercial buildings. A series of computer and human cross-checks are utilized to assure that accurate results are reported to the correct client

  11. On the fast response of channel electron multipliers in counting mode operation

    International Nuclear Information System (INIS)

    Belyaevskij, O.A.; Gladyshev, I.L.; Korobochko, Yu.S.; Mineev, V.I.

    1983-01-01

The dependences of the pulse amplitude distribution at the output of channel electron multipliers (CEM) and of the monitoring efficiency on the counting rate are determined at different supply voltages. It is shown that the maximum counting rate of a CEM reaches 6×10^5 s^-1 in short-term and 10^5 s^-1 in long-term operation, using monitoring equipment with an operation threshold of 2.5 mV

  12. Ultra-fast photon counting with a passive quenching silicon photomultiplier in the charge integration regime

    Science.gov (United States)

    Zhang, Guoqing; Lina, Liu

    2018-02-01

An ultra-fast photon counting method is proposed based on the charge integration of the output electrical pulses of passive-quenching silicon photomultipliers (SiPMs). The results of a numerical analysis with actual SiPM parameters show that the maximum photon counting rate of a state-of-the-art passive-quenching SiPM can reach the ~THz level, which is much higher than that of existing photon counting devices. An experimental procedure is proposed based on this method. This photon counting regime of SiPMs is promising in many fields such as large-dynamic-range light power detection.

  13. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs

  14. Integrating chronological uncertainties for annually laminated lake sediments using layer counting, independent chronologies and Bayesian age modelling (Lake Ohau, South Island, New Zealand)

    Science.gov (United States)

    Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher

    2018-05-01

Annually resolved (varved) lake sequences are important palaeoenvironmental archives as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remains an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive an independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology and the uncertainties account for both layer counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore represents a statistically more robust chronology.

  15. Identification of cotton properties to improve yarn count quality by using regression analysis

    International Nuclear Information System (INIS)

    Amin, M.; Ullah, M.; Akbar, A.

    2014-01-01

Identification of the raw material characteristics contributing to yarn count variation was studied by using statistical techniques. Regression analysis is used to meet the objective. Stepwise regression is used for model selection, and the coefficient of determination and mean squared error (MSE) criteria are used to identify the contributing cotton properties for yarn count. The statistical assumptions of normality, autocorrelation and multicollinearity are evaluated by using a probability plot, the Durbin-Watson test and the variance inflation factor (VIF), and then model fitting is carried out. It is found that invisible (INV), nepness (Nep), grayness (RD), cotton trash (TR) and uniformity index (VI) are the main contributing cotton properties for yarn count variation. The results are also verified by a Pareto chart. (author)
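
The regression workflow described above can be illustrated with a brief, hedged sketch: it screens the predictors for multicollinearity with the VIF and fits an ordinary least-squares model, reporting the R^2 and MSE criteria. The data are synthetic placeholders and the paper's stepwise selection procedure is not reproduced; only the column names mirror the abstract.

```python
# Hedged sketch: VIF screening and an OLS fit for yarn-count prediction.
# The cotton-property columns mirror the abstract, but the data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
cols = ["INV", "Nep", "RD", "TR", "VI"]
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=cols)
# synthetic response: yarn count driven by the five properties plus noise
df["yarn_count"] = 30 + df @ np.array([1.5, -2.0, 0.8, 1.2, -0.5]) + rng.normal(0, 1, n)

X = sm.add_constant(df[cols])
for i, name in enumerate(cols, start=1):              # skip the constant term
    print(name, "VIF =", round(variance_inflation_factor(X.values, i), 2))

model = sm.OLS(df["yarn_count"], X).fit()
print("R^2 =", round(model.rsquared, 3), " MSE =", round(model.mse_resid, 3))
```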

  16. Calibration for plutonium-238 lung counting at Mound Laboratory

    International Nuclear Information System (INIS)

    Tomlinson, F.K.

    1976-01-01

    The lung counting facility at Mound Laboratory was calibrated for making plutonium-238 lung deposition assessments in the fall of 1969. Phoswich detectors have been used since that time; however, the technique of calibration has improved considerably. The current technique of calibrating the lung counter is described as well as the method of error analysis and determination of the minimum detectable activity. A Remab hybrid phantom is used along with an attenuation curve which is derived from plutonium loaded lungs and ground beef absorber measurements. The errors that are included in an analysis as well as those that are excluded are described. The method of calculating the minimum detectable activity is also included

  17. Real-time detection and elimination of nonorthogonality error in interference fringe processing

    International Nuclear Information System (INIS)

    Hu Haijiang; Zhang Fengdeng

    2011-01-01

In interference fringe measurement systems, the nonorthogonality error is a main error source that influences the precision and accuracy of the measurement system. The detection and elimination of this error has been an important goal. A novel method that uses only zero-crossing detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. This method can be simply realized by means of digital logic devices, because it invokes neither trigonometric nor inverse trigonometric functions. It can be widely used in the bidirectional subdivision systems of Moiré fringes and other optical instruments.

  18. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (X_m) is an approximation of the true count-centroid (X_p) of the PS, i.e. X_m = X_p + (X_b - X_p)/(1 + R_p/R_b), where R_p is the net counting rate of the PS, X_b the background count-centroid and R_b the background counting rate. To get an accurate measurement, R_p must be very large, which is impractical, resulting in variation of the measured pixel size. An R_p-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (X_b - X_p)/(1 + R_p/R_b) by bringing X_b closer to X_p and by reducing R_b. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 - (0.5)^(D/R) of the total PS counts lies in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent X_p. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128 x 128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean
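
A minimal sketch of the ROI-based count-centroid calculation described above, assuming a 2-D numpy image and an externally estimated FWHM; the array name and FWHM value are illustrative, not taken from the paper.

```python
# Hedged sketch: center a circular ROI on the maximum-count pixel with
# diameter D = 6*R (R = FWHM of the point-source profile), then compute the
# count-weighted centroid inside the ROI.
import numpy as np

def count_centroid(image, fwhm):
    cy, cx = np.unravel_index(np.argmax(image), image.shape)  # ROI center
    radius = 3.0 * fwhm                                       # D = 6*R
    yy, xx = np.indices(image.shape)
    roi = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    counts = np.where(roi, image, 0.0)
    total = counts.sum()
    x_c = (counts * xx).sum() / total
    y_c = (counts * yy).sum() / total
    return x_c, y_c   # centroid in pixel coordinates
```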

  19. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  20. An 'intelligent' approach to radioimmunoassay sample counting employing a microprocessor-controlled sample counter

    International Nuclear Information System (INIS)

    Ekins, R.P.; Sufi, S.; Malan, P.G.

    1978-01-01

The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. Most of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. The objective of the paper is to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology make possible, may often enable savings in counter usage of the order of 5- to 10-fold to be made. (author)

  1. Quantitative Compton suppression spectrometry at elevated counting rates

    International Nuclear Information System (INIS)

    Westphal, G.P.; Joestl, K.; Schroeder, P.; Lauster, R.; Hausch, E.

    1999-01-01

For quantitative Compton suppression spectrometry the decrease of coincidence efficiency with counting rate should be made negligible to avoid a virtual increase of relative peak areas of coincident isomeric transitions with counting rate. To that aim, a separate amplifier and discriminator has been used for each of the eight segments of the active shield of a new well-type Compton suppression spectrometer, together with an optimized, minimum dead-time design of the anticoincidence logic circuitry. Chance coincidence losses in the Compton suppression spectrometer are corrected instrumentally by comparing the chance coincidence rate to the counting rate of the germanium detector in a pulse-counting Busy circuit (G.P. Westphal, J. Rad. Chem. 179 (1994) 55) which is combined with the spectrometer's LFC counting loss correction system. The normally not observable chance coincidence rate is reconstructed from the rates of germanium detector and scintillation detector in an auxiliary coincidence unit, after the destruction of true coincidence by delaying one of the coincidence partners. Quantitative system response has been tested in two-source measurements with a fixed reference source of 60Co of 14 kc/s, and various samples of 137Cs, up to aggregate counting rates of 180 kc/s for the well-type detector, and more than 1400 kc/s for the BGO shield. In these measurements, the net peak areas of the 1173.3 keV line of 60Co remained constant at typical values of 37 000 with and 95 000 without Compton suppression, with maximum deviations from the average of less than 1.5%

  2. The utility of point count surveys to predict wildlife interactions with wind energy facilities: An example focused on golden eagles

    Science.gov (United States)

    Sur, Maitreyi; Belthoff, James R.; Bjerre, Emily R.; Millsap, Brian A.; Katzner, Todd

    2018-01-01

    Wind energy development is rapidly expanding in North America, often accompanied by requirements to survey potential facility locations for existing wildlife. Within the USA, golden eagles (Aquila chrysaetos) are among the most high-profile species of birds that are at risk from wind turbines. To minimize golden eagle fatalities in areas proposed for wind development, modified point count surveys are usually conducted to estimate use by these birds. However, it is not always clear what drives variation in the relationship between on-site point count data and actual use by eagles of a wind energy project footprint. We used existing GPS-GSM telemetry data, collected at 15 min intervals from 13 golden eagles in 2012 and 2013, to explore the relationship between point count data and eagle use of an entire project footprint. To do this, we overlaid the telemetry data on hypothetical project footprints and simulated a variety of point count sampling strategies for those footprints. We compared the time an eagle was found in the sample plots with the time it was found in the project footprint using a metric we called “error due to sampling”. Error due to sampling for individual eagles appeared to be influenced by interactions between the size of the project footprint (20, 40, 90 or 180 km2) and the sampling type (random, systematic or stratified) and was greatest on 90 km2 plots. However, use of random sampling resulted in lowest error due to sampling within intermediate sized plots. In addition sampling intensity and sampling frequency both influenced the effectiveness of point count sampling. Although our work focuses on individual eagles (not the eagle populations typically surveyed in the field), our analysis shows both the utility of simulations to identify specific influences on error and also potential improvements to sampling that consider the context-specific manner that point counts are laid out on the landscape.

  3. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects; error detection codes allow such errors to be detected. There are two classes of error detecting codes - classical codes and security-oriented codes. The classical codes detect a high percentage of errors; however, they have a high probability of missing an error caused by algebraic manipulation. Security-oriented codes, in turn, are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code when errors are injected into the encoding device. The complexity of the encoding function, in turn, plays an important role in security-oriented codes. Encoding functions with less computational complexity and a low probability of masking are the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown in the paper that increasing the function complexity changes the error masking probability distribution. In particular, increasing computational complexity decreases the difference between the maximum and average value of the error masking probability. Our results have shown that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach how to measure the error masking
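
To illustrate the quantity under study, the sketch below brute-forces the error-masking probability Q(e) for a toy systematic nonlinear encoding over GF(2): an error e is said to be masked for message x when the corrupted word f(x) XOR e is again a valid codeword. The message length and check function are assumptions for illustration, not a code from the paper.

```python
# Hedged sketch: brute-force error-masking probability of a toy nonlinear code.
from itertools import product

K = 4  # message bits (assumption)

def g(bits):
    # toy nonlinear check bit: XOR of products of two bit pairs
    return ((bits[0] & bits[1]) ^ (bits[2] & bits[3]),)

def encode(bits):
    return bits + g(bits)          # systematic codeword: message + check bit

codewords = {encode(x) for x in product((0, 1), repeat=K)}

def masking_probability(error):
    """Fraction of messages for which (codeword XOR error) is again a codeword."""
    masked = 0
    for x in product((0, 1), repeat=K):
        corrupted = tuple(ci ^ ei for ci, ei in zip(encode(x), error))
        if corrupted in codewords:
            masked += 1
    return masked / 2 ** K

# distribution of Q(e) over all nonzero error patterns
probs = [masking_probability(e)
         for e in product((0, 1), repeat=K + 1) if any(e)]
print("max Q(e) =", max(probs), " mean Q(e) =", sum(probs) / len(probs))
```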

  4. Summing coincidence errors using Eu-152 lungs to calibrate a lung-counting system: are they significant?

    International Nuclear Information System (INIS)

    Kramer, Gary H.; Lynch, Timothy P.; Lopez, Maria A.; Hauck, Brian

    2004-01-01

    The use of a lung phantom containing 152Eu/241Am activity can provide a sufficient number of energy lines to generate an efficiency calibration for the in vivo measurements of radioactive materials in the lungs. However, due to the number of energy lines associated with 152Eu, coincidence summing occurs and can present a problem when using such a phantom for calibrating lung-counting systems. A Summing Peak Effect Study was conducted at three laboratories to determine the effect of using an efficiency calibration based on a 152Eu/241Am lung phantom. The measurement data at all three laboratories showed the presence of sum peaks. However, two of the three laboratories found only small biases (<5%) when using the 152Eu/241Am calibration. The third facility noted a 25% to 30% positive bias in the 140-keV to 190-keV energy range that prevents the use of the 152Eu/241Am lung phantom for routine calibrations. Although manufactured by different vendors, the three facilities use similar types of detectors (38 cm2 by 25 mm thick or 38 cm2 by 30 mm thick) for counting. These study results underscore the need to evaluate the coincidence summing effect when using a nuclide such as 152Eu for the calibration of low energy lung counting systems

  5. Estimation of Species Identification Error: Implications for Raptor Migration Counts and Trend Estimation

    Science.gov (United States)

    J.M. Hull; A.M. Fish; J.J. Keane; S.R. Mori; B.J Sacks; A.C. Hull

    2010-01-01

    One of the primary assumptions associated with many wildlife and population trend studies is that target species are correctly identified. This assumption may not always be valid, particularly for species similar in appearance to co-occurring species. We examined size overlap and identification error rates among Cooper's (Accipiter cooperii...

  6. Validation of the ADAMO Care Watch for step counting in older adults.

    Science.gov (United States)

    Magistro, Daniele; Brustio, Paolo Riccardo; Ivaldi, Marco; Esliger, Dale Winfield; Zecca, Massimiliano; Rainoldi, Alberto; Boccia, Gennaro

    2018-01-01

Accurate measurement devices are required to objectively quantify physical activity. Wearable activity monitors, such as pedometers, may serve as affordable and feasible instruments for measuring physical activity levels in older adults during their normal activities of daily living. Currently, few of the available accelerometer-based step-counting devices have been shown to be accurate at slow walking speeds; appropriate devices tailored for the slow ambulation typical of older adults are therefore still lacking. This study aimed to assess the validity of step counting using the pedometer function of the ADAMO Care Watch, which contains an embedded algorithm for measuring physical activity in older adults. Twenty older adults aged ≥65 years (mean ± SD, 75±7 years; range, 68-91) and 20 young adults (25±5 years, range 20-40) wore a care watch on each wrist and performed a number of randomly ordered tasks: walking at slow, normal and fast self-paced speeds; a Timed Up and Go test (TUG); a step test; and ascending/descending stairs. The criterion measure was the actual number of steps observed, counted with a manual tally counter. Absolute percentage error scores, intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess validity. The ADAMO Care Watch demonstrated high validity at slow and normal speeds (range 0.5-1.5 m/s), showing an absolute error from 1.3% to 1.9% in the older adult group and from 0.7% to 2.7% in the young adult group. The percentage error for the 30-metre walking tasks increased with faster pace in both the young adult (17%) and older adult (6%) groups. In the TUG test, there was less error in the steps recorded for the older adults (1.3% to 2.2%) than for the young adults (6.6% to 7.2%). For the total sample, the ICCs for the ADAMO Care Watch for the 30-metre walking tasks at each speed and for the TUG test ranged from 0.931 to 0.985. These findings provide evidence that the ADAMO Care Watch demonstrated highly accurate
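
Two of the agreement measures named above can be sketched in a few lines: the absolute percentage error against the manual tally and the Bland-Altman bias with 95% limits of agreement. The step counts below are illustrative assumptions, not the study data.

```python
# Hedged sketch: absolute percentage error and Bland-Altman summary statistics.
import numpy as np

observed = np.array([412, 398, 405, 420, 388], dtype=float)   # manual tally
device   = np.array([408, 401, 399, 424, 380], dtype=float)   # device output

ape = np.abs(device - observed) / observed * 100.0
print("mean absolute percentage error: %.2f%%" % ape.mean())

diff = device - observed
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)         # 95% limits of agreement around the bias
print("Bland-Altman bias %.2f steps, limits of agreement ±%.2f" % (bias, loa))
```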

  7. Standardization of Ga-68 by coincidence measurements, liquid scintillation counting and 4πγ counting

    International Nuclear Information System (INIS)

    Roteta, Miguel; Peyres, Virginia; Rodríguez Barquero, Leonor; García-Toraño, Eduardo; Arenillas, Pablo; Balpardo, Christian; Rodrígues, Darío; Llovera, Roberto

    2012-01-01

The radionuclide 68Ga is one of the few positron emitters that can be prepared in-house without the use of a cyclotron. It disintegrates to the ground state of 68Zn partially by positron emission (89.1%) with a maximum energy of 1899.1 keV, and partially by electron capture (10.9%). This nuclide has been standardized in the frame of a cooperation project between the Radionuclide Metrology laboratories from CIEMAT (Spain) and CNEA (Argentina). Measurements involved several techniques: 4πβ−γ coincidences, integral gamma counting and Liquid Scintillation Counting using the triple to double coincidence ratio and the CIEMAT/NIST methods. Given the short half-life of the radionuclide assayed, a direct comparison between results from both laboratories was excluded and a comparison of experimental efficiencies of similar NaI detectors was used instead. - Highlights: ► We standardized the positron emitter Ga-68 in a bilateral cooperation. ► We used several techniques, as coincidence, integral gamma and liquid scintillation. ► An efficiency comparison replaced a direct comparison of reference materials.

  8. Data analysis in emission tomography using emission-count posteriors

    International Nuclear Information System (INIS)

    Sitek, Arkadiusz

    2012-01-01

    A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography. (paper)

  9. Data analysis in emission tomography using emission-count posteriors

    Science.gov (United States)

    Sitek, Arkadiusz

    2012-11-01

    A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.

  10. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model is put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the law of error distribution over the whole workspace was discussed and the maximum-error position of the system was found. Finally, the sensitivities of the error parameters were analyzed at the maximum-error position and accuracy synthesis was conducted using the Monte Carlo method. Taking the error sensitivity analysis into account, the accuracy of the main parts was allocated. Results show that the probability of the maximal volume error being less than 0.05 mm improved from 0.6592 for the old scheme to 0.7021 for the new scheme, so the precision of the system was improved noticeably. The model can be used for the error analysis and accuracy synthesis of complex multi-embranchment motion-chain systems and to improve the manufacturing precision of such systems.
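
A hedged sketch of Monte Carlo accuracy synthesis along these lines: sample each error parameter from an assumed tolerance distribution, propagate the samples through a placeholder volumetric-error function, and estimate the probability that the volume error stays below 0.05 mm. The error function and tolerance values are illustrative, not the machine model from the paper.

```python
# Hedged sketch: Monte Carlo estimate of P(volume error < 0.05 mm).
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# hypothetical error parameters (mm), each with its own tolerance spread
e1 = rng.normal(0.0, 0.010, N)   # e.g. guideway straightness
e2 = rng.normal(0.0, 0.015, N)   # e.g. spindle runout
e3 = rng.normal(0.0, 0.008, N)   # e.g. thermal drift

def volume_error(e1, e2, e3):
    # placeholder combination; the real mapping comes from the machine's
    # multi-rigid-body spatial error model
    return np.sqrt(e1**2 + e2**2 + e3**2)

err = volume_error(e1, e2, e3)
print("P(error < 0.05 mm) =", np.mean(err < 0.05))
```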

  11. Errors and mistakes in the traditional optimum design of experiments on exponential absorption

    International Nuclear Information System (INIS)

    Burge, E.J.

    1977-01-01

The treatment of statistical errors in absorption experiments using particle counters, given by Rose and Shapiro (1948), is shown to be incorrect for non-zero background counts. For the simplest case of only one absorber thickness, revised conditions are computed for the optimum geometry and the best apportionment of counting times for the incident and transmitted beams for a wide range of relative backgrounds (0, 10^-5 to 10^2). The two geometries of Rose and Shapiro are treated: (I) beam area fixed, absorber thickness varied, and (II) beam area and absorber thickness both varied, but with effective volume of absorber constant. For case (I) the new calculated errors in the absorption coefficients are shown to be about 0.7 of the Rose and Shapiro values for the largest background, and for case (II) about 0.4. The corresponding fractional times for background counts are (I) 0.7 and (II) 0.07 of those given by Rose and Shapiro. For small backgrounds the differences are negligible. Revised values are also computed for the sensitivity of the accuracy to deviations from optimum transmission. (Auth.)
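
The flavour of the time-apportionment problem can be sketched for the simplest, zero-background case (the corrected formulas of the paper are not reproduced): with Poisson counting, the variance of the fitted absorption coefficient scales as 1/(R0*t0) + 1/(R*t), and minimizing it under a fixed total time T gives t0/t = sqrt(R/R0). The count rates and total time below are assumptions.

```python
# Hedged sketch: optimal split of counting time between the incident (R0) and
# transmitted (R) beams, zero-background case.
import numpy as np
from scipy.optimize import minimize_scalar

R0, R = 5000.0, 800.0      # incident and transmitted count rates (1/s), assumed
T = 600.0                  # total available counting time (s), assumed

def variance_proxy(t0):
    # proportional to var(mu_hat) = (1/x^2) * (1/(R0*t0) + 1/(R*(T - t0)))
    t = T - t0
    return 1.0 / (R0 * t0) + 1.0 / (R * t)

res = minimize_scalar(variance_proxy, bounds=(1.0, T - 1.0), method="bounded")
print("numerical optimum t0 =", round(res.x, 2), "s")
print("analytic optimum t0  =", round(T / (1.0 + np.sqrt(R0 / R)), 2), "s")
```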

  12. Study of principle error sources in gamma spectrometry. Application to cross sections measurement

    International Nuclear Information System (INIS)

    Majah, M. Ibn.

    1985-01-01

The principal error sources in gamma spectrometry have been studied with the aim of measuring cross sections with high precision. Three error sources have been studied: dead time and pile-up, which depend on the counting rate, and the coincidence effect, which depends on the disintegration scheme of the radionuclide in question. A constant-frequency pulse generator has been used to correct the counting losses due to dead time and pile-up for both long and short half-lives. The loss due to the coincidence effect can reach 25% or more, depending on the disintegration scheme and on the source-detector distance. After establishing the correction formula and verifying its validity for four examples - iron-56, scandium-48, antimony-120 and gold-196m - it was applied to the measurement of cross sections of nuclear reactions leading to products with long half-lives, which require counting at a short source-detector distance and thus correction of the losses due to dead time, pile-up and the coincidence effect. 16 refs., 45 figs., 25 tabs. (author)
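
The pulser correction mentioned above can be sketched as follows: a constant-frequency generator injects a known number of pulses into the spectrometry chain, and the ratio of injected to recovered pulser counts rescales every peak area for dead-time and pile-up losses. All numbers are illustrative assumptions.

```python
# Hedged sketch: dead-time/pile-up correction with a constant-frequency pulser.
pulser_rate = 50.0            # Hz, generator frequency (assumed)
acq_time = 1000.0             # s, acquisition time (assumed)
pulser_injected = pulser_rate * acq_time
pulser_recorded = 46500.0     # pulser peak area recovered from the spectrum

correction = pulser_injected / pulser_recorded     # > 1 when losses occur

peak_area_observed = 12800.0                       # gamma peak of interest
peak_area_corrected = peak_area_observed * correction
print("corrected peak area:", round(peak_area_corrected, 1))
```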

  13. Estimating and comparing microbial diversity in the presence of sequencing errors

    Science.gov (United States)

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This
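
For reference, the Hill numbers referred to above can be computed from a vector of abundance counts as in the sketch below (q = 0 gives taxa richness, q = 1 the exponential of Shannon entropy, q = 2 the inverse Simpson index). The abundance vector is illustrative, and the paper's singleton estimator and bias corrections are not reproduced here.

```python
# Hedged sketch: empirical Hill numbers of order q from taxa abundance counts.
import numpy as np

def hill_number(counts, q):
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()            # relative abundances
    if q == 1:                                       # limit q -> 1: exp(Shannon)
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

abundances = [120, 85, 40, 12, 5, 3, 1, 1, 1]        # illustrative counts
for q in (0, 1, 2):
    print("q =", q, " Hill number =", round(hill_number(abundances, q), 3))
```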

  14. Pharmaceutical Pill Counting and Inspection Using a Capacitive Sensor

    Directory of Open Access Journals (Sweden)

    Ganesan LETCHUMANAN

    2008-01-01

    Full Text Available A capacitive sensor for high-speed counting and inspection of pharmaceutical products is proposed and evaluated. The sensor is based on a patented Electrostatic Field Sensor (EFS device, previously developed by Sparc Systems Limited. However, the sensor head proposed in this work has a significantly different geometry and has been designed with a rectangular inspection aperture of 160mm × 21mm, which best meets applications where a larger count throughput is required with a single sensor. Finite element modelling has been used to simulate the electrostatic fields generated within the sensor, and as a design tool for optimising the sensor head configuration. The actual and simulated performance of the sensor is compared and analysed in terms of the sensor performance at discriminating between damaged products or detection of miscount errors.

  15. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with a variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the mean square deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
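
A minimal sketch of a fixed-step MCC adaptive filter for system identification is given below; the Gaussian kernel of the error scales each update and thereby suppresses impulsive outliers. The variable-step-size rule proposed in the paper is not reproduced, and the kernel width, step size, noise model and unknown plant are assumptions.

```python
# Hedged sketch: fixed-step MCC adaptive filter (system identification).
import numpy as np

rng = np.random.default_rng(1)
N, L = 5000, 8
w_true = rng.normal(size=L)            # unknown system to identify (assumed)
x = rng.normal(size=N)                 # input signal

w = np.zeros(L)
mu, sigma = 0.01, 1.0                  # step size and kernel width (assumed)

for n in range(L, N):
    u = x[n - L:n][::-1]               # regressor, most recent sample first
    d = w_true @ u + 0.01 * rng.standard_t(df=1.5)   # impulsive observation noise
    e = d - w @ u
    # MCC update: the Gaussian kernel exp(-e^2/(2*sigma^2)) damps large errors
    w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u

print("weight error norm:", np.linalg.norm(w - w_true))
```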

  16. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  17. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

To estimate the subcriticality of neutron multiplication factor in a fissile system, 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of neutron multiplication factor by correlating measured values with the corresponding calculated ones. This method was applied to the source multiplication and to the pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of measured neutron count rate distributions from the calculated ones estimates the accuracy of calculated k_eff. In the pulse neutron method, the calculation errors of prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  18. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield can be disassembled into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  19. Quantum error correction of continuous-variable states against Gaussian noise

    Energy Technology Data Exchange (ETDEWEB)

    Ralph, T. C. [Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, St Lucia, Queensland 4072 (Australia)

    2011-08-15

    We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

  20. Calibration of a neutron log in partially saturated media. Part II. Error analysis

    International Nuclear Information System (INIS)

    Hearst, J.R.; Kasameyer, P.W.; Dreiling, L.A.

    1981-01-01

Four sources of error (uncertainty) are studied in the water content obtained from neutron logs calibrated in partially saturated media, for holes up to 3 m. For this calibration a special facility was built and an algorithm for a commercial epithermal neutron log was developed that obtains water content from count rate, bulk density, and the gap between the neutron sonde and the borehole wall. The algorithm contained errors due to the calibration and lack of fit, while the field measurements included uncertainties in the count rate (caused by statistics and a short time constant), gap, and density. There can also be inhomogeneity in the material surrounding the borehole. Under normal field conditions the hole-size-corrected water content obtained from such neutron logs can have an uncertainty as large as 15% of its value

  1. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports that more than half of the shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of the 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Spent fuel bundle counter sequence error manual - RAPPS (200 MW) NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpret the various sequence error messages that can occur and suggests probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included however where experience has indicated that several faults can occur simultaneously

  3. Spent fuel bundle counter sequence error manual - KANUPP (125 MW) NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message may contain adequate information to determine the cause of the message. This manual provides a guide to interpret the various sequence error messages that can occur and suggests probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included however where experience has indicated that several faults can occur simultaneously

  4. English word frequency and recognition in bilinguals: Inter-corpus comparison and error analysis.

    Science.gov (United States)

    Shi, Lu-Feng

    2015-01-01

    This study is the second of a two-part investigation on lexical effects on bilinguals' performance on a clinical English word recognition test. Focus is on word-frequency effects using counts provided by four corpora. Frequency of occurrence was obtained for 200 NU-6 words from the Hoosier mental lexicon (HML) and three contemporary corpora, American National Corpora, Hyperspace analogue to language (HAL), and SUBTLEX(US). Correlation analysis was performed between word frequency and error rate. Ten monolinguals and 30 bilinguals participated. Bilinguals were further grouped according to their age of English acquisition and length of schooling/working in English. Word frequency significantly affected word recognition in bilinguals who acquired English late and had limited schooling/working in English. When making errors, bilinguals tended to replace the target word with a word of a higher frequency. Overall, the newer corpora outperformed the HML in predicting error rate. Frequency counts provided by contemporary corpora predict bilinguals' recognition of English monosyllabic words. Word frequency also helps explain top replacement words for misrecognized targets. Word-frequency effects are especially prominent for bilinguals foreign born and educated.
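
The corpus-comparison step can be sketched as a per-corpus correlation between log word frequency and per-word error rate. All data below are synthetic placeholders, and the choice of Spearman correlation is an assumption made here because the exact correlation statistic is not specified in the abstract.

```python
# Hedged sketch: correlate log word frequency with word recognition error rate
# for several corpora and compare the strength of the association.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_words = 200
latent = rng.lognormal(4.0, 1.5, n_words)          # latent "true" frequency
error_rate = np.clip(0.6 - 0.1 * np.log10(latent + 1)
                     + rng.normal(0, 0.05, n_words), 0.0, 1.0)

# each corpus is modelled as a noisy view of the latent frequency (placeholder)
corpora = {name: latent * rng.lognormal(0.0, s, n_words)
           for name, s in [("HML", 1.0), ("ANC", 0.7), ("HAL", 0.5), ("SUBTLEXus", 0.3)]}

for name, freq in corpora.items():
    rho, p = spearmanr(np.log10(freq + 1.0), error_rate)
    print(f"{name}: Spearman rho = {rho:+.3f} (p = {p:.3g})")
```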

  5. Fluorescence decay data analysis correcting for detector pulse pile-up at very high count rates

    Science.gov (United States)

    Patting, Matthias; Reisch, Paja; Sackrow, Marcus; Dowler, Rhys; Koenig, Marcelle; Wahl, Michael

    2018-03-01

    Using time-correlated single photon counting for the purpose of fluorescence lifetime measurements is usually limited in speed due to pile-up. With modern instrumentation, this limitation can be lifted significantly, but some artifacts due to frequent merging of closely spaced detector pulses (detector pulse pile-up) remain an issue to be addressed. We propose a data analysis method correcting for this type of artifact and the resulting systematic errors. It physically models the photon losses due to detector pulse pile-up and incorporates the loss in the decay fit model employed to obtain fluorescence lifetimes and relative amplitudes of the decay components. Comparison of results with and without this correction shows a significant reduction of systematic errors at count rates approaching the excitation rate. This allows quantitatively accurate fluorescence lifetime imaging at very high frame rates.

  6. CHARACTERIZATION AND AUTOMATIC COUNTING OF F.I.S.H. SIGNALS IN 3-D TISSUE IMAGES

    Directory of Open Access Journals (Sweden)

    Umesh PS Adiga

    2011-05-01

Full Text Available The evaluation of malignancy-related features often helps to determine the prognoses for patients with carcinomas. One technique which is becoming increasingly important for assessing such prognostic features is Fluorescence in situ Hybridization (FISH). By counting the number of FISH signals in a stack of 2-D images of a tumor (which together constitute the 3-D image volume), it is possible to determine whether there has been any loss or gain of the target DNA sequences and thereby evaluate the stage of the disease. However, visual counting of the FISH signals in this way is a tedious, fatiguing and time-consuming task. Therefore, we have developed an automated system for the quantitative evaluation of FISH signals. We present and discuss the implementation of an image processing module that segments, characterizes and counts the FISH signals in 3-D images of thick prostate tumor tissue specimens. Possible errors in the automatic counting of signals are listed and ways to circumvent these errors are described. We define a feature vector for a FISH signal and describe how we have used the weighted feature vector to discriminate specific signals from noise artifacts. In addition, we present a method which allows overlapping FISH signals to be distinguished by fitting a local Gaussian model around the intensity profile and studying the feature vector of each model. Our complete image processing module overcomes the problems of manual counting of FISH signals in 3-D images of tumor specimens, thereby providing improved diagnostic and prognostic capability in qualitative diagnostic pathology.
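
Only the counting core of such a module is sketched below: threshold the 3-D stack and count size-filtered connected components as candidate FISH signals. The weighted feature-vector scoring and the local Gaussian splitting of overlapping signals described above are not reproduced, and the threshold and size limits are assumptions.

```python
# Hedged sketch: count bright, size-filtered connected components in a 3-D stack.
import numpy as np
from scipy import ndimage

def count_fish_signals(stack, threshold, min_voxels=5, max_voxels=500):
    """stack: 3-D array (z, y, x) of fluorescence intensities."""
    binary = stack > threshold
    labels, n = ndimage.label(binary)       # face-connected components in 3-D
    if n == 0:
        return 0
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    keep = (sizes >= min_voxels) & (sizes <= max_voxels)
    return int(keep.sum())                  # number of accepted candidate signals

# toy demonstration on a synthetic noisy stack with two bright blobs
rng = np.random.default_rng(3)
stack = rng.normal(0.0, 1.0, size=(20, 64, 64))
stack[5:8, 10:13, 10:13] += 8.0
stack[12:15, 40:43, 20:23] += 8.0
print(count_fish_signals(stack, threshold=4.0))   # expected: 2
```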

  7. PVO VENUS ONMS BROWSE SUPRTHRML ION MAX COUNT RATE 12S V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set contains PVO Neutral Mass Spectrometer superthermal ion data. Each record contains the maximum count rate per second in a 12 second period beginning...

  8. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  9. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Moricciani, D. [INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  10. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
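    The scaling stated in the last sentences of this abstract can be written compactly as below; the symbols (C_max for the compression ratio, δU/U₀ for the relative velocity-modulation error, ΔE_b/E_b for the relative intrinsic energy spread) are notational choices made here for readability, not taken from the paper.

```latex
% Hedged restatement of the quoted scaling, valid in the large-error regime
% \delta U \gg \Delta E_b described in the abstract.
C_{\max} \;\propto\; \left(\frac{\delta U}{U_0}\cdot\frac{\Delta E_b}{E_b}\right)^{-1/2}
```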

  11. Count-based left ventricular volume determination utilizing a left posterior oblique view for attenuation correction

    International Nuclear Information System (INIS)

    Rabinovitch, M.A.; Kalff, V.; Koral, K.

    1984-01-01

    This study aimed to determine the inherent error of the left ventricular volume measurement from the gated equilibrium blood pool scintigram utilizing the count-based technique. The study population consisted of 26 patients who had undergone biplane contrast ventriculography. The patients were imaged with a parallel-hole collimator in the left anterior oblique position showing the septum to best advantage. A reference blood sample was counted and radionuclide volumes calculated without correction for attenuation. Attenuation-corrected volumes were derived with the factor 1/e^(-μd), where d = distance from skin marker to center of the left ventricle in the orthogonal left posterior oblique view and μ = linear attenuation coefficient. A series of μ values from 0.08 to 0.15 cm⁻¹ was evaluated. The tightest 95% confidence limits achieved for an end-diastolic 150-ml ventricle were ±44 ml, and for an end-systolic 75-ml ventricle ±32 ml. In view of the magnitude of inherent error, the count-based volume measurement may be more suitable for group analyses and in cases in which an individual patient serves as his own control.
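    As a numerical illustration only (the study itself reports patient data, not code), the attenuation correction described above can be sketched as follows; the count values, depth and μ below are hypothetical placeholders.

```python
import math

def attenuation_corrected_volume(lv_counts, counts_per_ml_blood, mu_cm_inv, depth_cm):
    """Count-based LV volume with the simple depth/attenuation correction.

    lv_counts           : background-corrected counts in the LV region of interest
    counts_per_ml_blood : counts per mL from the reference blood sample
    mu_cm_inv           : assumed linear attenuation coefficient (cm^-1)
    depth_cm            : skin-marker-to-LV-center distance from the LPO view (cm)
    """
    uncorrected_ml = lv_counts / counts_per_ml_blood
    # Counts from depth d are attenuated by exp(-mu*d); dividing by that factor
    # (i.e., multiplying by 1/e^(-mu*d)) restores the unattenuated estimate.
    return uncorrected_ml / math.exp(-mu_cm_inv * depth_cm)

# Hypothetical example sweeping the mu range quoted in the abstract (0.08-0.15 cm^-1).
for mu in (0.08, 0.10, 0.12, 0.15):
    print(f"mu = {mu:.2f} cm^-1 -> {attenuation_corrected_volume(60000, 800, mu, 8.0):.0f} mL")
```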

  12. EcoCount

    Directory of Open Access Journals (Sweden)

    Phillip P. Allen

    2014-05-01

    Full Text Available Techniques that analyze biological remains from sediment sequences for environmental reconstructions are well established and widely used. Yet, identifying, counting, and recording biological evidence such as pollen grains remain a highly skilled, demanding, and time-consuming task. Standard procedure requires the classification and recording of between 300 and 500 pollen grains from each representative sample. Recording the data from a pollen count requires significant effort and focused resources from the palynologist. However, when an adaptation to the recording procedure is utilized, efficiency and time economy improve. We describe EcoCount, which represents a development in environmental data recording procedure. EcoCount is a voice activated fully customizable digital count sheet that allows the investigator to continuously interact with a field of view during the data recording. Continuous viewing allows the palynologist the opportunity to remain engaged with the essential task, identification, for longer, making pollen counting more efficient and economical. EcoCount is a versatile software package that can be used to record a variety of environmental evidence and can be installed onto different computer platforms, making the adoption by users and laboratories simple and inexpensive. The user-friendly format of EcoCount allows any novice to be competent and functional in a very short time.

  13. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of supplement 1 of GUM. (paper)

  14. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  15. Centroid and full-width at half maximum uncertainties of histogrammed data with an underlying Gaussian distribution -- The moments method

    International Nuclear Information System (INIS)

    Valentine, J.D.; Rana, A.E.

    1996-01-01

    The effect of approximating a continuous Gaussian distribution with histogrammed data is studied. The expressions for theoretical uncertainties in the centroid and full-width at half maximum (FWHM), as determined by calculation of moments, are derived using the error propagation method for a histogrammed Gaussian distribution. The results are compared with the corresponding pseudo-experimental uncertainties for computer-generated histogrammed Gaussian peaks to demonstrate the effect of binning the data. It is shown that increasing the number of bins in the histogram improves the continuous distribution approximation. For example, a FWHM spanning ≥9 and ≥12 bins is needed to reduce the pseudo-experimental standard deviation of the FWHM to within 5% and 1%, respectively, of the theoretical value for a peak containing 10,000 counts. In addition, the uncertainties in the centroid and FWHM as a function of peak area are studied. Finally, Sheppard's correction is applied to partially correct for the binning effect.
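    A minimal sketch of the moments calculation discussed above, for a synthetic histogrammed Gaussian peak; the peak parameters and bin width are arbitrary, and the FWHM is obtained from the second moment via 2·sqrt(2·ln 2)·σ with Sheppard's bin-width correction applied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic peak: 10,000 counts, centroid 100, FWHM ~ 10 bins of unit width.
sigma_true = 10.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
samples = rng.normal(100.0, sigma_true, size=10_000)

bin_width = 1.0
edges = np.arange(60.0, 140.0 + bin_width, bin_width)
counts, edges = np.histogram(samples, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])

# Centroid and variance by the moments method.
total = counts.sum()
centroid = np.sum(centers * counts) / total
variance = np.sum(counts * (centers - centroid) ** 2) / total

# Sheppard's correction partially removes the bias introduced by binning.
variance -= bin_width ** 2 / 12.0

fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(variance)
print(f"centroid = {centroid:.3f}, FWHM = {fwhm:.3f} (true FWHM = 10.0)")
```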

  16. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained with these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  17. RBC count

    Science.gov (United States)

    Fragment of a consumer health reference page on the RBC count, listing causes of abnormal values (including RBC destruction (hemolysis) due to transfusion, blood vessel injury, or other cause; leukemia; malnutrition) and alternative names: erythrocyte count; red blood cell count; anemia - RBC count.

  18. A constant velocity Moessbauer spectrometer free of long-term instrumental drifts in the count rate

    International Nuclear Information System (INIS)

    Sarma, P.R.; Sharma, A.K.; Tripathi, K.C.

    1979-01-01

    Two new control circuits to be used with a constant velocity Moessbauer spectrometer with a loud-speaker drive have been described. The wave-forms generated in the circuits are of the stair-case type instead of the usual square wave-form, so that in each oscillation of the source it remains stationary for a fraction of the time-period. The gamma-rays counted during this period are monitored along with the positive and negative velocity counts and are used to correct any fluctuation in the count rate by feeding these pulses into the timer. The associated logic circuits have been described and the statistical errors involved in the circuits have been computed. (auth.)

  19. Error studies of Halbach Magnets

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-03-02

    These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]. However, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried; if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires are tried. By “1e-4 accuracy”, it is meant that the FOM defined by √(Σ_{n≥sextupole}(a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R = 23 mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires do not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slowly converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.
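    For readers unfamiliar with the figure of merit quoted above, the sketch below writes out the quadrature sum explicitly; it assumes the multipole coefficients a_n, b_n are already evaluated at the nominal maximum beam radius and expressed in 1e-4 units of the main field, and the example coefficients are invented.

```python
import math

def halbach_fom(multipoles, first_index=3):
    """FOM = sqrt(sum over n >= sextupole of a_n^2 + b_n^2).

    multipoles : dict mapping harmonic index n -> (a_n, b_n), assumed to be
                 normalised at the reference radius (R = 23 mm here) and given
                 in 1e-4 "units" of the main field component.
    """
    return math.sqrt(sum(a * a + b * b
                         for n, (a, b) in multipoles.items()
                         if n >= first_index))

# Invented error multipoles for one randomised magnet (units of 1e-4).
example = {3: (0.4, -0.2), 4: (0.1, 0.3), 5: (-0.05, 0.05)}
fom = halbach_fom(example)
print(f"FOM = {fom:.2f} units ->",
      "meets the 1e-4 goal" if fom < 1.0 else "needs more shim wires")
```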

  20. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  1. Digital Counts of Maize Plants by Unmanned Aerial Vehicles (UAVs)

    Directory of Open Access Journals (Sweden)

    Friederike Gnädinger

    2017-05-01

    Full Text Available Precision phenotyping, especially the use of image analysis, allows researchers to gain information on plant properties and plant health. Aerial image detection with unmanned aerial vehicles (UAVs) provides new opportunities in precision farming and precision phenotyping. Precision farming has created a critical need for spatial data on plant density. The plant number reflects not only the final field emergence but also allows a more precise assessment of the final yield parameters. The aim of this work is to advance UAV use and image analysis as a possible high-throughput phenotyping technique. In this study, four different maize cultivars were planted in plots with different seeding systems (in rows and equidistantly spaced) and different nitrogen fertilization levels (applied at 50, 150 and 250 kg N/ha). The experimental field, encompassing 96 plots, was overflown at a 50-m height with an octocopter equipped with a 10-megapixel camera taking a picture every 5 s. Images were recorded between BBCH 13–15 (a scale identifying the phenological development stage of a plant; here, the 3- to 5-leaf stage), when the color of young leaves differs from that of older leaves. Close correlations up to R² = 0.89 were found between in situ and image-based plant counts when a decorrelation stretch contrast enhancement procedure, which enhances color differences in the images, was applied. On average, the error between visually and digitally counted plants was ≤5%. Ground cover, as determined by analyzing green pixels, ranged between 76% and 83% at these stages. However, the correlation between ground cover and digitally counted plants was very low. The presence of weeds and blurry effects on the images represent possible sources of error in counting plants. In conclusion, the final field emergence of maize can rapidly be assessed and allows more precise assessment of the final yield parameters. The use of UAVs and image processing has the potential to
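    A rough numpy sketch of the two image-analysis steps named in the abstract, decorrelation-stretch contrast enhancement and green-pixel analysis; the whitening/rescaling recipe and the greenness threshold are generic choices assumed here, not the authors' exact pipeline.

```python
import numpy as np

def decorrelation_stretch(img, target_sigma=50.0):
    """Basic decorrelation stretch for an (H, W, 3) RGB image in 0-255 floats."""
    h, w, c = img.shape
    flat = img.reshape(-1, c).astype(np.float64)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    # Rotate to the decorrelated basis, equalise the spread of each component,
    # then rotate back; this exaggerates colour differences between channels.
    whitened = (flat - mean) @ eigvec / np.sqrt(np.maximum(eigval, 1e-12))
    stretched = (whitened * target_sigma) @ eigvec.T + mean
    return np.clip(stretched.reshape(h, w, c), 0, 255)

def green_fraction(img, margin=10):
    """Crude ground-cover proxy: share of pixels where green clearly dominates."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return float(np.mean((g > r + margin) & (g > b + margin)))

# Tiny synthetic example; a real use would load a UAV orthophoto tile instead.
rng = np.random.default_rng(1)
demo = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
enhanced = decorrelation_stretch(demo)
print(f"green-pixel fraction after stretch: {green_fraction(enhanced):.3f}")
```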

  2. An intercomparison between gross α counting and gross β counting for grab-sampling determination of airborne radon progeny and thoron progeny

    International Nuclear Information System (INIS)

    Papp, Z.

    2006-01-01

    The instantaneous values of the airborne activity concentrations of radon progeny and thoron progeny have been determined 34 times in a closed and windowless room in a cellar using two independent grab-sampling methods in order to compare the performance of the methods. The activity concentration of radon (²²²Rn) was also measured and it varied between 200 and 650 Bq m⁻³. Two samples of radon and thoron progeny were collected simultaneously from roughly the same air volume by filtering. For the first method, the isotopes were collected on a membrane filter and gross α counting was applied over several successive time intervals. This method was a slightly improved version of the methods that have been applied generally for this purpose for decades. For the second method, the isotopes were collected on a glass-fibre filter and gross β counts were registered over several time intervals. This other method was developed a few years ago, and the above series of measurements was the first opportunity to make an intercomparison between it and another similar method based on α counting. Individual radon progeny and thoron progeny activity concentrations (for the isotopes ²¹⁸Po, ²¹⁴Pb, ²¹⁴Bi and ²¹²Pb) were evaluated by both methods. The detailed investigation of the results showed that the systematic deviation of the methods is small but significant and isotope-dependent. The weighted averages of the β/α activity concentration ratios for ²¹⁸Po, ²¹⁴Pb, ²¹⁴Bi, EEDC₂₂₂ (Equilibrium-Equivalent Decay-product Concentration of radon progeny) and ²¹²Pb were 0.99±0.03, 0.90±0.02, 1.03±0.02, 0.96±0.02 and 0.80±0.03, respectively. The source of the systematic deviation is probably the inaccurate knowledge of the counting efficiencies, mainly in the case of the α-counting method. A significant random-type difference between the results obtained with the two methods has also been revealed. For example, the β/α ratio for EEDC₂₂₂ varied between 0.81±0.01 and 1.22±0
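    The "weighted averages" of the β/α ratios quoted above are presumably inverse-variance weighted means; a minimal sketch of that calculation is given below with invented per-sample ratios, since the paper's individual measurements are not reproduced here.

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard uncertainty."""
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(weights * values) / np.sum(weights)
    return mean, 1.0 / np.sqrt(np.sum(weights))

# Invented beta/alpha ratios from a handful of grab samples.
ratios = [0.95, 1.02, 0.88, 1.10, 0.97]
sigmas = [0.05, 0.06, 0.04, 0.08, 0.05]
m, s = weighted_mean(ratios, sigmas)
print(f"weighted beta/alpha ratio = {m:.2f} +/- {s:.2f}")
```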

  3. Outlier identification procedures for contingency tables using maximum likelihood and $L_1$ estimates

    NARCIS (Netherlands)

    Kuhnt, S.

    2004-01-01

    Observed cell counts in contingency tables are perceived as outliers if they have low probability under an anticipated loglinear Poisson model. New procedures for the identification of such outliers are derived using the classical maximum likelihood estimator and an estimator based on the L1 norm.

  4. Decoy-state quantum key distribution with both source errors and statistical fluctuations

    International Nuclear Information System (INIS)

    Wang Xiangbin; Yang Lin; Peng Chengzhi; Pan Jianwei

    2009-01-01

    We show how to calculate the fraction of single-photon counts of the 3-intensity decoy-state quantum cryptography faithfully with both statistical fluctuations and source errors. Our results rely only on the bound values of a few parameters of the states of pulses.

  5. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    Science.gov (United States)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    Aiming at the mechanism error caused by joint clearance in a planar 2-DOF five-bar mechanism, the method of treating the clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the clearance of the moving pairs on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the joint-clearance error rotation space, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.

  6. Analysis of dental caries using generalized linear and count regression models

    Directory of Open Access Journals (Sweden)

    Javali M. Phil

    2013-11-01

    Full Text Available Generalized linear models (GLM) are a generalization of linear regression models, which allow fitting regression models to response data in all the sciences, especially the medical and dental sciences, that follow a general exponential family. These are a flexible and widely used class of models that can accommodate various response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data generation processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present an evaluation framework for the suitability of applying the GLM, Poisson, NB, ZIP and ZINB to a dental caries data set where the count data may exhibit evidence of many zeros and over-dispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
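    As a sketch of how the models named above can be fitted in practice (not the authors' code), the snippet below uses statsmodels on a simulated zero-inflated caries count; the column names and simulated data are placeholders, and the zero-inflated classes assume a reasonably recent statsmodels version.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

# Simulated stand-in for a caries dataset: dmft counts with excess zeros.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({"age": rng.uniform(3, 12, n), "sugar": rng.uniform(0, 5, n)})
lam = np.exp(-1.0 + 0.10 * df["age"] + 0.30 * df["sugar"])
structural_zero = rng.random(n) < 0.4
df["dmft"] = np.where(structural_zero, 0, rng.poisson(lam))

X = sm.add_constant(df[["age", "sugar"]])

fits = {
    "Poisson": sm.Poisson(df["dmft"], X).fit(disp=False),
    "NB":      sm.NegativeBinomial(df["dmft"], X).fit(disp=False),
    "ZIP":     ZeroInflatedPoisson(df["dmft"], X, exog_infl=X,
                                   inflation="logit").fit(disp=False),
    "ZINB":    ZeroInflatedNegativeBinomialP(df["dmft"], X, exog_infl=X,
                                             inflation="logit").fit(disp=False),
}
for name, fit in fits.items():
    print(f"{name:7s} logL = {fit.llf:9.1f}  AIC = {fit.aic:8.1f}")
```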

  7. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
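    A toy illustration of the comparison described above (not the annealer experiment itself): for a small Ising chain that can be enumerated exactly, maximum-likelihood decoding returns the ground state, while finite-temperature "maximum entropy" decoding takes bitwise majority votes over a Boltzmann distribution. All fields and couplings below are invented.

```python
import itertools
import numpy as np

def boltzmann_bit_marginals(h, J, beta):
    """Exact marginals P(s_i = +1) for a small Ising chain.

    Energy convention assumed here: E(s) = -sum_i h_i s_i - sum_i J_i s_i s_{i+1},
    with spins s_i in {-1, +1}.
    """
    n = len(h)
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    energy = -(states @ h) - np.sum(J * states[:, :-1] * states[:, 1:], axis=1)
    weights = np.exp(-beta * (energy - energy.min()))   # shift for numerical stability
    probs = weights / weights.sum()
    marginals = np.array([probs[states[:, i] == 1].sum() for i in range(n)])
    return marginals, states, energy

# Invented "received" fields (noisy bits) and ferromagnetic couplings.
h = np.array([0.2, -0.1, 0.05, 0.3, -0.4])
J = np.full(4, 0.5)

marginals, states, energy = boltzmann_bit_marginals(h, J, beta=1.0)
ml_decoding = states[np.argmin(energy)]              # ground state = ML answer
maxent_decoding = np.where(marginals > 0.5, 1, -1)   # bitwise finite-temperature answer
print("ML (ground-state) decoding :", ml_decoding)
print("Bit marginals at beta = 1  :", np.round(marginals, 3))
print("Max-entropy decoding       :", maxent_decoding)
```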

  8. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  9. Microbiological assessment of house and imported bottled water by comparison of bacterial endotoxin concentration, heterotrophic plate count, and fecal coliform count.

    Science.gov (United States)

    Reyes, Mayra I; Pérez, Cynthia M; Negrón, Edna L

    2008-03-01

    Consumers increasingly use bottled water and home water treatment systems to avoid direct tap water. According to the International Bottled Water Association (IBWA), an industry trade group, 5 billion gallons of bottled water were consumed by North Americans in 2001. The principal aim of this study was to assess the microbial quality of in-house and imported bottled water for human consumption, by measurement and comparison of the concentration of bacterial endotoxin and standard cultivable methods of indicator microorganisms, specifically, heterotrophic and fecal coliform plate counts. A total of 21 brands of commercial bottled water, consisting of 10 imported and 11 in-house brands, selected at random from 96 brands that are consumed in Puerto Rico, were tested at three different time intervals. The standard Limulus Amebocyte Lysate test, gel clot method, was used to measure the endotoxin concentrations. The minimum endotoxin concentration in 63 water samples was less than 0.0625 EU/mL, while the maximum was 32 EU/mL. The minimum bacterial count showed no growth, while the maximum was 7,500 CFU/mL. Bacterial isolates such as P. fluorescens, Corynebacterium sp. J-K, S. paucimobilis, P. versicularis, A. baumannii, P. chlororaphis, F. indologenes, A. faecalis and P. cepacia were identified. Repeated measures analysis of variance demonstrated that endotoxin concentration did not change over time, while there was a statistically significant change in bacterial count over time. In addition, multiple linear regression analysis demonstrated that a unit change in the concentration of endotoxin across time was associated with a significant change in bacterial count. Although bacterial growth was not detected in some water samples, endotoxin was present. Measurement of Gram-negative bacterial endotoxins is one of the methods that have been suggested as a rapid way of determining bacteriological water quality.

  10. Effects of lek count protocols on greater sage-grouse population trend estimates

    Science.gov (United States)

    Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.

    2016-01-01

    Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count

  11. Counting in Lattices: Combinatorial Problems from Statistical Mechanics.

    Science.gov (United States)

    Randall, Dana Jill

    In this thesis we consider two classical combinatorial problems arising in statistical mechanics: counting matchings and self-avoiding walks in lattice graphs. The first problem arises in the study of the thermodynamical properties of monomers and dimers (diatomic molecules) in crystals. Fisher, Kasteleyn and Temperley discovered an elegant technique to exactly count the number of perfect matchings in two dimensional lattices, but it is not applicable for matchings of arbitrary size, or in higher dimensional lattices. We present the first efficient approximation algorithm for computing the number of matchings of any size in any periodic lattice in arbitrary dimension. The algorithm is based on Monte Carlo simulation of a suitable Markov chain and has rigorously derived performance guarantees that do not rely on any assumptions. In addition, we show that these results generalize to counting matchings in any graph which is the Cayley graph of a finite group. The second problem is counting self-avoiding walks in lattices. This problem arises in the study of the thermodynamics of long polymer chains in dilute solution. While there are a number of Monte Carlo algorithms used to count self -avoiding walks in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, we present an efficient algorithm which relies on a single, widely-believed conjecture that is simpler than preceding assumptions and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. In either case we know we can trust our results and the algorithm is guaranteed to run in polynomial time. This is the first algorithm for counting self-avoiding walks in which the error bounds are rigorously controlled. This work was supported in part by an AT&T graduate fellowship, a University of
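    As a point of contrast with the Monte Carlo approach described above (which is needed precisely because exhaustive counting is infeasible), a brute-force enumeration of self-avoiding walks on the square lattice is easy to write and works only for very short walks:

```python
def count_saws(n):
    """Number of n-step self-avoiding walks on Z^2 starting at the origin."""
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:        # enforce self-avoidance
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n)

# The exact counts grow roughly like mu^n (mu ~ 2.64 for Z^2), which is why
# enumeration is hopeless for long walks: 4, 12, 36, 100, 284, ...
print([count_saws(n) for n in range(1, 6)])
```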

  12. Categorical counting.

    Science.gov (United States)

    Fetterman, J Gregor; Killeen, P Richard

    2010-09-01

    Pigeons pecked on three keys, responses to one of which could be reinforced after a few pecks, to a second key after a somewhat larger number of pecks, and to a third key after the maximum pecking requirement. The values of the pecking requirements and the proportion of trials ending with reinforcement were varied. Transits among the keys were an orderly function of peck number, and showed approximately proportional changes with changes in the pecking requirements, consistent with Weber's law. Standard deviations of the switch points between successive keys increased more slowly within a condition than across conditions. Changes in reinforcement probability produced changes in the location of the psychometric functions that were consistent with models of timing. Analyses of the number of pecks emitted and the duration of the pecking sequences demonstrated that peck number was the primary determinant of choice, but that passage of time also played some role. We capture the basic results with a standard model of counting, which we qualify to account for the secondary experiments. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Error Biases in Inner and Overt Speech: Evidence from Tongue Twisters

    Science.gov (United States)

    Corley, Martin; Brocklehurst, Paul H.; Moat, H. Susannah

    2011-01-01

    To compare the properties of inner and overt speech, Oppenheim and Dell (2008) counted participants' self-reported speech errors when reciting tongue twisters either overtly or silently and found a bias toward substituting phonemes that resulted in words in both conditions, but a bias toward substituting similar phonemes only when speech was…

  14. Counting statistics in low level radioactivity measurements fluctuating counting efficiency

    International Nuclear Information System (INIS)

    Pazdur, M.F.

    1976-01-01

    A divergence between the probability distribution of the number of nuclear disintegrations and the number of observed counts, caused by counting efficiency fluctuation, is discussed. The negative binomial distribution is proposed to describe the probability distribution of the number of counts, instead of the Poisson distribution, which is assumed to hold for the number of nuclear disintegrations only. From actual measurements the r.m.s. amplitude of the counting efficiency fluctuation is estimated. Some consequences of counting efficiency fluctuation are investigated and the corresponding formulae are derived: (1) for the detection limit as a function of the number of partial measurements and the relative amplitude of counting efficiency fluctuation, and (2) for the optimum allocation of the number of partial measurements between sample and background. (author)
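    A small simulation in the spirit of the abstract, under a simple assumed model: the disintegrations are Poisson, but the counting efficiency fluctuates from one partial measurement to the next, so the recorded counts become overdispersed (variance > mean), which is what a negative binomial description captures.

```python
import numpy as np

rng = np.random.default_rng(7)

n_meas = 100_000                 # repeated counting intervals
mean_disint = 1000.0             # mean disintegrations per interval (assumed)
eff_mean, eff_rms = 0.30, 0.03   # fluctuating counting efficiency (assumed)

disintegrations = rng.poisson(mean_disint, n_meas)
efficiency = np.clip(rng.normal(eff_mean, eff_rms, n_meas), 0.0, 1.0)
counts = rng.binomial(disintegrations, efficiency)

mean, var = counts.mean(), counts.var()
print(f"mean = {mean:.1f}, variance = {var:.1f}, variance/mean = {var / mean:.2f}")
# With a constant efficiency the variance/mean ratio would be ~1 (Poisson);
# the fluctuation inflates it by roughly 1 + mean * (eff_rms / eff_mean)**2.
```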

  15. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  16. FEL small signal gain reduction due to phase error of undulator

    International Nuclear Information System (INIS)

    Jia Qika

    2002-01-01

    The effects of undulator phase errors on the Free Electron Laser small signal gain are analyzed and discussed. The gain reduction factor due to the phase error is given analytically for low-gain regimes; it shows that the degradation of the gain is similar to that of the spontaneous radiation, having a simple exponential relation with the square of the rms phase error, and that the linearly varying part of the phase error shifts the position of maximum gain. The result also shows that Madey's theorem still holds in the presence of phase error. The gain reduction factor due to the phase error can also be given in a simple way for high-gain regimes

  17. Cryptographic robustness of a quantum cryptography system using phase-time coding

    International Nuclear Information System (INIS)

    Molotkov, S. N.

    2008-01-01

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  18. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    Science.gov (United States)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner whose IQ is between 71 and 89 will have difficulties in solving mathematics problems that often lead to errors. The errors can be analyzed to locate where they occur and of what type they are. This research is a qualitative descriptive study which aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class in solving fraction problems. The subject of this research is one slow learner, a seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semi-structured interviews. The collected data were analyzed by Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.

  19. Information loss for 2 × 2 tables with missing cell counts: binomial case

    NARCIS (Netherlands)

    Eisinga, R.N.

    2008-01-01

    We formulate likelihood-based ecological inference for 2 × 2 tables with missing cell counts as an incomplete data problem and study Fisher information loss by comparing estimation from complete and incomplete data. In so doing, we consider maximum-likelihood (ML) estimators of probabilities

  20. Information loss for 2×2 tables with missing cell counts : binomial case

    NARCIS (Netherlands)

    Eisinga, Rob

    2008-01-01

    We formulate likelihood-based ecological inference for 2×2 tables with missing cell counts as an incomplete data problem and study Fisher information loss by comparing estimation from complete and incomplete data. In so doing, we consider maximum-likelihood (ML) estimators of probabilities governed

  1. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel, an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  2. Requirements on the Redshift Accuracy for future Supernova and Number Count Surveys

    International Nuclear Information System (INIS)

    Huterer, Dragan; Kim, Alex; Broderick, Tamara

    2004-01-01

    We investigate the required redshift accuracy of type Ia supernova and cluster number-count surveys in order for the redshift uncertainties not to contribute appreciably to the dark energy parameter error budget. For the SNAP supernova experiment, we find that, without the assistance of ground-based measurements, individual supernova redshifts would need to be determined to about 0.002 or better, which is a challenging but feasible requirement for a low-resolution spectrograph. However, we find that accurate redshifts for z < 0.1 supernovae, obtained with ground-based experiments, are sufficient to immunize the results against even relatively large redshift errors at high z. For the future cluster number-count surveys such as the South Pole Telescope, Planck or DUET, we find that the purely statistical error in photometric redshift is less important, and that the irreducible, systematic bias in redshift drives the requirements. The redshift bias will have to be kept below 0.001-0.005 per redshift bin (which is determined by the filter set), depending on the sky coverage and details of the definition of the minimal mass of the survey. Furthermore, we find that X-ray surveys have a more stringent required redshift accuracy than Sunyaev-Zeldovich (SZ) effect surveys since they use a shorter lever arm in redshift; conversely, SZ surveys benefit from their high redshift reach only so long as some redshift information is available for distant (z ≳ 1) clusters

  3. Ultrafast time measurements by time-correlated single photon counting coupled with superconducting single photon detector

    Energy Technology Data Exchange (ETDEWEB)

    Shcheslavskiy, V., E-mail: vis@becker-hickl.de; Becker, W. [Becker & Hickl GmbH, Nahmitzer Damm 30, 12277 Berlin (Germany); Morozov, P.; Divochiy, A. [Scontel, Rossolimo St., 5/22-1, Moscow 119021 (Russian Federation); Vakhtomin, Yu. [Scontel, Rossolimo St., 5/22-1, Moscow 119021 (Russian Federation); Moscow State Pedagogical University, 1/1 M. Pirogovskaya St., Moscow 119991 (Russian Federation); Smirnov, K. [Scontel, Rossolimo St., 5/22-1, Moscow 119021 (Russian Federation); Moscow State Pedagogical University, 1/1 M. Pirogovskaya St., Moscow 119991 (Russian Federation); National Research University Higher School of Economics, 20 Myasnitskaya St., Moscow 101000 (Russian Federation)

    2016-05-15

    Time resolution is one of the main characteristics of the single photon detectors besides quantum efficiency and dark count rate. We demonstrate here an ultrafast time-correlated single photon counting (TCSPC) setup consisting of a newly developed single photon counting board SPC-150NX and a superconducting NbN single photon detector with a sensitive area of 7 × 7 μm. The combination delivers a record instrument response function with a full width at half maximum of 17.8 ps and system quantum efficiency ∼15% at wavelength of 1560 nm. A calculation of the root mean square value of the timing jitter for channels with counts more than 1% of the peak value yielded about 7.6 ps. The setup has also good timing stability of the detector–TCSPC board.

  4. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    Science.gov (United States)

    Lihua, Ma; Wang, Meng

    2013-09-01

    The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to compute its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error is along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometric relationship between the observer and the spatial constellation at a given time.
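    A short numerical illustration of the line-of-sight statement above: to first order, the pseudorange error contributed by a satellite position error is its projection onto the LOS unit vector, so an error aligned with the LOS maps fully into range while a cross-track error of the same size maps only partially. The vectors below are made-up, roughly GPS-scale numbers.

```python
import numpy as np

def range_error(ephemeris_error_m, sat_pos_m, user_pos_m):
    """First-order pseudorange error (m) from a satellite position error vector."""
    los = sat_pos_m - user_pos_m
    los_unit = los / np.linalg.norm(los)
    return float(np.dot(ephemeris_error_m, los_unit))

sat = np.array([15_600e3, 7_540e3, 20_140e3])   # rough GPS-like ECEF position (m)
user = np.array([6_378e3, 0.0, 0.0])            # user near the equator (m)

err_along_los = 5.0 * (sat - user) / np.linalg.norm(sat - user)  # 5 m along the LOS
err_cross = np.array([0.0, 5.0, 0.0])                            # 5 m roughly cross-track

print(f"along-LOS error   -> {range_error(err_along_los, sat, user):.2f} m of range error")
print(f"cross-track error -> {range_error(err_cross, sat, user):.2f} m of range error")
```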

  5. Research on the Method of Noise Error Estimation of Atomic Clocks

    Science.gov (United States)

    Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.

    2017-05-01

    The simulation methods of different noises of atomic clocks are given. The frequency flicker noise of atomic clock is studied by using the Markov process theory. The method for estimating the maximum interval error of the frequency white noise is studied by using the Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and the simulations are carried out according to the noise models. Finally, the maximum interval error estimates of the frequency white noises generated by the 9 cesium atomic clocks have been acquired.

  6. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
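    A generic illustration of the two estimators being compared (not the SDAT code): a synthetic low-count spectrum built from two known standard spectra is unmixed by non-negative least squares and by Poisson maximum likelihood. The templates and activities are invented stand-ins for the radioxenon standards.

```python
import numpy as np
from scipy.optimize import minimize, nnls

rng = np.random.default_rng(3)

# Two invented unit-activity "standard spectra" over 50 channels.
channels = np.arange(50)
std_a = np.exp(-0.5 * ((channels - 15) / 4.0) ** 2)   # stand-in for one radioxenon
std_b = np.exp(-0.5 * ((channels - 30) / 6.0) ** 2)   # stand-in for another
templates = np.column_stack([std_a, std_b])

true_activities = np.array([5.0, 20.0])
measured = rng.poisson(templates @ true_activities)    # low-count measured spectrum

# (a) Least squares, constrained to non-negative activities.
ls_estimate, _ = nnls(templates, measured.astype(float))

# (b) Poisson maximum likelihood: minimise the negative log-likelihood.
def neg_log_like(act):
    lam = np.clip(templates @ act, 1e-12, None)
    return float(np.sum(lam - measured * np.log(lam)))

ml_estimate = minimize(neg_log_like, x0=np.ones(2),
                       bounds=[(0.0, None), (0.0, None)]).x

print("true activities    :", true_activities)
print("least-squares fit  :", np.round(ls_estimate, 2))
print("max-likelihood fit :", np.round(ml_estimate, 2))
```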

  7. Optical losses due to tracking error estimation for a low concentrating solar collector

    International Nuclear Information System (INIS)

    Sallaberry, Fabienne; García de Jalón, Alberto; Torres, José-Luis; Pujol-Nadal, Ramón

    2015-01-01

    Highlights: • A solar thermal collector with low concentration and one-axis tracking was tested. • A quasi-dynamic testing procedure for the IAM was defined for a tracking collector. • The match between the concentrator optics and the tracking accuracy was checked. • The maximum and long-term optical losses due to tracking error were calculated. - Abstract: The determination of the accuracy of a solar tracker used in domestic hot water solar collectors is not yet standardized. However, while using optical concentration devices, it is important to use a solar tracker with adequate precision with regard to the specific optical concentration factor. Otherwise, the concentrator would sustain high optical losses due to inadequate focusing of the solar radiation onto its receiver, despite having a good quality. This study is focused on the estimation of the long-term optical losses due to the tracking error of a low-temperature collector using low-concentration optics. For this purpose, a testing procedure for the incidence angle modifier on the tracking plane is proposed to determine the acceptance angle of its concentrator even with different longitudinal incidence angles along the focal line plane. Then, the impact of the maximum tracking error angle upon the optical efficiency has been determined. Finally, the calculation of the long-term optical error due to the tracking errors, using the design angular tracking error declared by the manufacturer, is carried out. The maximum tracking error calculated for this collector implies an optical loss of about 8.5%, which is high, but the average long-term optical loss calculated for one year was about 1%, which is reasonable for such collectors used for domestic hot water

  8. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...

  9. NQAR: Network Quality Aware Routing in Error-Prone Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jaewon Choi

    2010-01-01

    Full Text Available We propose a network quality aware routing (NQAR) mechanism to provide an enabling method for delay-sensitive data delivery over error-prone wireless sensor networks. Unlike the existing routing methods that select routes with the shortest arrival latency or the minimum hop count, the proposed scheme adaptively selects the route based on network qualities including link errors and collisions, with minimum additional complexity. It is designed to avoid paths with potential noise and collision that may cause many non-deterministic backoffs and retransmissions. We propose a generic framework to select a minimum-cost route that takes the packet loss rate and collision history into account. NQAR uses a data-centric approach to estimate a single-hop delay based on processing time, propagation delay, packet loss rate, number of backoffs, and the retransmission timeout between two neighboring nodes. This enables a source node to choose the path with the shortest expected end-to-end delay for sending delay-sensitive data. The experiment results show that NQAR reduces the end-to-end transfer delay by up to approximately 50% in comparison with the latency-based directed diffusion and the hop count-based directed diffusion under error-prone network environments. Moreover, NQAR shows better performance than those routing methods in terms of jitter, reachability, and network lifetime.

  10. Project for an analogue divider using electronic counting

    International Nuclear Information System (INIS)

    Novat, J.

    1964-01-01

    The apparatus which has been developed is designed to give the reciprocal of a number between 10³ and 10⁷. In practice this number can be the pulse count provided during a given time by a detector of the BF₃ type during a criticality experiment. The apparatus is made up of two parts: one provides, by means of relays, a voltage proportional to the reciprocal required; the other is a numeric voltmeter measuring this voltage between 0.1 and 1 volt. The relative error of the result is under 5 per cent. (author) [fr

  11. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    Science.gov (United States)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
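    A sketch of the inversion scheme described above, under assumptions about the interfaces (the calibration coefficients, count range and tolerance below are hypothetical, and a real simulator would do this per telemetry channel): Newton-Raphson on the counts-to-EU polynomial, seeded by reverse linear interpolation between the EU values at the count limits.

```python
import numpy as np

def eu_to_counts(eu, cal_coeffs, count_min=0, count_max=4095, tol=0.5, max_iter=20):
    """Invert a counts->EU calibration polynomial for one EU value.

    cal_coeffs : polynomial coefficients, highest order first (np.polyval order),
                 giving EU as a function of telemetry counts.
    tol        : stop once the Newton step is below about half a count.
    """
    poly = np.poly1d(cal_coeffs)
    dpoly = poly.deriv()

    # Reverse linear interpolation between the EU limits as the initial guess.
    eu_lo, eu_hi = poly(count_min), poly(count_max)
    counts = count_min + (eu - eu_lo) * (count_max - count_min) / (eu_hi - eu_lo)

    for _ in range(max_iter):
        slope = dpoly(counts)
        if slope == 0.0:
            break
        step = (poly(counts) - eu) / slope
        counts -= step
        if abs(step) < tol:
            break
    return int(round(min(max(counts, count_min), count_max)))

# Hypothetical third-order calibration: EU = a3*c^3 + a2*c^2 + a1*c + a0.
coeffs = [1.0e-9, -2.0e-6, 0.05, -10.0]
c = eu_to_counts(75.0, coeffs)
print(c, "counts ->", round(float(np.polyval(coeffs, c)), 2), "EU")  # round-trips to ~75 EU
```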

  12. A 500-MHz x-ray counting system with a silicon avalanche photodiode

    International Nuclear Information System (INIS)

    Kishimoto, Shunji

    2009-01-01

    In the present measurements using a Si-APD X-ray detector and a 500-MHz counting system, a maximum output rate of 3.3×10⁸ s⁻¹ was achieved for 8-keV X-rays in beamline BL-14A of the Photon Factory. A small Si-APD of 4-pF electric capacity was used as the detector device in order to output a pulse of a width shorter than 2 ns on the baseline. For processing the fast pulses, a discriminator and a scaler having a throughput of >500 MHz were prepared. Since the acceleration frequency at the PF ring was 500.1 MHz and the empty-bunch spacing was 12/312 bunches per circumference, the expected maximum rate was 4.8×10⁸ s⁻¹ according to the counting model for a pulsed photon source. The reason why the present system did not reach the expected value was the baseline shift at the amplifier outputs. A rise of +0.2 V was observed at a discriminator output rate of 3.3×10⁸ s⁻¹, while the pulse height was lower than 0.2 V. The baseline shift was caused by an AC-coupling circuit in the amplifier. If a DC-coupling circuit can be used for the amplifier instead of the AC-coupling circuit, or an active adjustment to compensate the baseline shift is installed, the counting system will show an ideal response. Although the present system including NIM modules was not so compact, we would like to develop a new fast-counting circuit for a Si-APD array detector of more than 100 channels of small pixels in the near future. (author)
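    Reading "12/312" as 12 empty RF buckets out of 312, the expected maximum rate quoted above corresponds to at most one count per filled bucket; written out:

```latex
R_{\max} \approx f_{\mathrm{RF}}\,\frac{N_{\mathrm{filled}}}{N_{\mathrm{total}}}
        = 500.1\ \mathrm{MHz}\times\frac{312-12}{312}
        \approx 4.8\times10^{8}\ \mathrm{s}^{-1}
```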

  13. Comparison of multinomial and binomial proportion methods for analysis of multinomial count data.

    Science.gov (United States)

    Galyean, M L; Wester, D B

    2010-10-01

    Simulation methods were used to generate 1,000 experiments, each with 3 treatments and 10 experimental units/treatment, in completely randomized (CRD) and randomized complete block designs. Data were counts in 3 ordered or 4 nominal categories from multinomial distributions. For the 3-category analyses, category probabilities were 0.6, 0.3, and 0.1, respectively, for 2 of the treatments, and 0.5, 0.35, and 0.15 for the third treatment. In the 4-category analysis (CRD only), probabilities were 0.3, 0.3, 0.2, and 0.2 for treatments 1 and 2 vs. 0.4, 0.4, 0.1, and 0.1 for treatment 3. The 3-category data were analyzed with generalized linear mixed models as an ordered multinomial distribution with a cumulative logit link or by regrouping the data (e.g., counts in 1 category/sum of counts in all categories), followed by analysis of single categories as binomial proportions. Similarly, the 4-category data were analyzed as a nominal multinomial distribution with a glogit link or by grouping data as binomial proportions. For the 3-category CRD analyses, empirically determined type I error rates based on pair-wise comparisons (F- and Wald chi(2) tests) did not differ between multinomial and individual binomial category analyses with 10 (P = 0.38 to 0.60) or 50 (P = 0.19 to 0.67) sampling units/experimental unit. When analyzed as binomial proportions, power estimates varied among categories, with analysis of the category with the greatest counts yielding power similar to the multinomial analysis. Agreement between methods (percentage of experiments with the same results for the overall test for treatment effects) varied considerably among categories analyzed and sampling unit scenarios for the 3-category CRD analyses. Power (F-test) was 24.3, 49.1, 66.9, 83.5, 86.8, and 99.7% for 10, 20, 30, 40, 50, and 100 sampling units/experimental unit for the 3-category multinomial CRD analyses. Results with randomized complete block design simulations were similar to those with the CRD

  14. Fly's Eye: a counting camera for thermal neutrons, some applications, problems, and prospects

    International Nuclear Information System (INIS)

    Davidson, J.B.

    1975-01-01

    An area detector for thermal neutrons based on image intensification techniques is described and some capabilities and limitations of the detection system are discussed. Among the former are high spatial resolution, high instantaneous counting rate, electronic zoom, time-gating, and integration. The detector is limited in that the maximum counting rate for a resolution element is 60 regularly spaced counts per second. Also, the nonuniformity of response over the detector limits the useful size and requires point-by-point calibration. In addition, a higher efficiency for neutron detection would be desirable. Some typical applications of the system are: crystal inspection, neutron magnetic diffraction topography, and searches for temperature-induced changes in diffraction patterns. The future application of solid-state television sensors and microchannel-plate intensifiers to improve the system is briefly mentioned. (U.S.)

  15. Fly's eye: a counting camera for thermal neutrons: some applications, problems, and prospects

    International Nuclear Information System (INIS)

    Davidson, J.B.

    1976-01-01

    An area detector for thermal neutrons based on image intensification techniques is described. Some capabilities and limitations of the detection system are discussed. Among the former are high spatial resolution, high instantaneous counting rate, electronic zoom, time-gating, and integration. The detector is limited in that the maximum counting rate for a resolution element is 60 regularly spaced counts per second. Also, the nonuniformity of response over the detector puts a limit on the useful size and necessitates point-by-point calibration. In addition, a higher efficiency for neutron detection would be desirable. Some typical applications of the system are crystal inspection, neutron magnetic diffraction topography, and searches for temperature-induced changes in diffraction patterns. The future application of solid-state television sensors and microchannel-plate intensifiers to improve the system is briefly mentioned

  16. Selection of non-destructive assay methods: Neutron counting or calorimetric assay?

    International Nuclear Information System (INIS)

    Cremers, T.L.; Wachter, J.R.

    1994-01-01

    The transition of DOE facilities from production to D&D has led to more measurements of product, waste, scrap, and other less attractive materials. Some of these materials are difficult to analyze by either neutron counting or calorimetric assay. To determine the most efficacious analysis method, a variety of materials, including impure salts and hydrofluorination residues, has been assayed by both calorimetric assay and neutron counting. New data will be presented together with a review of published data. The precision and accuracy of these measurements are compared to chemistry values and are reported. The contribution of the gamma-ray isotopic determination measurement to the overall error of the calorimetric assay or neutron assay is examined and discussed. Other factors affecting selection of the most appropriate non-destructive assay method are listed and considered.

  17. Adaptive color halftoning for minimum perceived error using the blue noise mask

    Science.gov (United States)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moiré patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moiré patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.

  18. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    Energy Technology Data Exchange (ETDEWEB)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  19. Solid-State Neutron Multiplicity Counting System Using Commercial Off-the-Shelf Semiconductor Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rozhdestvenskyy, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-09

    This work iterates on the first demonstration of a solid-state neutron multiplicity counting system developed at Lawrence Livermore National Laboratory by using commercial off-the-shelf detectors. The system was demonstrated to determine the mass of a californium-252 neutron source within 20% error, requiring only a one-hour measurement time with 20 cm² of active detector area.

  20. Tower counts

    Science.gov (United States)

    Woody, Carol Ann; Johnson, D.H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.

    2007-01-01

    Counting towers provide an accurate, low-cost, low-maintenance, low-technology, and easily mobilized escapement estimation program compared to other methods (e.g., weirs, hydroacoustics, mark-recapture, and aerial surveys) (Thompson 1962; Siebel 1967; Cousens et al. 1982; Symons and Waldichuk 1984; Anderson 2000; Alaska Department of Fish and Game 2003). Counting tower data have been found to be consistent with digital video counts (Edwards 2005). Counting towers do not interfere with natural fish migration patterns, nor are fish handled or stressed; however, their use is generally limited to clear rivers that meet specific site selection criteria. The data provided by counting tower sampling allow fishery managers to determine reproductive population size, estimate total return (escapement + catch) and its uncertainty, evaluate population productivity and trends, set harvest rates, determine spawning escapement goals, and forecast future returns (Alaska Department of Fish and Game 1974-2000 and 1975-2004). The number of spawning fish is determined by subtracting subsistence catch, sport-caught fish, and prespawn mortality from the total estimated escapement. The methods outlined in this protocol for tower counts can be used to provide reasonable estimates (±6%-10%) of reproductive salmon population size and run timing in clear rivers.
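
    The escapement arithmetic described above reduces to a simple subtraction; a sketch with hypothetical numbers (none of these values come from the protocol):

```python
def spawners(escapement, subsistence, sport_catch, prespawn_mortality):
    """Spawning fish = total estimated escapement minus removals and losses."""
    return escapement - subsistence - sport_catch - prespawn_mortality

# hypothetical example values (not from the protocol)
escapement = 120_000                       # tower-count escapement estimate
print(spawners(escapement, subsistence=8_000,
               sport_catch=4_000, prespawn_mortality=1_500))   # 106500
# The record quotes roughly +/-6%-10% accuracy on the escapement estimate itself.
```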

  1. Safe and effective error rate monitors for SS7 signaling links

    Science.gov (United States)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. A SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIM's are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIM's can be differentiated from SUERM by the fact that EIM's monitor errors over an interval while SUERM's count errored messages. EIM's offer several advantages over SUERM's, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
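
    A toy sketch of an error-interval monitor of the kind described above: a first-order recursive filter turns per-interval error counts into an estimated changeover transient, and the link is removed from service when the estimate crosses a threshold T. The decay constant, gain, and threshold below are illustrative placeholders, not values from the SS7 standards.

```python
def run_eim(error_counts, decay=0.9, gain=1.0, threshold=10.0):
    """Toy error-interval monitor.

    error_counts : iterable of error counts per fixed time interval
    decay, gain  : coefficients of the recursive (IIR) transient model
    threshold    : estimated transient level that triggers changeover
    """
    transient = 0.0
    for interval, errors in enumerate(error_counts):
        # recursive digital filter: new estimate = decayed old estimate + input
        transient = decay * transient + gain * errors
        if transient > threshold:
            return interval  # changeover initiated; link removed from service
    return None              # link stays in service


# bursty errors early on, then a sustained error rate that trips the monitor
print(run_eim([0, 1, 0, 0, 5, 6, 7, 8]))
```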

  2. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  3. IDENTIFICATION OF BACTERIA CAUSING DIARRHOEA IN HIV/AIDS PATIENTS AND ITS CORRELATION WITH CD4 COUNT

    Directory of Open Access Journals (Sweden)

    Anand Premanand

    2016-05-01

    BACKGROUND The number of HIV-positive patients is increasing in India. Data on the prevalence of diarrhoea and the spectrum of bacteria responsible for diarrhoea in HIV-positive patients is lacking in our area. The identification of enteric pathogens in patients with HIV/AIDS is important because an increasing array of therapeutic regimens is becoming available to treat many of these infections. Thus, an attempt is made to elucidate the associations between the causative bacteria of acute and chronic diarrhoea and CD4 count. METHODS Stool specimens were obtained over a period of eighteen months from HIV-infected adults with diarrhoea presenting to Shri B M Patil Medical College Hospital and Research Centre, Vijayapura. In all patients with diarrhoea, stool specimens were examined by microscopy and culture to identify bacterial pathogens, and a blood sample was analysed for CD4 count. RESULTS A total of 80 individuals were enrolled in this study. Cases included 46 males and 34 females. Among the cases, the maximum number of subjects were in the age group of 30-40 years, in which 23 (62.2%) were males and 14 (37.8%) were females. 56 had acute and 24 had chronic diarrhoea. Bacteria were isolated in 5 (8.9%) of the acute and 16 (66.7%) of the chronic diarrhoea cases, respectively. The most common bacterium isolated was E. coli (17.5%), followed by Klebsiella (5%) and Shigella spp. (3.75%). Patients with chronic diarrhoea had lower CD4 cell counts. The maximum bacterial isolation was in patients whose CD4 cell counts were below 200 cells/mm3. CONCLUSION Bacterial isolation was most strongly associated with low CD4 counts and chronic diarrhoea. E. coli was the bacterium isolated most frequently in the HIV patients. Over two-thirds of diarrhoeal episodes were undiagnosed, suggesting that unidentified agents or primary HIV enteropathy are important causes of diarrhoea in this population. There is a strong negative association between duration of diarrhoea and CD4

  4. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
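
    For orientation, a minimal sketch of the symplectic Euler scheme for a separable Hamiltonian H(q, p) = p^2/2 + V(q), where the update is explicit; the harmonic potential and step size are illustrative and not tied to the optimal control setting of the paper.

```python
import numpy as np

def symplectic_euler(q0, p0, grad_V, h, n_steps):
    """Symplectic Euler for H(q, p) = p**2 / 2 + V(q).

    Updates the momentum first, then the position with the new momentum."""
    q, p = float(q0), float(p0)
    traj = [(q, p)]
    for _ in range(n_steps):
        p = p - h * grad_V(q)   # implicit in p in general; explicit for separable H
        q = q + h * p
        traj.append((q, p))
    return np.array(traj)

# harmonic oscillator V(q) = q**2 / 2: the energy stays bounded over long times
traj = symplectic_euler(q0=1.0, p0=0.0, grad_V=lambda q: q, h=0.1, n_steps=1000)
energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
print(energy.min(), energy.max())   # small bounded oscillation around 0.5
```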

  5. Counting carbohydrates

    Science.gov (United States)

    Carb counting; Carbohydrate-controlled diet; Diabetic diet; Diabetes-counting carbohydrates ... Many foods contain carbohydrates (carbs), including: Fruit and fruit juice Cereal, bread, pasta, and rice Milk and milk products, soy milk Beans, legumes, ...

  6. Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count

    OpenAIRE

    Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.

    2010-01-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC ...

  7. Sperm count as a surrogate endpoint for male fertility control.

    Science.gov (United States)

    Benda, Norbert; Gerlinger, Christoph

    2007-11-30

    When assessing the effectiveness of a hormonal method of fertility control in men, the classical approach used for the assessment of hormonal contraceptives in women, by estimating the pregnancy rate or using a life-table analysis for the time to pregnancy, is difficult to apply in a clinical development program. The main reasons are the dissociation of the treated unit, i.e. the man, and the observed unit, i.e. his female partner, the high variability in the frequency of male intercourse, the logistical cost and ethical concerns related to the monitoring of the trial. A reasonable surrogate endpoint of the definite endpoint time to pregnancy is sperm count. In addition to the avoidance of the mentioned problems, trials that compare different treatments are possible with reasonable sample sizes, and study duration can be shorter. However, current products do not suppress sperm production to 100 per cent in all men and the sperm count is only observed with measurement error. Complete azoospermia might not be necessary in order to achieve an acceptable failure rate compared with other forms of male fertility control. Therefore, the use of sperm count as a surrogate endpoint must rely on the results of a previous trial in which both the definitive- and surrogate-endpoint results were assessed. The paper discusses different estimation functions of the mean pregnancy rate (corresponding to the cumulative hazard) that are based on the results of sperm count trial and a previous trial in which both sperm count and time to pregnancy were assessed, as well as the underlying assumptions. Sample size estimations are given for pregnancy rate estimation with a given precision.

  8. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, and it has less impact due to the stability in organ movement achieved with DIBH. The systematic error is likewise about half of the random error, because modern linac machines can reduce the systematic uncertainty effectively, while the random errors remain uncontrollable. (paper)
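
    As an illustration of how such population statistics are commonly summarized (the record does not spell out its exact definitions, so the convention below, with the systematic error taken as the standard deviation of the per-patient mean errors and the random error as the root mean square of the per-patient standard deviations, is an assumption):

```python
import numpy as np

def setup_error_summary(errors_by_patient):
    """errors_by_patient: list of 1-D arrays of setup errors (mm), one per patient."""
    patient_means = np.array([np.mean(e) for e in errors_by_patient])
    patient_sds = np.array([np.std(e, ddof=1) for e in errors_by_patient])
    group_mean = patient_means.mean()                # overall systematic offset
    systematic = patient_means.std(ddof=1)           # SD of per-patient means
    random_err = np.sqrt(np.mean(patient_sds ** 2))  # RMS of per-patient SDs
    return group_mean, systematic, random_err

# hypothetical vertical-direction setup errors (mm) for three patients
patients = [np.array([0.3, 0.5, 0.1, 0.4]),
            np.array([-0.2, 0.0, -0.4, -0.1]),
            np.array([0.6, 0.9, 0.7, 0.8])]
print(setup_error_summary(patients))
```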

  9. Calibration of a liquid scintillation counter for alpha, beta and Cerenkov counting

    International Nuclear Information System (INIS)

    Scarpitta, S.C.; Fisenne, I.M.

    1996-07-01

    Calibration data are presented for 25 radionuclides that were individually measured in a Packard Tri-Carb 2250CA liquid scintillation (LS) counter by both conventional and Cerenkov detection techniques. The relationships and regression data between the quench indicating parameters and the LS counting efficiencies were determined using microliter amounts of tracer added to low-40K borosilicate glass vials containing 15 mL of Insta-Gel XF scintillation cocktail. Using 40K, the detection efficiencies were linear over a three order of magnitude range (10 - 10,000 mBq) in beta activity for both LS and Cerenkov counting. The Cerenkov counting efficiency (CCE) increased linearly (42% per MeV) from 0.30 to 2.0 MeV, whereas the LS efficiency was >90% for betas with energy in excess of 0.30 MeV. The CCE was 20 - 50% less than the LS counting efficiency for beta particles with maximum energies in excess of 1 MeV. Based on replicate background measurements, the lower limit of detection (LLD) for a 1-h count at the 95% confidence level, using water as a solvent, was 0.024 counts s^-1 and 0.028 counts s^-1 for plastic and glass vials, respectively. The LLD for a 1-h count ranged from 46 to 56 mBq (2.8 - 3.4 dpm) for both Cerenkov and conventional LS counting. This assumes: (1) a 100% counting efficiency, (2) a 50% yield of the nuclide of interest, (3) a 1-h measurement time using low background plastic vials, and (4) a 0-50 keV region of interest. The LLD is reduced an order of magnitude when the yield recovery exceeds 90% and a lower background region is used (i.e., 100 - 500 keV alpha region of interest). Examples and applications of both Cerenkov and LS counting techniques are given in the text and appendices

  10. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model

    Science.gov (United States)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin

    2015-03-01

    Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of the cells can occur, leading to discordance in the spot count. As a solution to the counting errors arising from overlapping spots, a Gaussian mixture model (GMM)-based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a random forest classifier, the results show that the proposed method is able to detect closely overlapping spots which cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
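
    A minimal sketch of the underlying idea: the number of overlapping spots in a cluster of pixel coordinates is chosen by comparing GMM information criteria. The record goes further and feeds the AIC/BIC values into a random forest classifier; the synthetic data and the direct model selection below are simplifications.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# synthetic "spot" pixel coordinates: two closely overlapping 2-D Gaussians
spot_a = rng.normal(loc=[10.0, 10.0], scale=1.2, size=(150, 2))
spot_b = rng.normal(loc=[12.5, 10.5], scale=1.2, size=(150, 2))
pixels = np.vstack([spot_a, spot_b])

# fit GMMs with 1..4 components and compare information criteria
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(pixels)
    print(k, round(gmm.aic(pixels), 1), round(gmm.bic(pixels), 1))
# the k with the lowest BIC (typically 2 here) is taken as the spot count
```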

  11. Analysis of the "naming game" with learning errors in communications.

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-16

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.

  12. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of k-mers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous k-mer may be frequently observed if it has few nucleotide differences with valid k-mers that have multiple occurrences in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of k-mers from their observed frequencies by analyzing the misread relationships among observed k-mers. We also propose a method to estimate the threshold useful for validating k-mers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors
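
    A minimal sketch of the frequency-thresholding step described above, using plain k-mer counting against a fixed cutoff; the repeat-aware modeling and position-dependent error handling that the paper actually contributes are not reproduced here.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count all k-mers observed across a collection of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspect_kmers(counts, threshold):
    """k-mers observed fewer than `threshold` times are flagged as likely errors."""
    return {kmer for kmer, c in counts.items() if c < threshold}

reads = ["ACGTACGTGG", "ACGTACGTGG", "ACGTACCTGG"]   # third read has one error
counts = kmer_counts(reads, k=5)
print(sorted(suspect_kmers(counts, threshold=2)))
```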

  13. Corrections for the combined effects of decay and dead time in live-timed counting of short-lived radionuclides

    International Nuclear Information System (INIS)

    Fitzgerald, R.

    2016-01-01

    Studies and calibrations of short-lived radionuclides, for example 15O, are of particular interest in nuclear medicine. Yet counting experiments on such species are vulnerable to an error due to the combined effect of decay and dead time. Separate decay corrections and dead-time corrections do not account for this issue. Usually counting data are decay-corrected to the start time of the count period, or else, instead of correcting the count rate, the mid-time of the measurement is used as the reference time. Correction factors are derived for both of those methods, considering both extending and non-extending dead time. Series approximations are derived here and the accuracy of those approximations is discussed. - Highlights: • Derived combined effects of decay and dead time. • Derived for counting systems with extending or non-extending dead times. • Derived series expansions for both midpoint and decay-to-start-time methods. • Useful for counting experiments with short-lived radionuclides. • Examples given for 15O, used in PET scanning.
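
    For orientation, a sketch of the plain decay-only correction of a count rate to the start of the counting interval; the combined decay-plus-dead-time factors and series expansions derived in the paper are not reproduced here. The half-life is that of 15O; the counts are hypothetical.

```python
import math

def decay_corrected_rate(counts, t_real, half_life):
    """Count rate referred to the start of the counting interval,
    assuming pure exponential decay and ignoring dead time."""
    lam = math.log(2) / half_life
    factor = lam * t_real / (1.0 - math.exp(-lam * t_real))
    return (counts / t_real) * factor

# hypothetical 15O measurement: 1e6 counts in a 300 s acquisition
half_life_15o = 122.24          # s
print(decay_corrected_rate(1.0e6, t_real=300.0, half_life=half_life_15o))
```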

  14. Fractional counts-the simulation of low probability events

    International Nuclear Information System (INIS)

    Coldwell, R.L.; Lasche, G.P.; Jadczyk, A.

    2001-01-01

    The code RobSim has been added to RobWin [1]. It simulates spectra resulting from gamma rays striking an array of detectors made up of different components. These are frequently used to set coincidence and anti-coincidence windows that decide whether individual events are part of the signal. The first problem addressed is the construction of the detector. Then, owing to the statistical nature of the responses of these elements, there is a randomness in the response that can be taken into account by including fractional counts in the output spectrum. This somewhat complicates the error analysis, as Poisson statistics are no longer applicable

  15. Development of low level alpha particle counting system

    International Nuclear Information System (INIS)

    Minobe, Masao; Kondo, Hiraku; Chinuki, Takashi; Hirano, Hiromichi

    1987-01-01

    Much attention has been paid to the trace analysis of uranium and thorium contained in the base material of LSI or VLSI devices, since the so-called ''soft error'' of the memory device was known to be due to alpha particles emitted from these radioactive elements. We have developed an apparatus to meet the need of estimating such very small quantities of U and Th, at the ppb level, by directly counting alpha particles using a gas-flow type proportional counter. This method requires no sophisticated analytical skill, and the accuracy of the result is satisfactory. The instrumentation and some applications of this apparatus are described. (author)

  16. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot, if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions tested with the χ² test confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.

  17. Error analysis of the phase-shifting technique when applied to shadow moire

    International Nuclear Information System (INIS)

    Han, Changwoon; Han, Bongtae

    2006-01-01

    An exact solution for the intensity distribution of shadow moiré fringes produced by a broad-spectrum light is presented. A mathematical study quantifies the errors in fractional fringe orders determined by the phase-shifting technique, and its validity is corroborated experimentally. The errors vary cyclically as the distance between the reference grating and the specimen increases. The amplitude of the maximum error is approximately 0.017 fringe, which defines the theoretical limit of the resolution enhancement offered by the phase-shifting technique

  18. Performance in population models for count data, part II: a new SAEM algorithm

    Science.gov (United States)

    Savic, Radojka; Lavielle, Marc

    2009-01-01

    Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795

  19. Enhanced coulomb counting method for estimating state-of-charge and state-of-health of lithium-ion batteries

    International Nuclear Information System (INIS)

    Ng, Kong Soon; Moo, Chin-Sien; Chen, Yi-Ping; Hsieh, Yao-Ching

    2009-01-01

    The coulomb counting method is expedient for state-of-charge (SOC) estimation of lithium-ion batteries with high charging and discharging efficiencies. The charging and discharging characteristics are investigated and reveal that the coulomb counting method is convenient and accurate for estimating the SOC of lithium-ion batteries. A smart estimation method based on coulomb counting is proposed to improve the estimation accuracy. The corrections are made by considering the charging and operating efficiencies. Furthermore, the state-of-health (SOH) is evaluated by the maximum releasable capacity. Through the experiments that emulate practical operations, the SOC estimation method is verified to demonstrate the effectiveness and accuracy.
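
    A minimal sketch of the underlying Coulomb-counting recursion, with a single efficiency factor standing in for the charging/operating-efficiency corrections the authors describe; the parameter values are illustrative, not the paper's.

```python
def update_soc(soc, current_a, dt_s, capacity_ah, efficiency=1.0):
    """One Coulomb-counting step.

    current_a > 0 while charging, < 0 while discharging;
    `efficiency` stands in for the charging/operating corrections."""
    delta_ah = efficiency * current_a * dt_s / 3600.0
    return min(max(soc + delta_ah / capacity_ah, 0.0), 1.0)

# discharge a 2.2 Ah cell at 1.1 A for one hour, sampled every second
soc = 0.9                       # initial SoC, e.g. from the open-circuit voltage
for _ in range(3600):
    soc = update_soc(soc, current_a=-1.1, dt_s=1.0, capacity_ah=2.2)
print(round(soc, 3))            # ~0.4 after removing half the rated capacity
```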

  20. Regression analysis of mixed recurrent-event and panel-count data.

    Science.gov (United States)

    Zhu, Liang; Tong, Xinwei; Sun, Jianguo; Chen, Manhua; Srivastava, Deo Kumar; Leisenring, Wendy; Robison, Leslie L

    2014-07-01

    In event history studies concerning recurrent events, two types of data have been extensively discussed. One is recurrent-event data (Cook and Lawless, 2007. The Analysis of Recurrent Event Data. New York: Springer), and the other is panel-count data (Zhao and others, 2010. Nonparametric inference based on panel-count data. Test 20, 1-42). In the former case, all study subjects are monitored continuously; thus, complete information is available for the underlying recurrent-event processes of interest. In the latter case, study subjects are monitored periodically; thus, only incomplete information is available for the processes of interest. In reality, however, a third type of data could occur in which some study subjects are monitored continuously, but others are monitored periodically. When this occurs, we have mixed recurrent-event and panel-count data. This paper discusses regression analysis of such mixed data and presents two estimation procedures for the problem. One is a maximum likelihood estimation procedure, and the other is an estimating equation procedure. The asymptotic properties of both resulting estimators of the regression parameters are established. Also, the methods are applied to a set of mixed recurrent-event and panel-count data that arose from a Childhood Cancer Survivor Study and motivated this investigation.

  1. Rectangular maximum-volume submatrices and their applications

    KAUST Repository

    Mikhalev, Aleksandr; Oseledets, I.V.

    2017-01-01

    We introduce a definition of the volume of a general rectangular matrix, which is equivalent to an absolute value of the determinant for square matrices. We generalize results of square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with an optimal experimental design and provide estimates for a growth of coefficients and an approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.

  2. Rectangular maximum-volume submatrices and their applications

    KAUST Repository

    Mikhalev, Aleksandr

    2017-10-18

    We introduce a definition of the volume of a general rectangular matrix, which is equivalent to an absolute value of the determinant for square matrices. We generalize results of square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with an optimal experimental design and provide estimates for a growth of coefficients and an approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.
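
    A short sketch of one common definition of the rectangular volume consistent with the abstract's description, computed as sqrt(det(A^T A)) for a tall matrix, which reduces to |det(A)| when A is square; the maxvol-type submatrix search itself is not reproduced here.

```python
import numpy as np

def rect_volume(a):
    """Volume of a tall rectangular matrix: sqrt(det(A^T A)).
    For a square matrix this equals abs(det(A))."""
    a = np.asarray(a, dtype=float)
    return np.sqrt(np.linalg.det(a.T @ a))

rng = np.random.default_rng(0)
square = rng.standard_normal((4, 4))
tall = rng.standard_normal((6, 3))

print(np.isclose(rect_volume(square), abs(np.linalg.det(square))))  # True
print(rect_volume(tall))   # product of the singular values of the 6x3 matrix
```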

  3. Temperature and SAR measurement errors in the evaluation of metallic linear structures heating during MRI using fluoroptic® probes

    Energy Technology Data Exchange (ETDEWEB)

    Mattei, E [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Triventi, M [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Calcagnini, G [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Censi, F [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Kainz, W [Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD (United States); Bassen, H I [Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD (United States); Bartolini, P [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy)

    2007-03-21

    The purpose of this work is to evaluate the error associated with temperature and SAR measurements using fluoroptic® temperature probes on pacemaker (PM) leads during magnetic resonance imaging (MRI). We performed temperature measurements on pacemaker leads, excited with a 25, 64, and 128 MHz current. The PM lead tip heating was measured with a fluoroptic® thermometer (Luxtron, Model 3100, USA). Different contact configurations between the pigmented portion of the temperature probe and the PM lead tip were investigated to find the contact position minimizing the temperature and SAR underestimation. A computer model was used to estimate the error made by fluoroptic® probes in temperature and SAR measurement. The transversal contact of the pigmented portion of the temperature probe and the PM lead tip minimizes the underestimation for temperature and SAR. This contact position also has the lowest temperature and SAR error. For other contact positions, the maximum temperature error can be as high as -45%, whereas the maximum SAR error can be as high as -54%. MRI heating evaluations with temperature probes should use a contact position minimizing the maximum error, need to be accompanied by a thorough uncertainty budget, and the temperature and SAR errors should be specified.

  4. Effects of the thickness of gold deposited on a source backing film in the 4πβ-counting

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Yoshida, Makoto; Watanabe, Tamaki

    1976-01-01

    A gold-deposited VYNS film has generally been used as a source backing in 4πβ-counting to reduce the absorption of β-rays. The thickness of the film with the gold is usually a few times thicker than that of the VYNS film itself. However, because the appropriate thickness of gold has not yet been determined, the effects of gold thickness on electrical resistivity, plateau characteristics and β-ray counting efficiency were studied. 198Au (960 keV), 60Co (315 keV), 59Fe (273 keV) and 95Nb (160 keV), prepared as sources by the aluminium chloride treatment method, were used. Gold was evaporated at a deposition rate of 1 - 5 μg/cm²/min at a pressure of less than 1 × 10⁻⁵ Torr. The results show that gold deposition on the side opposite the source, performed after source preparation, is essential. In this case, a maximum counting efficiency is obtained at a mean thickness of 2 μg/cm². When gold is deposited only on the same side as the source, a maximum counting efficiency, which is less than that in the former case, is obtained at a mean thickness of 20 μg/cm². (Evans, J.)

  5. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions.
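
    For comparison, a generic quantile-inversion sketch of a negative binomial confidence interval for the mean given a single observed count; this is not the authors' Stirling-based approximation, and the dispersion ('size') parameter below is an assumed value.

```python
from scipy.stats import nbinom
from scipy.optimize import brentq

def nb_mean_ci(x, size, alpha=0.05, mu_max=1e6):
    """Exact-style CI for the mean of a negative binomial count,
    given a single observed count x and a known dispersion ('size') parameter."""
    def p_of(mu):                      # scipy parametrization: mean = size*(1-p)/p
        return size / (size + mu)

    upper = brentq(lambda mu: nbinom.cdf(x, size, p_of(mu)) - alpha / 2,
                   1e-9, mu_max)
    if x == 0:
        lower = 0.0
    else:
        lower = brentq(lambda mu: nbinom.sf(x - 1, size, p_of(mu)) - alpha / 2,
                       1e-9, mu_max)
    return lower, upper

# a single fiber count of 25 with an assumed dispersion parameter of 10
print(nb_mean_ci(25, size=10))
```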

  6. Artificial neural network-aided image analysis system for cell counting.

    Science.gov (United States)

    Sjöström, P J; Frydel, B R; Wahlberg, L U

    1999-05-01

    In histological preparations containing debris and synthetic materials, it is difficult to automate cell counting using standard image analysis tools, i.e., systems that rely on boundary contours, histogram thresholding, etc. In an attempt to mimic manual cell recognition, an automated cell counter was constructed using a combination of artificial intelligence and standard image analysis methods. Artificial neural network (ANN) methods were applied on digitized microscopy fields without pre-ANN feature extraction. A three-layer feed-forward network with extensive weight sharing in the first hidden layer was employed and trained on 1,830 examples using the error back-propagation algorithm on a Power Macintosh 7300/180 desktop computer. The optimal number of hidden neurons was determined and the trained system was validated by comparison with blinded human counts. System performance at 50x and 100x magnification was evaluated. The correlation index at 100x magnification neared person-to-person variability, while 50x magnification was not useful. The system was approximately six times faster than an experienced human. ANN-based automated cell counting in noisy histological preparations is feasible. Consistent histology and computer power are crucial for system performance. The system provides several benefits, such as speed of analysis and consistency, and frees up personnel for other tasks.

  7. Impact and quantification of the sources of error in DNA pooling designs.

    Science.gov (United States)

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.

  8. A Lossy Counting-Based State of Charge Estimation Method and Its Application to Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2015-12-01

    Estimating the residual capacity or state-of-charge (SoC) of commercial batteries on-line, without destroying them or interrupting the power supply, is quite a challenging task for electric vehicle (EV) designers. Many Coulomb counting-based methods have been used to calculate the remaining capacity in EV batteries or other portable devices. The main disadvantages of these methods are the cumulative error and the time-varying Coulombic efficiency, which are greatly influenced by the operating state (SoC, temperature and current). To deal with this problem, we propose a lossy counting-based Coulomb counting method for estimating the available capacity or SoC. The initial capacity of the tested battery is obtained from the open circuit voltage (OCV). The charging/discharging efficiencies, used for compensating the Coulombic losses, are calculated by the lossy counting-based method. The measurement drift, resulting from the current sensor, is amended with the distorted Coulombic efficiency matrix. Simulations and experimental results show that the proposed method is both effective and convenient.

  9. Analysis of overdispersed count data: application to the Human Papillomavirus Infection in Men (HIM) Study.

    Science.gov (United States)

    Lee, J-H; Han, G; Fulp, W J; Giuliano, A R

    2012-06-01

    The Poisson model can be applied to the count of events occurring within a specific time period. The main feature of the Poisson model is the assumption that the mean and variance of the count data are equal. However, this equal mean-variance relationship rarely occurs in observational data. In most cases, the observed variance is larger than the assumed variance, which is called overdispersion. Further, when the observed data involve excessive zero counts, the problem of overdispersion results in underestimating the variance of the estimated parameter, and thus produces a misleading conclusion. We illustrated the use of four models for overdispersed count data that may be attributed to excessive zeros. These are Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial models. The example data in this article deal with the number of incidents involving human papillomavirus infection. The four models resulted in differing statistical inferences. The Poisson model, which is widely used in epidemiology research, underestimated the standard errors and overstated the significance of some covariates.
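
    A minimal sketch contrasting Poisson and negative binomial GLM fits on synthetic overdispersed counts; the zero-inflated variants discussed in the record are omitted for brevity, and the data-generating values are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# synthetic overdispersed counts: negative binomial draws, one binary covariate
n = 500
x = rng.integers(0, 2, size=n)
mu = np.exp(0.5 + 0.8 * x)
counts = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # var = mu + mu**2 / 2

X = sm.add_constant(x.astype(float))
poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

# the Poisson model understates the standard errors on overdispersed data
print(poisson_fit.bse)
print(negbin_fit.bse)
```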

  10. Standardization of 241Am by digital coincidence counting, liquid scintillation counting and defined solid angle counting

    International Nuclear Information System (INIS)

    Balpardo, C.; Capoulat, M.E.; Rodrigues, D.; Arenillas, P.

    2010-01-01

    The nuclide 241Am decays by alpha emission to 237Np. Most of the decays (84.6%) populate the excited level of 237Np with an energy of 59.54 keV. Digital coincidence counting was applied to standardize a solution of 241Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter, and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined solid angle counting taking into account activity inhomogeneity in the active deposit. The results show consistency between the three methods within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods

  11. Benjamin Thompson, Count Rumford Count Rumford on the nature of heat

    CERN Document Server

    Brown, Sanborn C

    1967-01-01

    Men of Physics: Benjamin Thompson - Count Rumford: Count Rumford on the Nature of Heat covers the significant contributions of Count Rumford in the field of physics. Count Rumford was born with the name Benjamin Thompson on March 23, 1753, in Woburn, Massachusetts. This book is composed of two parts encompassing 11 chapters, and begins with a presentation of Benjamin Thompson's biography and his interest in physics, particularly as an advocate of an "anti-caloric" theory of heat. The subsequent chapters are devoted to his many discoveries that profoundly affected the physical thought

  12. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    International Nuclear Information System (INIS)

    Jian, Y; Carson, R E; Planeta, B

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. Maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated for images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. (paper)

  13. Fast radio burst event rate counts - I. Interpreting the observations

    Science.gov (United States)

    Macquart, J.-P.; Ekers, R. D.

    2018-02-01

    The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α), is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6 (+0.7, -1.3). Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both the normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
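
    A sketch of the standard maximum-likelihood estimator for a cumulative power-law index above a completeness limit, N(>F) ∝ F^α; this is the generic estimator rather than necessarily the exact likelihood used in the paper, and the fluences below are synthetic.

```python
import numpy as np

def ml_powerlaw_index(fluences, f_min):
    """ML estimate of alpha in N(>F) ∝ F**alpha for F >= f_min (alpha < 0)."""
    f = np.asarray(fluences, dtype=float)
    f = f[f >= f_min]
    return -len(f) / np.sum(np.log(f / f_min))

# synthetic fluences drawn from N(>F) ∝ F**(-1.5) above a 2 Jy ms completeness limit
rng = np.random.default_rng(3)
f_min, alpha_true = 2.0, -1.5
fluences = f_min * rng.uniform(size=1000) ** (1.0 / alpha_true)
print(ml_powerlaw_index(fluences, f_min))   # close to -1.5
```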

  14. MEASURING PRIMORDIAL NON-GAUSSIANITY THROUGH WEAK-LENSING PEAK COUNTS

    International Nuclear Information System (INIS)

    Marian, Laura; Hilbert, Stefan; Smith, Robert E.; Schneider, Peter; Desjacques, Vincent

    2011-01-01

    We explore the possibility of detecting primordial non-Gaussianity of the local type using weak-lensing peak counts. We measure the peak abundance in sets of simulated weak-lensing maps corresponding to three models: fNL = 0, -100, and 100. Using survey specifications similar to those of EUCLID and without assuming any knowledge of the lens and source redshifts, we find the peak functions of the non-Gaussian models with fNL = ±100 to differ by up to 15% from the Gaussian peak function at the high-mass end. For the assumed survey parameters, the probability of fitting an fNL = 0 peak function to the fNL = ±100 peak functions is less than 0.1%. Assuming the other cosmological parameters are known, fNL can be measured with an error ΔfNL ∼ 13. It is therefore possible that future weak-lensing surveys like EUCLID and LSST may detect primordial non-Gaussianity from the abundance of peak counts, and provide information complementary to that obtained from the cosmic microwave background.

  15. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.

  16. Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008

    Directory of Open Access Journals (Sweden)

    S. Federico

    2011-02-01

    Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high horizontal resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).

    Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields.

    Two case studies, the first one with a low (less than the 10th percentile) root mean square error (RMSE) in the OI analysis, the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered
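
    At each grid point, the cumulative verification statistics described above reduce to the bias and root mean square error of the forecast against the verifying analysis. A minimal sketch, assuming hypothetical 2-D temperature fields on a common grid (not the RAMS/OI data):

```python
import numpy as np

def verification_scores(forecast, analysis, mask=None):
    """Grid-point verification of a temperature forecast against a gridded
    analysis: bias (mean error) and RMSE.  Inputs are hypothetical 2-D
    arrays on the same grid; `mask` optionally selects the points scored."""
    err = np.asarray(forecast, dtype=float) - np.asarray(analysis, dtype=float)
    if mask is not None:
        err = err[mask]
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    return bias, rmse

# usage with synthetic fields standing in for model output and analysis
rng = np.random.default_rng(2)
analysis = 20.0 + rng.standard_normal((100, 100))
forecast = analysis + 0.5 + 1.5 * rng.standard_normal((100, 100))
print(verification_scores(forecast, analysis))
```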

  17. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    Energy Technology Data Exchange (ETDEWEB)

    Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
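
    The ROC analysis referred to above can be reproduced in outline with standard tools: score each delivery by its maximum relative IQM deviation, label deliveries as unmodified or error-injected, and read a tolerance off the ROC curve. The sketch below uses simulated deviations and scikit-learn; the numbers are illustrative, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: maximum relative IQM deviation (%) per delivery, for
# unmodified plans (label 0) and plans with perturbed MLC positions (label 1).
rng = np.random.default_rng(3)
dev_ok  = np.abs(rng.normal(0.0, 1.0, 200))   # baseline reproducibility
dev_err = np.abs(rng.normal(3.0, 1.5, 200))   # perturbed deliveries
scores = np.concatenate([dev_ok, dev_err])
labels = np.concatenate([np.zeros(200), np.ones(200)])

auc = roc_auc_score(labels, scores)
fpr, tpr, thr = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)   # threshold maximizing sensitivity - FPR (Youden J)
print(f"AUC = {auc:.3f}, suggested threshold = {thr[best]:.2f}%")
```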

  18. Statistics and error considerations at the application of SSND T-technique in radon measurement

    International Nuclear Information System (INIS)

    Jonsson, G.

    1993-01-01

    Plastic films are used for the detection of alpha particles from disintegrating radon and radon daughter nuclei. After etching there are tracks (cones) or holes in the film as a result of the exposure. The step from a counted number of tracks/holes per surface unit of the film to a reliable value of the radon and radon daughter level is surrounded by statistical considerations of different nature. Some of them are the number of counted tracks, the length of the time of exposure, the season of the time of exposure, the etching technique and the method of counting the tracks or holes. The number of background tracks of an unexposed film increases the error of the measured radon level. Some of the mentioned effects of statistical nature will be discussed in the report. (Author)
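
    For the purely Poisson part of the error budget mentioned above, the uncertainty on a net track density follows from quadrature addition of the counted and background track numbers. A minimal sketch with hypothetical counts:

```python
import math

def net_track_density(n_tracks, n_background, area_cm2):
    """Net track density and its 1-sigma counting uncertainty, assuming both
    the exposed-film count and the unexposed (background) count are Poisson
    distributed, so their uncertainties add in quadrature."""
    net = (n_tracks - n_background) / area_cm2
    sigma = math.sqrt(n_tracks + n_background) / area_cm2
    return net, sigma

# usage: 400 tracks counted, 25 background tracks, 1 cm^2 of film scanned
net, sigma = net_track_density(400, 25, 1.0)
print(f"net density = {net:.0f} +/- {sigma:.0f} tracks/cm^2 "
      f"({100 * sigma / net:.1f} % relative error)")
```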

  19. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

    Following the previous work, counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the ''LIST'' mode in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the ''HISTOGRAM'' mode in which image data were stored in a core memory as digital images and then the images were transferred to magnetic disk or tape by the signal of frame timing. Firstly, the counting-rates stored in the buffer memory were measured as a function of display event-rates of the scintillation camera for the two modes. For both modes, the stored counting-rates (M) were expressed by the following formula: M = N(1 - Nτ), where N was the display event-rate of the camera and τ was the resolving time including analog-to-digital conversion time and memory cycle time. The resolving time for each mode may have been different, but it was about 10 μsec for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory such as magnetic disk or tape was considered for the two modes. For the ''LIST'' mode, the maximum value of stored counting-rates from the camera was expressed in terms of size of the buffer memory, access time and data transfer-rate of the external memory. For the ''HISTOGRAM'' mode, the minimum time of the frame was determined by size of the buffer memory, access time and transfer rate of the external memory. In our system, the maximum value of stored counting-rates was about 17,000 counts/sec with a buffer size of 2,000 words, and the minimum frame time was about 130 msec with a buffer size of 1024 words. These values agree well with the calculated ones. From the author's present analysis, design of the camera-computer system becomes possible for quantitative dynamic imaging and future improvements are suggested. (author)
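
    The stored counting rate follows the quoted dead-time relation M = N(1 - Nτ), which can also be inverted to recover the display event rate from the stored rate. The sketch below uses the ~10 μs resolving time quoted in the abstract; it is an illustration of the formula, not a model of the TOSBAC hardware.

```python
import math

TAU = 10e-6  # resolving time (s) quoted for the camera-computer system

def stored_rate(display_rate, tau=TAU):
    """Counting rate actually stored, M = N * (1 - N * tau)."""
    return display_rate * (1.0 - display_rate * tau)

def display_rate_from_stored(stored, tau=TAU):
    """Invert M = N(1 - N*tau), taking the smaller root (N*tau < 0.5)."""
    return (1.0 - math.sqrt(1.0 - 4.0 * tau * stored)) / (2.0 * tau)

for n in (5e3, 1e4, 2e4):   # camera display event rates (counts/s)
    m = stored_rate(n)
    print(f"N = {n:8.0f} -> stored M = {m:8.0f} "
          f"(loss {100 * (n - m) / n:.1f} %), recovered N = "
          f"{display_rate_from_stored(m):.0f}")
```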

  20. ChromAIX2: A large area, high count-rate energy-resolving photon counting ASIC for a Spectral CT Prototype

    Science.gov (United States)

    Steadman, Roger; Herrmann, Christoph; Livne, Amir

    2017-08-01

    Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge Imaging [2]. Such detectors are based on direct converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of Spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count-rate), the ChromAIX ASIC has been previously reported showing 13.5 Mcps/pixel (150 Mcps/mm2 incident) [3]. The ChromAIX has been improved to offer the possibility of a large area coverage detector, and increased overall performance. The new ASIC is called ChromAIX2, and delivers count-rates exceeding 15 Mcps/pixel with an rms-noise performance of approximately 260 e-. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tile-able on three of its sides. The pixel topology consists of a two-stage amplifier (CSA and Shaper) and a number of test features allowing the ASIC to be thoroughly characterized without a sensor. A total of 5 independent thresholds are also available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low frequency drifts) which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAIX2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors and X-rays and radioactive sources.
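
    The distinction between incident and Poissonian observed count rate is commonly described with a paralyzable dead-time model, observed = incident · exp(-incident · τ). The sketch below is generic: the 20 ns dead time is an assumed illustrative value, not a ChromAIX2 specification.

```python
import numpy as np
from scipy.optimize import brentq

def observed_rate(incident, tau):
    """Paralyzable dead-time model: observed = incident * exp(-incident * tau)."""
    return incident * np.exp(-incident * tau)

def incident_rate(observed, tau):
    """Invert the model on its low-rate branch (incident * tau < 1)."""
    return brentq(lambda n: observed_rate(n, tau) - observed, 0.0, 1.0 / tau)

tau = 20e-9   # assumed per-pixel dead time (s), illustrative only
for n_in in (5e6, 20e6, 40e6):                     # incident counts/s per pixel
    n_obs = observed_rate(n_in, tau)
    print(f"incident {n_in/1e6:5.1f} Mcps -> observed {n_obs/1e6:5.1f} Mcps, "
          f"recovered {incident_rate(n_obs, tau)/1e6:5.1f} Mcps")
```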

  1. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  2. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with the modifications of maximum likelihood, moments and percentile estimators of the two parameter Power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
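
    The Monte Carlo procedure for judging estimators by bias and mean square error can be sketched as follows for the plain (unmodified) maximum likelihood estimators of the power function distribution f(x) = γ x^(γ-1)/θ^γ, 0 < x < θ; the modified estimators of the paper are not reproduced here.

```python
import numpy as np

def simulate_mle(gamma, theta, n, reps, seed=0):
    """Monte Carlo sampling behaviour (bias, MSE) of the ML estimators of the
    two-parameter power function distribution.  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    g_hat = np.empty(reps)
    t_hat = np.empty(reps)
    for r in range(reps):
        x = theta * rng.uniform(size=n) ** (1.0 / gamma)  # inverse-CDF draw
        t_hat[r] = x.max()                                # MLE of theta
        g_hat[r] = n / np.sum(np.log(t_hat[r] / x))       # MLE of gamma
    for name, est, true in (("gamma", g_hat, gamma), ("theta", t_hat, theta)):
        bias = est.mean() - true
        mse = np.mean((est - true) ** 2)
        print(f"{name}: bias = {bias:+.4f}, MSE = {mse:.4f}")

simulate_mle(gamma=2.0, theta=1.0, n=50, reps=5000)
```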

  4. Counting cormorants

    DEFF Research Database (Denmark)

    Bregnballe, Thomas; Carss, David N; Lorentsen, Svein-Håkon

    2013-01-01

    This chapter focuses on Cormorant population counts for both summer (i.e. breeding) and winter (i.e. migration, winter roosts) seasons. It also explains differences in the data collected from undertaking ‘day’ versus ‘roost’ counts, gives some definitions of the term ‘numbers’, and presents two...

  5. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  6. Short communication: Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count.

    Science.gov (United States)

    Koop, G; Dik, N; Nielen, M; Lipman, L J A

    2010-06-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC (Fossomatic 5000, Foss, Hillerød, Denmark) and TBC (BactoScan FC 150, Foss) were measured. Staphylococcal count was correlated to SCC (r=0.40), TBC (r=0.51), and SPC (r=0.53). Coliform count was correlated to TBC (r=0.33), but not to any of the other variables. Staphylococcus aureus did not correlate to SCC. The contribution of the staphylococcal count to the SPC was 31%, whereas the coliform count comprised only 1% of the SPC. The agreement of the repeated measurements was low. This study indicates that staphylococci in goat bulk milk are related to SCC and make a significant contribution to SPC. Because of the high variation in bacterial counts, repeated sampling is necessary to draw valid conclusions from bulk milk culturing. 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.
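
    For a separable Hamiltonian, the Symplectic Euler scheme referred to above updates the momentum with the old position and the position with the new momentum. The sketch below shows the integrator on a harmonic oscillator only; it does not implement the paper's value-function error representation or the adaptive algorithm.

```python
import numpy as np

def symplectic_euler(dV, q0, p0, dt, n_steps):
    """Symplectic (semi-implicit) Euler for H(q, p) = p**2/2 + V(q):
    update p using the old q, then q using the new p."""
    q, p = float(q0), float(p0)
    traj = [(0.0, q, p)]
    for k in range(1, n_steps + 1):
        p = p - dt * dV(q)       # p_{k+1} = p_k - dt * V'(q_k)
        q = q + dt * p           # q_{k+1} = q_k + dt * p_{k+1}
        traj.append((k * dt, q, p))
    return np.array(traj)

# usage: harmonic oscillator V(q) = q^2/2; the energy stays bounded (no drift)
traj = symplectic_euler(lambda q: q, q0=1.0, p0=0.0, dt=0.1, n_steps=200)
energy = 0.5 * traj[:, 2] ** 2 + 0.5 * traj[:, 1] ** 2
print(f"energy variation over the run: {energy.max() - energy.min():.4f}")
```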

  8. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Vol. 210, January 2017 (2017), pp. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords: finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  10. Preparation of source mounts for 4π counting

    International Nuclear Information System (INIS)

    Johnson, E.P.

    1991-01-01

    The 4πβ/γ counter in the ANSTO radioisotope standards laboratory at Lucas Heights constitutes part of the Australian national standard for radioactivity. Sources to be measured in the counter must be mounted on a substrate which is strong enough to withstand careful handling and transport. The substrate must also be electrically conducting to minimise counting errors caused by charging of the source, and it must have very low superficial density so that little or none of the radiation is absorbed. The entire process of fabrication of VYNS films, coating them with gold/palladium and transferring them to source mount rings, as carried out in the radioisotope standards laboratory, is documented. 3 refs., 2 tabs., 6 figs

  11. Analysis of error functions in speckle shearing interferometry

    International Nuclear Information System (INIS)

    Wan Saffiey Wan Abdullah

    2001-01-01

    Electronic Speckle Pattern Shearing Interferometry (ESPSI), or shearography, has successfully been used in NDT for slope measurement (∂w/∂x and/or ∂w/∂y), while strain measurement (∂u/∂x, ∂v/∂y, ∂u/∂y and ∂v/∂x) is still under investigation. This method is well accepted in industrial applications, especially in the aerospace industry. Demand for this method is increasing due to the complexity of the test materials and objects. ESPSI has been successfully applied in NDT only for qualitative measurement, whilst quantitative measurement is the current aim of many manufacturers. Industrial use of such equipment is typically carried out without considering the errors arising from numerous sources, including wavefront divergence. The majority of commercial systems are operated with diverging object illumination wavefronts without considering the curvature of the object illumination wavefront or the object geometry when calculating the interferometer fringe function and quantifying data. This thesis reports a novel approach to quantified maximum phase change difference analysis for derivative out-of-plane (OOP) and in-plane (IP) cases that arise from a divergent illumination wavefront compared to collimated illumination. The theoretical maximum phase difference is formulated in terms of the dependent variables, these being the object distance, illuminated diameter, center of the illuminated area, camera distance and illumination angle. The relative maximum phase change difference that may contribute to the measurement error within the scope of this research is defined as the difference between the maximum phase difference measured with a divergent illumination wavefront and that of a collimated illumination wavefront, taken at the edge of the illuminated area. Experimental validation using test objects for derivative out-of-plane and derivative in-plane deformation, using a single illumination wavefront

  12. Visual error augmentation enhances learning in three dimensions.

    Science.gov (United States)

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

    Because recent preliminary evidence points to the use of Error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip all subjects quickly returned to baseline within 6 trials.

  13. Visual error augmentation enhances learning in three dimensions

    Directory of Open Access Journals (Sweden)

    Huang Felix

    2011-09-01

    Full Text Available Because recent preliminary evidence points to the use of Error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip all subjects quickly returned to baseline within 6 trials.

  14. Conically scanning lidar error in complex terrain

    Directory of Open Access Journals (Sweden)

    Ferhat Bingöl

    2009-05-01

    Full Text Available Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10 %. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.

  15. Characteristic performance evaluation of a photon counting Si strip detector for low dose spectral breast CT imaging

    Science.gov (United States)

    Cho, Hyo-Min; Barber, William C.; Ding, Huanjun; Iwanczyk, Jan S.; Molloi, Sabee

    2014-01-01

    Purpose: The possible clinical applications which can be performed using a newly developed detector depend on the detector's characteristic performance in a number of metrics including the dynamic range, resolution, uniformity, and stability. The authors have evaluated a prototype energy resolved fast photon counting x-ray detector based on a silicon (Si) strip sensor used in an edge-on geometry with an application specific integrated circuit to record the number of x-rays and their energies at high flux and fast frame rates. The investigated detector was integrated with a dedicated breast spectral computed tomography (CT) system to make use of the detector's high spatial and energy resolution and low noise performance under conditions suitable for clinical breast imaging. The aim of this article is to investigate the intrinsic characteristics of the detector, in terms of maximum output count rate, spatial and energy resolution, and noise performance of the imaging system. Methods: The maximum output count rate was obtained with a 50 W x-ray tube with a maximum continuous output of 50 kVp at 1.0 mA. A 109Cd source, with a characteristic x-ray peak at 22 keV from Ag, was used to measure the energy resolution of the detector. The axial plane modulation transfer function (MTF) was measured using a 67 μm diameter tungsten wire. The two-dimensional (2D) noise power spectrum (NPS) was measured using flat field images and noise equivalent quanta (NEQ) were calculated using the MTF and NPS results. The image quality parameters were studied as a function of various radiation doses and reconstruction filters. The one-dimensional (1D) NPS was used to investigate the effect of electronic noise elimination by varying the minimum energy threshold. Results: A maximum output count rate of 100 million counts per second per square millimeter (cps/mm2) has been obtained (1 million cps per 100 × 100 μm pixel). The electrical noise floor was less than 4 keV. The energy resolution

  16. Statistical analysis of lifetime determinations in the presence of large errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1984-01-01

    The lifetimes of the new particles are very short, and most of the experiments which measure decay times are subject to measurement errors which are not negligible compared with the decay times themselves. Bartlett has analyzed the problem of lifetime estimation when the error on each event is small or zero. For the case of non-negligible measurement errors, σ_i, on each event, we are interested in a few basic questions: How well does maximum likelihood work? That is, (a) are the errors reasonable, (b) is the answer unbiased, and (c) are there other estimators with superior performance? We concentrate on the results of our Monte Carlo investigation for the case in which the experiment is sensitive over all times, -∞ < x_i < ∞
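
    When each measured decay time carries a non-negligible Gaussian resolution σ_i, the likelihood per event is an exponential convolved with a Gaussian, available in SciPy as the exponnorm distribution. A minimal maximum-likelihood sketch with simulated data, not the Monte Carlo study of the paper:

```python
import numpy as np
from scipy.stats import exponnorm
from scipy.optimize import minimize_scalar

def lifetime_mle(x, sigma):
    """ML estimate of the lifetime tau from measured decay times x with
    per-event Gaussian resolutions sigma: the smeared model is an
    exponential convolved with a Gaussian (exponnorm with K = tau/sigma)."""
    x = np.asarray(x, dtype=float)
    sigma = np.asarray(sigma, dtype=float)

    def nll(tau):
        return -np.sum(exponnorm.logpdf(x, K=tau / sigma, loc=0.0, scale=sigma))

    return minimize_scalar(nll, bounds=(1e-3, 10.0), method="bounded").x

# toy data: true tau = 1.0 (arbitrary units), resolutions comparable to tau
rng = np.random.default_rng(4)
n = 2000
sig = np.full(n, 0.8)
x = rng.exponential(1.0, n) + rng.normal(0.0, sig)
print(f"tau_hat = {lifetime_mle(x, sig):.3f}")
```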

  17. Error Immune Logic for Low-Power Probabilistic Computing

    Directory of Open Access Journals (Sweden)

    Bo Marr

    2010-01-01

    design for the maximum amount of energy savings per a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given verifying the ultra-low power, probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.

  18. A new method of quench monitoring in liquid scintillation counting

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1978-01-01

    The quench level of different liquid scintillation counting samples is measured by comparing the responses (pulse heights) produced by the same energy electrons in each sample. The electrons utilized in the measurements are those of the maximum energy (Esub(max)) which are produced by the single Compton scattering process for the same energy gamma-rays in each sample. The Esub(max) response produced in any sample is related to the Esub(max) response produced in an unquenched, sealed standard. The difference in response on a logarithm response scale is defined as the ''H Number''. The H number is related to the counting efficiency of the desired radionuclide by measurement of a set of standards of known amounts of the radionuclide and different amounts of quench (standard quench curve). The concept of the H number has been shown to be theoretically valid. Based upon this proof, the features of the H number concept as embodied in the Beckman LS-8000 Series Liquid Scintillation Systems have been demonstrated. It has been shown that one H number is unique; it provides a method of instrument calibration and wide dynamic quench range measurements. Further, it has been demonstrated that the H number concept provides a universal quench parameter. Counting efficiency vs. H number plots are repeatable within the statistical limits of +-1% counting efficiency. By the use of the H number concept a very accurate method of automatic quench compensation (A.Q.C.) is possible. (T.G.)
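
    In practice the H number is used through a standard quench curve: counting efficiency is measured for a set of quenched standards, fitted against H number, and then interpolated for unknown samples. The standards and fit below are hypothetical placeholders, not Beckman calibration data.

```python
import numpy as np

# Hypothetical quench-standard set: H number vs measured counting efficiency
h_std   = np.array([ 50., 100., 150., 200., 250., 300.])
eff_std = np.array([0.95, 0.90, 0.82, 0.71, 0.58, 0.44])   # counts per decay

# Standard quench curve: a low-order polynomial fit of efficiency vs H number
coeff = np.polyfit(h_std, eff_std, deg=2)

def activity_dpm(cpm_net, h_number):
    """Convert a net count rate (CPM) to activity (DPM) by interpolating the
    counting efficiency from the standard quench curve at the sample's H#."""
    eff = np.polyval(coeff, h_number)
    return cpm_net / eff

print(f"{activity_dpm(12000.0, 180.0):.0f} DPM")   # a quenched sample example
```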

  19. Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count

    NARCIS (Netherlands)

    Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.

    2010-01-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms,

  20. Reduction in sperm count and increase in abnormal sperm in the mouse following x-irradiation or injection of 22Na

    International Nuclear Information System (INIS)

    Harrison, A.; Moore, P.C.

    1980-01-01

    The accumulated reduction in sperm count and increase in the number of aberrant sperm in the mouse were used to compare acute X-irradiation with protracted radiation from an injection of 22Na. The effects of acute and protracted radiation per unit of absorbed dose on sperm count were similar, but in respect of numbers of abnormal sperm the 22Na induced a significantly greater maximum than X-rays. (author)

  1. Accommodating error analysis in comparison and clustering of molecular fingerprints.

    Science.gov (United States)

    Salamon, H; Segal, M R; Ponce de Leon, A; Small, P M

    1998-01-01

    Molecular epidemiologic studies of infectious diseases rely on pathogen genotype comparisons, which usually yield patterns comprising sets of DNA fragments (DNA fingerprints). We use a highly developed genotyping system, IS6110-based restriction fragment length polymorphism analysis of Mycobacterium tuberculosis, to develop a computational method that automates comparison of large numbers of fingerprints. Because error in fragment length measurements is proportional to fragment length and is positively correlated for fragments within a lane, an align-and-count method that compensates for relative scaling of lanes reliably counts matching fragments between lanes. Results of a two-step method we developed to cluster identical fingerprints agree closely with 5 years of computer-assisted visual matching among 1,335 M. tuberculosis fingerprints. Fully documented and validated methods of automated comparison and clustering will greatly expand the scope of molecular epidemiology.
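
    The align-and-count idea can be sketched as follows: estimate a single lane-to-lane scale factor, then count fragments that agree within a tolerance proportional to fragment length (reflecting the length-proportional measurement error noted above). The tolerance and fragment lists below are illustrative, not the published calibration.

```python
import numpy as np

def align_and_count(lane_a, lane_b, rel_tol=0.015):
    """Count matching restriction fragments between two lanes after
    compensating for the relative scaling of the lanes (gel distortion)."""
    a = np.sort(np.asarray(lane_a, dtype=float))
    b = np.sort(np.asarray(lane_b, dtype=float))
    if len(a) == len(b):
        scale = np.dot(a, b) / np.dot(b, b)      # least-squares lane scale
    else:
        scale = np.median(a) / np.median(b)      # fallback for unequal counts
    b_scaled = scale * b
    matches = sum(np.any(np.abs(b_scaled - fa) <= rel_tol * fa) for fa in a)
    return matches, scale

lane_a = [1200, 1850, 2400, 3100, 5200]          # fragment lengths (bp)
lane_b = [1225, 1880, 2450, 3160, 5300]          # same strain, slight gel scaling
m, s = align_and_count(lane_a, lane_b)
print(f"matching fragments: {m} of {len(lane_a)} (lane scale {s:.3f})")
```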

  2. Novel Photon-Counting Detectors for Free-Space Communication

    Science.gov (United States)

    Krainak, Michael A.; Yang, Guan; Sun, Xiaoli; Lu, Wei; Merritt, Scott; Beck, Jeff

    2016-01-01

    We present performance data for novel photon counting detectors for free space optical communication. NASA GSFC is testing the performance of three novel photon counting detectors: 1) a 2x8 mercury cadmium telluride avalanche array made by DRS Inc., 2) a commercial 2880 silicon avalanche photodiode array and 3) a prototype resonant cavity silicon avalanche photodiode array. We will present and compare dark count, photon detection efficiency, wavelength response and communication performance data for these detectors. We discuss system wavelength trades and architectures for optimizing overall communication link sensitivity, data rate and cost performance. For the HgCdTe APD arrays, photon detection efficiencies (PDE) of greater than 50% were routinely demonstrated across 5 arrays, with one array reaching a maximum PDE of 70%. High resolution pixel-surface spot scans were performed and the junction diameters of the diodes were measured. The junction diameter was decreased from 31 μm to 25 μm, resulting in a 2x increase in e-APD gain from 470 on the 2010 array to 1100 on the array delivered to NASA GSFC. Mean single photon SNRs of over 12 were demonstrated at excess noise factors of 1.2-1.3. The commercial silicon APD array has a fast output with rise times of 300 ps and pulse widths of 600 ps. Received and filtered signals from the entire array are multiplexed onto this single fast output. The prototype resonant cavity silicon APD array is being developed for use at 1 micron wavelength.

  3. Standardization of {sup 241}Am by digital coincidence counting, liquid scintillation counting and defined solid angle counting

    Energy Technology Data Exchange (ETDEWEB)

    Balpardo, C., E-mail: balpardo@cae.cnea.gov.a [Laboratorio de Metrologia de Radioisotopos, CNEA, Buenos Aires (Argentina); Capoulat, M.E.; Rodrigues, D.; Arenillas, P. [Laboratorio de Metrologia de Radioisotopos, CNEA, Buenos Aires (Argentina)

    2010-07-15

    The nuclide 241Am decays by alpha emission to 237Np. Most of the decays (84.6%) populate the excited level of 237Np with energy of 59.54 keV. Digital coincidence counting was applied to standardize a solution of 241Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined solid angle counting taking into account activity inhomogeneity in the active deposit. The results show consistency between the three methods within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods.

  4. Fast imaging by photon counting application to long-baseline optical stellar interferometry

    International Nuclear Information System (INIS)

    Morel, Sebastien

    1998-01-01

    Image acquisition by photon counting in the visible spectrum, with high precision in the dating of photo-events, is especially useful for ground-based observations. In the first part of this thesis, after a review of several techniques for photon acquisition and processing, I introduce a new type of photon counting camera, notable for its high temporal resolution and its high maximum counting rate: the DELTA (Detector Enhancement by Linear-projections on Three Axes) camera. I describe the concept of this camera and the engineering solutions (optics, electronics, computing) that could be used for its construction. The second part of my work concerns fringe detection and tracking in ground-based, long-baseline optical stellar interferometry. After a statistical approach to the issue, I describe methods introducing a priori information into the data in order to obtain better detection efficiency. One of the proposed methods, using a priori information on the atmospheric piston, requires precise photo-event dating and therefore uses the methods described in the first part. (author) [fr]

  5. Application of Joint Error Maximal Mutual Compensation to hexapod robots

    DEFF Research Database (Denmark)

    Veryha, Yauheni; Petersen, Henrik Gordon

    2008-01-01

    A good practice to ensure high-positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC for positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation ...

  6. The impact of the cycle counting in the inventory accuracy: multiple cases in industries of Paraná

    Directory of Open Access Journals (Sweden)

    Everton Drohomeretski

    2013-05-01

    Full Text Available This article aims to identify the impact of cycle counting on inventory accuracy. Multiple case studies were used as the research method; these comprise seven case studies of companies in Paraná. A research protocol was used as the basis for collecting the data, and the data were analysed using content analysis with triangulation of the collected data. As its main results, the study demonstrates the relationship between cycle counting, the level of importance attributed to it by the organization, the number of items and the rate of accuracy obtained. A high level of control of inventory processes, together with the use of cycle counting, made it possible to eliminate the main cause of failure in inventory accuracy – errors in recording the movement of material – and thereby improve the operational efficiency of the companies.

  7. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. This method has been proven in experiments to provide much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
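
    The underlying time-arrival-difference calculation can be sketched with a plain cross-correlation (without the maximum likelihood weighting window proposed in the paper): the lag of the correlation peak gives the arrival-time difference and hence the leak position between the two sensors. All signal parameters below are synthetic.

```python
import numpy as np

def leak_location(sig_a, sig_b, fs, wave_speed, sensor_spacing):
    """Estimate the leak position (distance from sensor A) from the lag of
    the cross-correlation peak of the two leak-noise records."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (len(b) - 1)     # samples; = arrival(A) - arrival(B)
    delay = lag / fs                          # seconds
    return 0.5 * (sensor_spacing + wave_speed * delay)

# synthetic example: leak 100 m from sensor A on a 300 m pipe, c = 1000 m/s
fs, c, span = 2_000, 1000.0, 300.0
rng = np.random.default_rng(5)
noise = rng.standard_normal(fs)               # 1 s of broadband leak noise
sig_a = np.roll(noise, int(100.0 / c * fs))   # arrives after x / c
sig_b = np.roll(noise, int(200.0 / c * fs))   # arrives after (L - x) / c
print(f"estimated leak position: {leak_location(sig_a, sig_b, fs, c, span):.1f} m")
```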

  8. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method ... is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model ...

  9. A simulation study of high-resolution x-ray computed tomography imaging using irregular sampling with a photon-counting detector

    International Nuclear Information System (INIS)

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2013-01-01

    The purpose of this study was to improve the spatial resolution for the x-ray computed tomography (CT) imaging with a photon-counting detector using an irregular sampling method. The geometric shift-model of detector was proposed to produce the irregular sampling pattern and increase the number of samplings in the radial direction. The conventional micro-x-ray CT system and the novel system with the geometric shift-model of detector were simulated using analytic and Monte Carlo simulations. The projections were reconstructed using filtered back-projection (FBP), algebraic reconstruction technique (ART), and total variation (TV) minimization algorithms, and the reconstructed images were compared in terms of normalized root-mean-square error (NRMSE), full-width at half-maximum (FWHM), and coefficient-of-variation (COV). The results showed that the image quality improved in the novel system with the geometric shift-model of detector, and the NRMSE, FWHM, and COV were lower for the images reconstructed using the TV minimization technique in the novel system with the geometric shift-model of detector. The irregular sampling method produced by the geometric shift-model of detector can improve the spatial resolution and reduce artifacts and noise for reconstructed images obtained from an x-ray CT system with a photon-counting detector. -- Highlights: • We proposed a novel sampling method based on a spiral pattern to improve the spatial resolution. • The novel sampling method increased the number of samplings in the radial direction. • The spatial resolution was improved by the novel sampling method

  10. Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application

    Directory of Open Access Journals (Sweden)

    Riza Muhida

    2013-07-01

    Full Text Available The photovoltaic traffic light system is a significant application of a renewable energy source. The development of the system is an alternative effort by local authorities to reduce the expenditure on fees paid to the power supplier, whose power comes from a conventional energy source. Since photovoltaic (PV) modules still have relatively low conversion efficiency, an alternative control known as the maximum power point tracking (MPPT) method is applied to the traffic light system. MPPT is intended to capture the maximum power during the daytime in order to charge the battery at the maximum rate, so that the power stored in the battery can be used at night or on cloudy days. MPPT is implemented with a DC-DC converter that can step the voltage up or down in order to achieve the maximum power, using Pulse Width Modulation (PWM) control. From the experiment, the operating voltage obtained using MPPT was 16.454 V, an error of 2.6% compared with the maximum power point voltage of the PV module, which is 16.9 V. Based on this result it can be said that the MPPT control works successfully to deliver close to the maximum power from the PV module to the battery.
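
    A common way to realise MPPT in software is a perturb-and-observe loop that nudges the converter duty cycle and keeps whichever direction increases the measured PV power. The sketch below is a generic illustration with a toy power curve; it is not the PWM controller described in the paper.

```python
def mppt_po(pv_power, duty=0.5, step=0.01, iters=100, d_min=0.1, d_max=0.9):
    """Generic perturb-and-observe MPPT: perturb the converter duty cycle and
    keep the direction that increases the measured PV power.  `pv_power(duty)`
    is a hypothetical measurement callback."""
    p_prev = pv_power(duty)
    direction = 1.0
    for _ in range(iters):
        duty = min(max(duty + direction * step, d_min), d_max)
        p = pv_power(duty)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return duty

# toy power model peaking at duty = 0.63 (a stand-in for the true MPP)
peak = 0.63
print(f"converged duty cycle: {mppt_po(lambda d: 100 - 400 * (d - peak) ** 2):.2f}")
```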

  11. Clean Hands Count

    Medline Plus

  12. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing ... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated ... symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.

  14. Effect of recirculation and regional counting rate on reliability of noninvasive bicompartmental CBF measurements

    International Nuclear Information System (INIS)

    Herholz, K.

    1985-01-01

    Based on data from routine intravenous 133Xe rCBF studies in 50 patients using Obrist's algorithm, the effect of counting rate statistics and the amount of recirculating activity on the reproducibility of results was investigated at five simulated counting rate levels. The dependence of the standard deviation of compartmental and noncompartmental flow parameters on recirculation and counting rate was determined by multiple linear regression analysis. These regression equations permit determination of the optimum accuracy that may be expected from individual flow measurements. Mainly due to a delay of the start-of-fit time, an exponential increase in the standard deviation of flow measurements was observed as recirculation increased. At constant start-of-fit, however, a linear increase in the standard deviation of compartmental flow parameters only was found, while noncompartmental results remained constant. Therefore, and with regard to other studies of potential sources of error, an upper limit of 2.5 min for the start-of-fit time and the use of noncompartmental flow parameters for measurements affected by high recirculation are suggested

  15. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  16. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(dn-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  17. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
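
    The root-sum-of-squares budget described above combines each 1-sigma error source with its sensitivity weight and scales the result by a coverage factor (k = 1 for ~67 %, k = 2 for ~95 % confidence). The sources and weights in the sketch are hypothetical, not the paper's tabulated factors.

```python
import math

def rss_error_budget(tolerances, weights, k=2):
    """Root-sum-of-squares pointing-error budget for a two-axis gimbal.

    `tolerances` are 1-sigma error sources (e.g. from the drawings) and
    `weights` the sensitivity of the output angle to each source; both lists
    are illustrative placeholders."""
    total_1sigma = math.sqrt(sum((w * t) ** 2 for w, t in zip(weights, tolerances)))
    return k * total_1sigma        # k = 1 -> ~67 %, k = 2 -> ~95 % confidence

# usage: four hypothetical sources (mrad) with their sensitivity factors
print(f"{rss_error_budget([0.10, 0.05, 0.20, 0.08], [1.0, 0.7, 0.5, 1.0]):.3f} mrad")
```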

  18. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. TH-CD-207B-07: Noise Modeling of Single Photon Avalanche Diode (SPAD) for Photon Counting CT Applications

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Z; Zheng, X; Deen, J; Peng, H [McMaster University, Hamilton, ON (Canada); Xing, L [Stanford University School of Medicine, Stanford, CA (United States)

    2016-06-15

    Purpose: The silicon photomultiplier (SiPM) has recently emerged as a promising photodetector for biomedical imaging applications. Due to its high multiplication gain (comparable to a PMT), fast timing, low cost and compactness, it is considered a good candidate for photon counting CT. Dark noise is a limiting factor which impacts both energy resolution and detection dynamic range. Our goal is to develop a comprehensive model for the noise sources of SiPM sensors. Methods: The physical parameters used in this work were based upon a test SPAD fabricated in a 130 nm CMOS process. The SPAD uses an n+/p-well junction, which is isolated from the p-substrate by a deep n-well junction. Inter-avalanche time measurement was used to record the time interval between two adjacent avalanche pulses. After collecting 1×10^6 counts, the histogram was obtained and a multiple exponential fitting process was used to extract the lifetimes associated with the traps within the bandgap. Results: At room temperature, the breakdown voltage of the SPAD is ∼11.4 V and shows a temperature coefficient of 7.7 mV/°C. The dark noise of the SPAD increases with both the excess biasing voltage and temperature. The primary dark counts from the model were validated against the measurement results. A maximum relative error of 8.7% is observed at 20 °C with an excess voltage of 0.5 V. The probabilities of after-pulsing are found to be dependent on both temperature and excess voltage. With 0.5 V excess voltage, the after-pulsing probability is 63.5% at -30 °C and drops to ∼6.6% at 40 °C. Conclusion: A comprehensive noise model for the SPAD sensor was proposed. The model takes into account the static, dynamic and statistical behavior of SPADs. We believe that this is the first SPAD circuit simulation model that includes the band-to-band tunneling dark noise contribution and the temporal dependence of the after-pulsing probability.
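
    The multiple-exponential fit of the inter-avalanche time histogram can be sketched with a standard least-squares fit; the decay constants of the components estimate the trap lifetimes. The simulated times and initial guesses below are placeholders, not the measured SPAD data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2):
    """Sum of two exponential components fitted to the histogram."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# simulated inter-avalanche times: a fast after-pulsing-like component
# (tau ~ 0.4 us) on top of primary dark counts (tau ~ 5 us)
rng = np.random.default_rng(6)
times = np.concatenate([rng.exponential(0.4e-6, 20_000),
                        rng.exponential(5.0e-6, 80_000)])
counts, edges = np.histogram(times, bins=400, range=(0.0, 20e-6))
centres = 0.5 * (edges[:-1] + edges[1:])

p0 = (counts[0], 0.2e-6, counts[0] / 10, 3.0e-6)   # rough initial guess
popt, _ = curve_fit(two_exp, centres, counts, p0=p0, maxfev=20_000)
print(f"fitted lifetimes: {popt[1]*1e6:.2f} us and {popt[3]*1e6:.2f} us")
```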

  20. Noun Countability; Count Nouns and Non-count Nouns, What are the Syntactic Differences Between them?

    Directory of Open Access Journals (Sweden)

    Azhar A. Alkazwini

    2016-11-01

    Full Text Available Words that function as the subjects of verbs, or as objects of verbs or prepositions, and which can have a plural form and a possessive ending, are known as nouns. They are described as referring to persons, places, things, states, or qualities and might also be used as attributive modifiers. In this paper, classes and subclasses of nouns shall be presented; then noun countability, branching into count and non-count nouns, shall be discussed. A number of examples illustrating differences between count and non-count nouns shall be presented, including determiner-head co-occurrence restrictions of number and subject-verb agreement, in addition to some exceptions to this agreement rule. Also, the lexically inherent number in nouns and how inherently plural nouns are classified in terms of (+/- count) are illustrated. This research will discuss the partitive construction of count and non-count nouns and nouns as attributive modifiers, and finally conclude that there are syntactic differences between count and non-count nouns in the English language.

  1. Probing the Cosmological Principle in the counts of radio galaxies at different frequencies

    Science.gov (United States)

    Bengaly, Carlos A. P.; Maartens, Roy; Santos, Mario G.

    2018-04-01

    According to the Cosmological Principle, the matter distribution on very large scales should have a kinematic dipole that is aligned with that of the CMB. We determine the dipole anisotropy in the number counts of two all-sky surveys of radio galaxies. For the first time, this analysis is presented for the TGSS survey, allowing us to check consistency of the radio dipole at low and high frequencies by comparing the results with the well-known NVSS survey. We match the flux thresholds of the catalogues, with flux limits chosen to minimise systematics, and adopt a strict masking scheme. We find dipole directions that are in good agreement with each other and with the CMB dipole. In order to compare the amplitude of the dipoles with theoretical predictions, we produce sets of lognormal realisations. Our realisations include the theoretical kinematic dipole, galaxy clustering, Poisson noise, simulated redshift distributions which fit the NVSS and TGSS source counts, and errors in flux calibration. The measured dipole for NVSS is ~2 times larger than predicted by the mock data. For TGSS, the dipole is almost ~ 5 times larger than predicted, even after checking for completeness and taking account of errors in source fluxes and in flux calibration. Further work is required to understand the nature of the systematics that are the likely cause of the anomalously large TGSS dipole amplitude.

  2. A novel simultaneous streak and framing camera without principle errors

    Science.gov (United States)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access, whose complete information is important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena, has been developed. The camera, with a maximum imaging frequency of 2 × 10^6 fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136% to −0.277% for streak records. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously acquires frames and a streak that are parallax-free and share an identical time base, is characterized by a plane optical system at oblique incidence (different from a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.

  3. New counting circuits with E1T tubes; Nouveaux circuits de comptage a tubes E1T; Novye schetnye kontury s lampami E1T; Nuevos circuitos de contaje con valvulas E1T

    Energy Technology Data Exchange (ETDEWEB)

    Radeka, V [Institut Rudjer Boskovic, Zagreb, Yugoslavia (Croatia)

    1962-04-15

    New solutions for beam-deflection circuits are given, which result in simple and reliable counting circuits. The requirements on the accuracy of the beam deflection are derived from the theoretical investigation of the counting process published previously by the author. The decrease in deflection error limits resulting from the difference between real and idealized tube characteristics is calculated. It is shown that, up to about 3 × 10^5 pulses per second, the deflection error is maximum and independent of the counting speed, enabling simpler circuits to be designed. More accurate deflection circuits are needed for counting up to 10^6 pulses per second. Two circuits for use up to these counting speed limits are presented. The requirements put on circuit components regarding stability and accuracy are low. Only an unstabilized DC supply voltage is needed. (author)

  4. Clean Hands Count

    Medline Plus


  5. Photon-Counting Arrays for Time-Resolved Imaging

    Directory of Open Access Journals (Sweden)

    I. Michel Antolovic

    2016-06-01

    Full Text Available The paper presents a camera comprising 512 × 128 pixels capable of single-photon detection and gating with a maximum frame rate of 156 kfps. The photon capture is performed through a gated single-photon avalanche diode that generates a digital pulse upon photon detection and through a digital one-bit counter. Gray levels are obtained through multiple counting and accumulation, while time-resolved imaging is achieved through a 4-ns gating window controlled with subnanosecond accuracy by a field-programmable gate array. The sensor, which is equipped with microlenses to enhance its effective fill factor, was electro-optically characterized in terms of sensitivity and uniformity. Several examples of capture of fast events are shown to demonstrate the suitability of the approach.
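
    A minimal sketch of the gray-level accumulation described above (frame size, frame count and detection probability are illustrative assumptions, not the camera's specifications):

        import numpy as np

        rng = np.random.default_rng(1)
        n_frames, rows, cols = 1_000, 128, 512
        p_detect = 0.05                       # per-pixel, per-frame detection probability (assumed)

        gray = np.zeros((rows, cols), dtype=np.uint32)
        for _ in range(n_frames):
            one_bit_frame = rng.random((rows, cols)) < p_detect   # gated binary frame
            gray += one_bit_frame.astype(np.uint32)               # accumulate into gray levels

        # gray now holds values in 0..n_frames; dividing by n_frames estimates
        # the per-frame detection probability for each pixel.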

  6. Use of Hi-resolution data for evaluating accuracy of traffic volume counts collected by microwave sensors

    Directory of Open Access Journals (Sweden)

    David K. Chang

    2017-10-01

    Full Text Available Over the past few years, the Utah Department of Transportation has developed the signal performance metrics (SPMs) system to evaluate the performance of signalized intersections dynamically. This system currently provides data summaries for several performance measures, one of them being turning movement counts collected by microwave sensors. As this system became public, there was a need to evaluate the accuracy of the data placed on the SPMs. A large-scale data collection was carried out to meet this need. Vehicles in the Hi-resolution data from microwave sensors were matched with vehicles in the ground-truth volume count data. Matching vehicles from the microwave sensor data with the manually collected ground-truth data required significant effort, and a spreadsheet-based data analysis procedure was developed to carry out the task. A mixed-model analysis of variance was used to analyze the effects of the factors considered on turning volume count accuracy. The analysis found that approach volume level and number of approach lanes had a significant effect on the accuracy of turning volume counts, but the location of the sensors did not significantly affect the accuracy of turning volume counts. In addition, it was found that the location of lanes in relation to the sensor did not significantly affect the accuracy of lane-by-lane volume counts. This indicated that accuracy analysis could be performed by using total approach volumes without comparing specific turning counts, that is, left-turn, through and right-turn movements. In general, the accuracy of approach volume counts collected by microwave sensors was within the margin of error that traffic engineers could accept. The procedure taken to perform the analysis and a summary of the accuracy of volume counts for the factor combinations considered are presented in this paper.

  7. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions, and an accurate determination of the probability distribution for maximum wind speed data is very important when estimating extreme values. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions of Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull are used for determining their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by statistical test criteria in frequency analysis; therefore, the best plotting position formula which can be used to select the appropriate probability model representing the MAWS data must be determined. The statistical test criteria, namely the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data obtained show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a duration of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE.
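
    A minimal sketch of the selection procedure described above, using synthetic annual maxima, the Weibull plotting position and one candidate distribution (Gumbel); all numbers are placeholders:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        maws = np.sort(rng.normal(68.0, 12.0, 39))      # 39 annual maxima (km/h), synthetic

        n = maws.size
        p = np.arange(1, n + 1) / (n + 1)               # Weibull plotting position p_i = i/(n+1)

        loc, scale = stats.gumbel_r.fit(maws)           # fit one candidate distribution
        predicted = stats.gumbel_r.ppf(p, loc=loc, scale=scale)

        ppcc = np.corrcoef(maws, predicted)[0, 1]              # probability plot correlation coefficient
        rmse = np.sqrt(np.mean((maws - predicted) ** 2))       # root mean square error
        mae = np.max(np.abs(maws - predicted))                 # maximum absolute error
        print(f"PPCC={ppcc:.4f}  RMSE={rmse:.2f}  MAE={mae:.2f}")

    Repeating this for each distribution and plotting-position pair and ranking by PPCC, RMSE, RRMSE and MAE reproduces the type of comparison reported in the abstract.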

  8. The determination of airborne concentrations of radon and thoron progeny by repetitive alpha counting of filter samples

    International Nuclear Information System (INIS)

    French, Clayton S. Jr.; Skrable, Kenneth W.; Chabot, George E.

    1978-01-01

    Analytical equations have been used to determine the airborne concentrations of the particulate daughters of radon and thoron from five net alpha counts obtained at preset time intervals post sampling. The same expressions were used to propagate the associated standard deviations. These propagated errors were minimized by the selection of optimum sampling and counting intervals. An extensive error analysis examined sources of interference and their influence on the calculated concentrations. This system offers sufficient precision for research applications, yet is simple and inexpensive enough for application in field studies. The radon and thoron daughters measured with this technique are 218Po, 214Pb, 214Bi, 212Pb, and 212Bi. Because of the decay kinetics involved, the calculated concentrations of 218Po and 212Bi involve the greatest uncertainty. The proper choice of sampling and counting intervals optimizes the system for any one of the above radionuclides or for all five collectively. A sampling time of 15 minutes is best for the simultaneous estimation of all five concentrations. Millipore filter samples were obtained from a large, unventilated sub-basement of the University of Lowell research reactor facility and were counted later in direct contact with the window of a gas flow proportional detector having alpha particle counting efficiencies near 0.4 c alpha^-1 and an alpha background of about 1 c min^-1. A typical 15 minute sample obtained at a flow rate of 2 x 10^4 cm^3 min^-1 yielded the following estimates of the airborne concentrations and relative standard deviations: 218Po, 4.75 x 10^-9 μCi cm^-3 ± 18.9%; 214Pb, 5.15 x 10^-9 μCi cm^-3 ± 2.5%; 214Bi, 4.86 x 10^-9 μCi cm^-3 ± 2.4%; 212Pb, 1.41 x 10^-10 μCi cm^-3 ± 2.0%; and 212Bi, 2.15 x 10^-10 μCi cm^-3 ± 27.0%. (author)

  9. Count-rate analysis from clinical scans in PET with LSO detectors

    International Nuclear Information System (INIS)

    Bonutti, F.; Cattaruzzi, E.; Cragnolini, E.; Floreani, M.; Foti, C.; Malisan, M. R.; Moretti, E.; Geatti, O.; Padovani, R.

    2008-01-01

    The purpose of optimising the acquisition parameters in positron emission tomography is to improve the quality of the diagnostic images. Optimisation can be done by maximising the noise equivalent count rate (NECR), which in turn depends on the coincidence rate. For each bed position the scanner records coincidence and singles rates. For each patient, the true, random and scattered coincidence rates as functions of the singles rate s are determined by fitting the NEMA (National Electrical Manufacturers Association) 70 cm phantom count rate curves to the measured clinical points. This enables analytical calculation of the personalised PNECR(s) [pseudo-NECR] curve, linked to the NECR curve. For central bed positions, a missing activity of ~70% is estimated to reach the maximum PNECR (PNECR_max), but the improvement in terms of signal-to-noise ratio would be only ~15%. The correlation between patient weight and PNECR_max is also estimated to determine the optimal scan duration of a single bed position as a function of patient weight at the same PNEC. Normalising the counts at PNECR_max to the 70 kg patient, the bed duration for a 90 kg patient should be 230 s, which is ~30% longer. Although the analysis indicates that the fast scanner electronics would allow higher administered activities, this would yield little improvement in terms of NECR. Instead, using a longer bed duration for heavier patients may be more useful. (authors)
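
    For orientation, a commonly used NEMA-style definition of the noise equivalent count rate (the PNECR(s) of the abstract is a related quantity parameterised by the singles rate) is

        NECR = T^2 / (T + S + k R)

    where T, S and R are the true, scattered and random coincidence rates and k is 1 or 2 depending on how the randoms are estimated; maximising this quantity with respect to administered activity or bed duration is what the optimisation above aims at.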

  10. Quantitative annular dark field scanning transmission electron microscopy for nanoparticle atom-counting: What are the limits?

    International Nuclear Information System (INIS)

    De Backer, A; De Wael, A; Gonnissen, J; Martinez, G T; Béché, A; Van Aert, S; MacArthur, K E; Jones, L; Nellist, P D

    2015-01-01

    Quantitative atomic resolution annular dark field scanning transmission electron microscopy (ADF STEM) has become a powerful technique for nanoparticle atom-counting. However, many nanoparticles pose a severe characterisation challenge because of their limited size and beam sensitivity. Therefore, quantitative ADF STEM may greatly benefit from statistical detection theory in order to optimise the instrumental microscope settings such that the incoming electron dose can be kept as low as possible whilst still retaining single-atom precision. The principles of detection theory are used to quantify the probability of error for atom-counting. This enables us to decide between different image performance measures and to optimise the experimental detector settings for atom-counting in ADF STEM in an objective manner. To demonstrate this, ADF STEM imaging of an industrial catalyst has been conducted using the near-optimal detector settings. For this experiment, we discuss the limits for atom-counting diagnosed by combining a thorough statistical method and detailed image simulations. (paper)

  11. Count-to-count time interval distribution analysis in a fast reactor

    International Nuclear Information System (INIS)

    Perez-Navarro Gomez, A.

    1973-01-01

    The most important kinetic parameters have been measured at the zero power fast reactor CORAL-I by means of the reactor noise analysis in the time domain, using measurements of the count-to-count time intervals. (Author) 69 refs

  12. On the errors in measurements of Ohio 5 radio sources in the light of the GB survey

    International Nuclear Information System (INIS)

    Machalski, J.

    1975-01-01

    Positions and flux densities of 405 OSU 5 radio sources surveyed at 1415 MHz down to 0.18 f.u. (Brundage et al. 1971) have been examined in the light of data from the GB survey made at 1400 MHz (Maslowski 1972). An identification analysis has shown that about 56% of the OSU sources reveal themselves as single, 18% as confused, 20% as unresolved, and 6%, having no counterparts in the GB survey down to 0.09 f.u., seem to be spurious. The single OSU sources are strongly affected by the underestimation of their flux densities due to the base-line procedure in their vicinity. An average value of about 0.03 f.u. has been found for the systematic underestimation. The second systematic error is due to the presence of a significant number of confused sources with strong overestimation of their flux densities. The confusion effect gives a characteristic non-Gaussian tail in the distribution of differences between observed and real flux densities. The confusion effect has a strong influence on source counts from the OSU 5 survey. Differential number counts relative to those from the GB survey show that the counts agree within the statistical uncertainty up to about 0.40 f.u., which is approximately 4 delta (where delta is the average rms flux density error in the OSU 5 survey). Below 0.40 f.u. the number of sources missing due to the confusion effect is significantly greater than the number overestimation due to the noise error. Thus, this part of the OSU 5 source counts cannot be treated seriously, even in the statistical sense. An analysis of the approximate reliability and completeness of the OSU 5 survey shows that, although the total reliability estimated by the authors of the survey is good, the completeness is significantly lower due to the underestimation of the magnitude of the confusion effect. In fact, the OSU 5 completeness is 67% at 0.18 f.u. and 79% at 0.25 f.u. (author)

  13. Determining and monitoring of maximum permissible power for HWRR-3

    International Nuclear Information System (INIS)

    Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen

    1987-01-01

    The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determination and monitoring of the maximum permissible power for HWRR-3. The calculation method is described, and the results of the calculation and an analysis of the errors are also given. On-line calculation and real-time monitoring have been realized at the heavy water reactor, providing the reactor with real-time and reliable supervision. This makes operation convenient and increases reliability.

  14. Where can pixel counting area estimates meet user-defined accuracy requirements?

    Science.gov (United States)

    Waldner, François; Defourny, Pierre

    2017-08-01

    Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimates are below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
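
    The accounting behind the bias discussed above can be stated compactly (a standard formulation, not quoted from the article): if A is the true class area and C and O are the commission and omission error areas of the map, the pixel-counting estimate is

        A_hat = A + C - O,   so   relative bias = (A_hat - A) / A = (C - O) / A

    The estimator is therefore unbiased only when commission and omission errors compensate; asymmetric classification errors leave a residual bias that no amount of counting can remove.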

  15. Stability and reproducibility of gel-suspension samples for the liquid scintillation counting of 14C using N-lauroyl-L-glutamic-α,γ-dibutylamide

    International Nuclear Information System (INIS)

    Wakabayashi, G.; Ohura, H.; Okai, T.; Matoba, M.

    1999-01-01

    The stability and reproducibility of the gel-suspension method for 14C activity measurement were investigated. A commercially available gelling agent, N-lauroyl-L-glutamic-α,γ-dibutylamide, was used for the gel formation of the samples. No change in the counting rate for the gel-suspension sample was observed for more than 2 years after sample preparation. Four samples were used for checking the reproducibility of the sample preparation method, and the same values were obtained for the counting rate of 14C activity within the counting error. No change in the counting rate was observed for the 're-gelated' sample. These results show that the gel-suspension method is appropriate for 14C activity measurement by the liquid scintillation method and is useful for long-term preservation of the sample for repeated measurement. (author)

  16. MOSS-5: A Fast Method of Approximating Counts of 5-Node Graphlets in Large Graphs

    KAUST Repository

    Wang, Pinghui

    2017-09-26

    Counting 3-, 4-, and 5-node graphlets in graphs is important for graph mining applications such as discovering abnormal/evolution patterns in social and biology networks. In addition, it is recently widely used for computing similarities between graphs and for graph classification applications such as protein function prediction and malware detection. However, it is challenging to compute these metrics for a large graph or a large set of graphs due to the combinatorial nature of the problem. Despite recent efforts in counting triangles (a 3-node graphlet) and 4-node graphlets, little attention has been paid to characterizing 5-node graphlets. In this paper, we develop a computationally efficient sampling method to estimate 5-node graphlet counts. We not only provide fast sampling methods and unbiased estimators of graphlet counts, but also derive simple yet exact formulas for the variances of the estimators, which is of great value in practice: the variances can be used to bound the estimates' errors and determine the smallest necessary sampling budget for a desired accuracy. We conduct experiments on a variety of real-world datasets, and the results show that our method is several orders of magnitude faster than the state-of-the-art methods with the same accuracy.
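
    To make the estimate-plus-variance idea concrete, the following is a deliberately simplified unbiased estimator for the triangle (3-node graphlet) count based on uniform sampling of node triples; it is not the MOSS-5 algorithm, and the graph and sample budget are placeholders:

        import random
        import networkx as nx

        def estimate_triangles(G, samples=20_000, seed=0):
            rng = random.Random(seed)
            nodes = list(G.nodes())
            n = len(nodes)
            total_triples = n * (n - 1) * (n - 2) // 6
            hits = 0
            for _ in range(samples):
                u, v, w = rng.sample(nodes, 3)
                if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w):
                    hits += 1
            p_hat = hits / samples
            estimate = p_hat * total_triples
            # Binomial variance of p_hat gives a simple bound on the estimate's error
            std_err = total_triples * (p_hat * (1 - p_hat) / samples) ** 0.5
            return estimate, std_err

        G = nx.erdos_renyi_graph(300, 0.05, seed=1)
        exact = sum(nx.triangles(G).values()) // 3
        est, err = estimate_triangles(G)
        print(f"estimate {est:.0f} +/- {err:.0f}  (exact {exact})")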

  17. Cross-Cultural and Intra-Cultural Differences in Finger-Counting Habits and Number Magnitude Processing: Embodied Numerosity in Canadian and Chinese University Students

    Directory of Open Access Journals (Sweden)

    Kyle Richard Morrissey

    2016-04-01

    Full Text Available Recent work in numerical cognition has shown that number magnitude is not entirely abstract, and is at least partly rooted in embodied and situated experiences, including finger-counting. The current study extends previous cross-cultural research to address within-culture individual differences in finger-counting habits. Results indicated that Canadian participants demonstrated an additional cognitive load when comparing numbers that require more than one hand to represent, and this pattern of performance is further modulated by whether they typically start counting on their left hand or their right hand. Chinese students typically count on only one hand and so show no such effect, except for an increase in errors, similar to that seen in Canadians, for those who self-identify as predominantly two-hand counters. Results suggest that the impact of finger-counting habits extends beyond cultural experience and concords in predictable ways with differences in number magnitude processing for specific number digits. We conclude that symbolic number magnitude processing is partially rooted in learned finger-counting habits, consistent with a motor simulation account of embodied numeracy, and that argument is supported by both cross-cultural and within-culture differences in finger-counting habits.

  18. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, different from the traditional MPPT algorithms which are based more on trial and error, or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
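
    As a concrete illustration of load matching (a textbook sketch under ideal-converter assumptions, not necessarily the paper's exact derivation): an ideal boost converter in continuous conduction presents an input resistance R_in = R_load(1 - D)^2 to the panel, so the duty ratio that matches R_in to the module's maximum-power-point resistance V_mpp/I_mpp can be computed directly.

        import math

        def boost_duty_for_mpp(v_mpp, i_mpp, r_load):
            """Duty ratio matching an ideal boost converter's input resistance
            to the PV module's maximum-power-point resistance."""
            r_mpp = v_mpp / i_mpp
            if r_mpp > r_load:
                raise ValueError("a boost stage can only lower the resistance seen at its input")
            return 1.0 - math.sqrt(r_mpp / r_load)

        # Hypothetical module (Vmpp = 17.5 V, Impp = 5.7 A) feeding a 48-ohm load
        print(f"optimal duty ratio D = {boost_duty_for_mpp(17.5, 5.7, 48.0):.3f}")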

  19. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest to the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
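
    A minimal sketch of the zone-wise fitting step, assuming synthetic relative errors for one zone; the skew-normal parameters and sample size are placeholders:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        # Synthetic zone-2 relative errors drawn from a skew-normal (assumed parameters)
        relative_error = stats.skewnorm.rvs(a=3.0, loc=-0.02, scale=0.06,
                                            size=2_000, random_state=rng)

        a, loc, scale = stats.skewnorm.fit(relative_error)      # maximum-likelihood fit
        ks = stats.kstest(relative_error, "skewnorm", args=(a, loc, scale))
        print(f"shape={a:.2f}  loc={loc:.3f}  scale={scale:.3f}  KS p-value={ks.pvalue:.3f}")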

  20. Maximum Power Point Tracking Based on Sliding Mode Control

    Directory of Open Access Journals (Sweden)

    Nimrod Vázquez

    2015-01-01

    Full Text Available Solar panels, which have become a good choice, are used to generate and supply electricity in commercial and residential applications. The generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power. For this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering just the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature in the PV system are considered to be part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The obtained results gave a good dynamic response, unlike traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to assure a low steady-state error.

  1. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  2. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations.

    Directory of Open Access Journals (Sweden)

    Kazutaka Takeshita

    Full Text Available Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk for observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at a high deer density remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km^2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts for all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to a double count derived from increased deer movement and recovery of body condition secondary to the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are cost-prohibitive to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer the deer abundance needs to be reconsidered.

  3. Evaluation of inter-fraction error during prostate radiotherapy

    International Nuclear Information System (INIS)

    Komiyama, Takafumi; Nakamura, Koji; Motoyama, Tsuyoshi; Onishi, Hiroshi; Sano, Naoki

    2008-01-01

    The purpose of this study was to evaluate the inter-fraction error (inter-fraction set-up error + inter-fraction internal organ motion) between treatment planning and delivery during radiotherapy for localized prostate cancer. Twenty-three prostate cancer patients underwent image-guided radical irradiation with the CT-linac system. All patients were treated in the supine position. After set-up with external skin markers using the CT-linac system, pretherapy CT images were obtained and the isocenter displacement was measured. The mean displacement of the isocenter was 1.8 mm, 3.3 mm, and 1.7 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. The maximum displacement of the isocenter was 7 mm, 12 mm, and 9 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. The mean interquartile range of the displacement of the isocenter was 1.8 mm, 3.7 mm, and 2.0 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. In radiotherapy for localized prostate cancer, the inter-fraction error was largest in the ventral-dorsal direction. Errors in the ventral-dorsal direction influence both local control and late adverse effects. Our study suggested that set-up with external skin markers alone is not sufficient for radical radiotherapy of localized prostate cancer, and that a system such as a CT-linac is required for correction of the inter-fraction error. (author)

  4. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from the ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing the leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which, in turn, leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging the moving MLC apertures with a digital imager or by analysing a MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and the fluence errors? In this work, we introduce an analytical method to determine this relation for dynamic IMRT delivery. We model MLC errors as random leaf-positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx0 (Δx0 ~ TOL). We quantify fluence errors for two cases: (i) Δx0 >> σ (unrestricted normal distribution) and (ii) Δx0 << σ (Δx0-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx0/ALPO, respectively, where

  5. Isospectral discrete and quantum graphs with the same flip counts and nodal counts

    Science.gov (United States)

    Juul, Jonas S.; Joyner, Christopher H.

    2018-06-01

    The existence of non-isomorphic graphs which share the same Laplace spectrum (to be referred to as isospectral graphs) leads naturally to the following question: what additional information is required in order to resolve isospectral graphs? It was suggested by Band, Shapira and Smilansky that this might be achieved by either counting the number of nodal domains or the number of times the eigenfunctions change sign (the so-called flip count) (Band et al 2006 J. Phys. A: Math. Gen. 39 13999–4014; Band and Smilansky 2007 Eur. Phys. J. Spec. Top. 145 171–9). Recent examples of (discrete) isospectral graphs with the same flip count and nodal count have been constructed by Ammann by utilising Godsil–McKay switching (Ammann private communication). Here, we provide a simple alternative mechanism that produces systematic examples of both discrete and quantum isospectral graphs with the same flip and nodal counts.

  6. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
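
    A minimal sketch of the Poisson-MLE fitting idea applied to a single-exponential-decay histogram; plain minimisation of the Poisson objective is used here instead of the paper's Levenberg-Marquardt extension, and the data are synthetic:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        t = np.arange(0.0, 25.0, 0.5)                    # histogram bin centres (ns, assumed)
        data = rng.poisson(80.0 * np.exp(-t / 4.0) + 2.0)

        def model(params):
            amp, tau, bkg = params
            return amp * np.exp(-t / tau) + bkg

        def poisson_nll(params):
            mu = model(params)
            if np.any(mu <= 0):
                return np.inf
            # Negative Poisson log-likelihood, up to a data-dependent constant
            return np.sum(mu - data * np.log(mu))

        fit = minimize(poisson_nll, x0=[50.0, 3.0, 1.0], method="Nelder-Mead")
        print("amplitude, lifetime, background:", np.round(fit.x, 2))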

  7. Simple analytical technique for liquid scintillation counting of environmental carbon-14 using gel suspension method

    International Nuclear Information System (INIS)

    Okai, Tomio; Wakabayashi, Genichiro; Nagao, Kenjiro; Matoba, Masaru; Ohura, Hirotaka; Momoshima, Noriyuki; Kawamura, Hidehisa

    2000-01-01

    A simple analytical technique for liquid scintillation counting of environmental 14C was developed. A commercially available gelling agent, N-lauroyl-L-glutamic-α,γ-dibutylamide, was used for the gel formation of the samples (gel-suspension method) and for the subsequent liquid scintillation counting of 14C in the form of CaCO3. Our procedure for sample preparation is much simpler than the conventional methods and requires no special equipment. Self-absorption, stability and reproducibility of gel-suspension samples were investigated in order to evaluate the characteristics of the gel-suspension method for 14C activity measurement. The self-absorption factor is about 70% and decreases slightly as the CaCO3 weight increases; this is considered to be mainly due to the absorption of β-rays and scintillation light by the CaCO3 sample itself. No change in the counting rate for the gel-suspension sample was observed for more than 2 years after sample preparation. Four samples were used for checking the reproducibility of the sample preparation method, and the same values were obtained for the counting rate of 14C activity within the counting error. No change in the counting rate was observed for the 're-gelated' sample. These results show that the gel-suspension method is appropriate for 14C activity measurement by the liquid scintillation counting method and useful for long-term preservation of the sample for repeated measurement. The above analytical technique was applied to actual environmental samples in Fukuoka prefecture, Japan. The results obtained were comparable with those of other researchers and appear to be reasonable. Therefore, the newly developed technique is useful for routine monitoring of environmental 14C. (author)

  8. Counting It Twice.

    Science.gov (United States)

    Schattschneider, Doris

    1991-01-01

    Provided are examples from many domains of mathematics that illustrate the Fubini Principle in its discrete version: the value of a summation over a rectangular array is independent of the order of summation. Included are: counting using partitions as in proof by pictures, combinatorial arguments, indirect counting as in the inclusion-exclusion…
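
    A standard illustration of the principle (not taken from the article) is the handshake lemma: counting vertex-edge incidences of a graph G = (V, E) once over vertices and once over edges gives

        sum over v in V of deg(v)  =  sum over e in E of 2  =  2|E|

    so the sum of the degrees is always even, and hence the number of odd-degree vertices must be even.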

  9. Low White Blood Cell Count

    Science.gov (United States)

    A low white blood cell count (leukopenia) is a decrease ... of white blood cell (neutrophil). The definition of low white blood cell count varies from one medical ...

  10. Maximum entropy method approach to the θ term

    International Nuclear Information System (INIS)

    Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi

    2004-01-01

    In Monte Carlo simulations of lattice field theory with a θ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution P(Q). This procedure, however, causes a flattening phenomenon of the free energy f(θ), which makes study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of P(Q), which serves as a good example to test whether the MEM can be applied effectively to the θ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother f(θ) than that of the Fourier transform. Among various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error. (author)

  11. Long term observation on absolute lymphocyte counts in the adult health study sample, Hiroshima and Nagasaki

    International Nuclear Information System (INIS)

    Oesterle, S.N.; Norman, J.E. Jr.

    1980-01-01

    Total peripheral blood lymphocytes were evaluated by age and exposure status in the Adult Health Study population during three examination cycles between 1958 and 1972. No radiation effect was observed, but a significant drop in the absolute lymphocyte counts of those aged 70 years and over and a corresponding maximum for persons aged 50 - 59 was observed. (author)

  12. On the use of liquid scintillation counting of 51Cr and 14C in the twin tracer method of measuring assimilation efficiency

    International Nuclear Information System (INIS)

    Cammen, L.M.

    1977-01-01

    Calow and Fletcher (1972) calculated assimilation efficiency from the ratio of an assimilated radiotracer (14C) to a non-assimilated tracer (51Cr) in food and feces. Wightman (1975) improved the efficiency of their technique by using liquid scintillation to count both isotopes simultaneously, but stated incorrectly that it was not necessary to convert counts per minute (CPM) to disintegrations per minute (DPM). Unless the CPM data are corrected for quenching and converted to DPM prior to the calculation of assimilation efficiency, a significant error may be introduced. (orig.)
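
    For orientation, a common statement of the twin-tracer ratio method (given here as a paraphrase, not a quotation of Calow and Fletcher's exact expression) is

        AE = 1 - [ (14C/51Cr)_feces / (14C/51Cr)_food ],   with   DPM = CPM / (counting efficiency)

    Because quenching, and hence counting efficiency, generally differs between food and faecal samples and between the two isotopes, the ratios must be formed from DPM; if raw CPM are used the efficiency factors do not cancel and the computed assimilation efficiency is biased, which is the error discussed above.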

  13. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n^2), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ^3), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder observing a significant reduction of the logical error probability for χ ≥ 4.
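
    The key approximation mentioned above, truncating a bond to dimension χ, can be illustrated in isolation (a generic SVD-truncation sketch, not the authors' decoder):

        import numpy as np

        def truncate_bond(theta, chi):
            """Split a two-site block `theta` and keep at most `chi` singular values."""
            u, s, vh = np.linalg.svd(theta, full_matrices=False)
            keep = min(chi, int(np.count_nonzero(s > 1e-14)))
            discarded_weight = float(np.sum(s[keep:] ** 2))    # measure of the truncation error
            return u[:, :keep], s[:keep], vh[:keep, :], discarded_weight

        theta = np.random.default_rng(5).normal(size=(64, 64))
        u, s, vh, err = truncate_bond(theta, chi=8)
        print("kept bond dimension:", s.size, " discarded weight:", err)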

  14. Correction of count losses due to deadtime on a DST-XLi (SMVi-GE) camera during dosimetric studies in patients injected with iodine-131

    International Nuclear Information System (INIS)

    Delpon, G.; Ferrer, L.; Lisbona, A.; Bardies, M.

    2002-01-01

    In dosimetric studies performed after therapeutic injection, it is essential to correct count losses due to deadtime on the gamma camera. This note describes four deadtime correction methods, one based on the use of a standard source without preliminary calibration, and three requiring specific calibration and based on the count rate observed in different spectrometric windows (20%, 20% plus a lower energy window and the full spectrum of 50-750 keV). Experiments were conducted on a phantom at increasingly higher count rates to check correction accuracy with the different methods. The error was less than +7% with a standard source, whereas count-rate-based methods gave more accurate results. On the assumption that the model was paralysable, preliminary calibration allowed an observed count rate curve to be plotted as a function of the real count rate. The use of the full spectrum led to a 3.0% underestimation for the highest activity imaged. As count losses depend on photon flux independent of energy, the use of the full spectrum during measurement allowed scatter conditions to be taken into account. A protocol was developed to apply this correction method to whole-body acquisitions. (author)
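
    A minimal sketch of the paralysable-model correction, assuming a placeholder dead time τ: the observed rate m relates to the true rate n through m = n·exp(-nτ), which can be inverted numerically.

        import numpy as np
        from scipy.optimize import brentq

        def true_rate(observed, tau):
            """Solve m = n * exp(-n * tau) for n on the low-rate branch (n < 1/tau)."""
            return brentq(lambda n: n * np.exp(-n * tau) - observed, 0.0, 1.0 / tau)

        tau = 1.0e-6                          # assumed dead time (s), not the camera's value
        for m in (1.0e4, 1.0e5, 2.0e5):       # observed count rates (counts/s)
            n = true_rate(m, tau)
            print(f"observed {m:.1e} cps -> corrected {n:.3e} cps "
                  f"({100.0 * (n - m) / m:.1f}% loss recovered)")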

  15. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  16. Hanford whole body counting manual

    International Nuclear Information System (INIS)

    Palmer, H.E.; Brim, C.P.; Rieksts, G.A.; Rhoads, M.C.

    1987-05-01

    This document, a reprint of the Whole Body Counting Manual, was compiled to train personnel, document operation procedures, and outline quality assurance procedures. The current manual contains information on: the location, availability, and scope of services of Hanford's whole body counting facilities; the administrative aspect of the whole body counting operation; Hanford's whole body counting facilities; the step-by-step procedure involved in the different types of in vivo measurements; the detectors, preamplifiers and amplifiers, and spectroscopy equipment; the quality assurance aspect of equipment calibration and recordkeeping; data processing, record storage, results verification, report preparation, count summaries, and unit cost accounting; and the topics of minimum detectable amount and measurement accuracy and precision. 12 refs., 13 tabs

  17. Negative binomial mixed models for analyzing microbiome count data.

    Science.gov (United States)

    Zhang, Xinyan; Mallick, Himel; Tang, Zaixiang; Zhang, Lei; Cui, Xiangqin; Benson, Andrew K; Yi, Nengjun

    2017-01-03

    Recent advances in next-generation sequencing (NGS) technology enable researchers to collect a large volume of metagenomic sequencing data. These data provide valuable resources for investigating interactions between the microbiome and host environmental/clinical factors. In addition to the well-known properties of microbiome count measurements, for example, varied total sequence reads across samples, over-dispersion and zero-inflation, microbiome studies usually collect samples with hierarchical structures, which introduce correlation among the samples and thus further complicate the analysis and interpretation of microbiome count data. In this article, we propose negative binomial mixed models (NBMMs) for detecting associations between the microbiome and host environmental/clinical factors for correlated microbiome count data. Although they do not deal with zero-inflation, the proposed mixed-effects models account for correlation among the samples by incorporating random effects into the commonly used fixed-effects negative binomial model, and can efficiently handle over-dispersion and varying total reads. We have developed a flexible and efficient IWLS (Iterative Weighted Least Squares) algorithm to fit the proposed NBMMs by taking advantage of the standard procedure for fitting linear mixed models. We evaluate and demonstrate the proposed method via extensive simulation studies and an application to mouse gut microbiome data. The results show that the proposed method has desirable properties and outperforms the previously used methods in terms of both empirical power and Type I error. The method has been incorporated into the freely available R package BhGLM ( http://www.ssg.uab.edu/bhglm/ and http://github.com/abbyyan3/BhGLM ), providing a useful tool for analyzing microbiome data.
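
    A simplified fixed-effects illustration of the modelling idea (no random effects or zero-inflation, so not the NBMM/IWLS implementation in BhGLM): a negative binomial regression with varying total reads handled through an offset, on synthetic data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 200
        treatment = rng.integers(0, 2, n).astype(float)        # host factor of interest
        total_reads = rng.integers(5_000, 50_000, n)           # varying sequencing depth
        mu = np.exp(np.log(total_reads) - 6.0 + 0.8 * treatment)
        r = 2.0                                                # dispersion parameter (assumed)
        counts = rng.negative_binomial(r, r / (r + mu))        # over-dispersed taxon counts

        X = sm.add_constant(treatment)
        nb_glm = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=1.0 / r),
                        offset=np.log(total_reads))
        print(nb_glm.fit().summary())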

  18. 15Mcps photon-counting X-ray computed tomography system using a ZnO-MPPC detector and its application to gadolinium imaging.

    Science.gov (United States)

    Sato, Eiichi; Sugimura, Shigeaki; Endo, Haruyuki; Oda, Yasuyuki; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Osawa, Akihiro; Matsukiyo, Hiroshi; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun

    2012-01-01

    The 15-Mcps photon-counting X-ray computed tomography (CT) system is of the first-generation type and consists of an X-ray generator, a turntable, a translation stage, a two-stage controller, a detector consisting of a 2-mm-thick zinc-oxide (ZnO) single-crystal scintillator and an MPPC (multipixel photon counter) module, a counter card (CC), and a personal computer (PC). High-speed photon counting was carried out using the detector in the X-ray CT system. The maximum count rate was 15 Mcps (mega counts per second) at a tube voltage of 100 kV and a tube current of 1.95 mA. Tomography is accomplished by repeated translations and rotations of an object, and projection curves of the object are obtained by the translation. The pulses of the event signal from the module are counted by the CC in conjunction with the PC. The minimum exposure time for obtaining a tomogram was 15 min, and photon-counting CT was accomplished using gadolinium-based contrast media. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Low-noise low-jitter 32-pixels CMOS single-photon avalanche diodes array for single-photon counting from 300 nm to 900 nm

    Energy Technology Data Exchange (ETDEWEB)

    Scarcella, Carmelo; Tosi, Alberto, E-mail: alberto.tosi@polimi.it; Villa, Federica; Tisa, Simone; Zappa, Franco [Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy)

    2013-12-15

    We developed a single-photon counting multichannel detection system, based on a monolithic linear array of 32 CMOS SPADs (Complementary Metal-Oxide-Semiconductor Single-Photon Avalanche Diodes). All channels achieve a timing resolution of 100 ps (full-width at half maximum) and a photon detection efficiency of 50% at 400 nm. Dark count rate is very low even at room temperature, being about 125 counts/s for 50 μm active area diameter SPADs. Detection performance and microelectronic compactness of this CMOS SPAD array make it the best candidate for ultra-compact time-resolved spectrometers with single-photon sensitivity from 300 nm to 900 nm.

  1. Detection of anomalies in radio tomography of asteroids: Source count and forward errors

    Science.gov (United States)

    Pursiainen, S.; Kaasalainen, M.

    2014-09-01

    The purpose of this study was to advance numerical methods for radio tomography, in which an asteroid's internal electric permittivity distribution is to be recovered from radio frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), providing one potential, or even essential, scenario to be implemented in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors, including noise and a priori uncertainty of the forward (data) simulation, were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. The results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, which is relevant for the recovery of internal cavities.

  2. Ac-dc converter firing error detection

    International Nuclear Information System (INIS)

    Gould, O.L.

    1996-01-01

    Each of the twelve Booster Main Magnet Power Supply modules consists of two three-phase, full-wave rectifier bridges in series to provide a 560 VDC maximum output. The harmonic contents of the twelve-pulse ac-dc converter output are multiples of the 60 Hz ac power input, with a predominant 720 Hz signal greater than 14 dB in magnitude above the closest harmonic components at maximum output. The 720 Hz harmonic is typically greater than 20 dB below the 500 VDC output signal under normal operation. Extracting specific harmonics from the rectifier output signal of a 6-, 12-, or 24-pulse ac-dc converter allows the detection of SCR firing angle errors or complete misfires. A bandpass filter provides the input signal to a frequency-to-voltage converter. Comparing the output of the frequency-to-voltage converter to a reference voltage level provides an indication of the magnitude of the harmonics in the ac-dc converter output signal.
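
    A software analogue of the detection idea, for illustration only (the actual circuit uses a bandpass filter feeding a frequency-to-voltage converter): band-pass the output around a harmonic that should be small in balanced twelve-pulse operation and flag a firing error when its amplitude exceeds a threshold. The sample rate, amplitudes and threshold are assumptions.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 50_000                                   # sample rate (Hz), assumed
        t = np.arange(0.0, 0.2, 1.0 / fs)

        def band_amplitude(x, f_lo, f_hi):
            sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
            y = sosfiltfilt(sos, x)
            return np.sqrt(2.0) * y.std()             # amplitude of a sinusoid from its RMS

        normal = 500.0 + 1.0 * np.sin(2 * np.pi * 720 * t)        # nominal 12-pulse ripple
        misfire = normal + 25.0 * np.sin(2 * np.pi * 360 * t)     # extra low-order harmonic (assumed)

        threshold = 10.0                              # volts, assumed alarm level
        for name, sig in (("normal", normal), ("misfire", misfire)):
            amp = band_amplitude(sig, 300.0, 420.0)
            print(f"{name}: 360 Hz band amplitude ~ {amp:.1f} V -> firing error: {amp > threshold}")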

  3. Prediction of in vivo background in phoswich lung count spectra

    International Nuclear Information System (INIS)

    Richards, N.W.

    1999-01-01

    Phoswich scintillation counters are used to detect actinides deposited in the lungs. The resulting spectra, however, contain Compton background from the decay of 40 K, which occurs naturally in the striated muscle tissue of the body. To determine the counts due to actinides in a lung count spectrum, the counts due to 40 K scatter must first be subtracted out. The 40 K background in the phoswich NaI(Tl) spectrum was predicted from an energy region of interest called the monitor region, which is above the 238 Pu and 241 Am regions, where photopeaks from 238 Pu and 241 Am occur. Empirical models were developed to predict the backgrounds in the 238 Pu and 241 Am regions by testing multiple linear and nonlinear regression models. The initial multiple regression models contain a monitor region variable as well as the variables gender, (weight/height)^α, and interaction terms. Data were collected from 64 male and 63 female subjects with no internal exposure. For the 238 Pu region, the only significant predictor was found to be the monitor region. For the 241 Am region, the monitor region was found to have the greatest effect on prediction, while gender was significant only when weight/height was included in a model. Gender-specific models were thus developed. The empirical models for the 241 Am region that contain weight/height were shown to have the best coefficients of determination (R²) and the lowest mean squares for error (MSE)
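
    As a rough illustration of this kind of empirical model, the sketch below fits an ordinary least-squares regression of the 241 Am-region background on the monitor-region counts and a (weight/height)^α term. The data, the value of α and the coefficients are all synthetic stand-ins, not the study's fitted model.

```python
# Sketch with synthetic data: predict 241Am-region background counts from the
# monitor-region counts and a (weight/height)**alpha body-build term.
import numpy as np

rng = np.random.default_rng(0)
n = 127                               # 64 male + 63 female subjects in the study
monitor = rng.uniform(200, 800, n)    # monitor-region counts (synthetic)
w_over_h = rng.uniform(0.3, 0.7, n)   # weight/height (synthetic, arbitrary units)
alpha = 1.5                           # assumed exponent; the study fits its own

# Synthetic "true" background: mostly driven by the monitor region.
am_region = 0.6 * monitor + 50 * w_over_h**alpha + rng.normal(0, 10, n)

# Ordinary least squares fit of the empirical model.
X = np.column_stack([np.ones(n), monitor, w_over_h**alpha])
coef, *_ = np.linalg.lstsq(X, am_region, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((am_region - pred) ** 2) / np.sum((am_region - am_region.mean()) ** 2)
mse = np.mean((am_region - pred) ** 2)
print(f"coefficients = {coef.round(3)}, R^2 = {r2:.3f}, MSE = {mse:.1f}")
```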

  4. CalCOFI Egg Counts

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...

  5. Approximation for maximum pressure calculation in containment of PWR reactors

    International Nuclear Information System (INIS)

    Souza, A.L. de

    1989-01-01

    A correlation was developed to estimate the maximum pressure in the dry containment of a PWR following a Loss-of-Coolant Accident - LOCA. The expression proposed is a function of the total energy released to the containment by the primary circuit, of the free volume of the containment building and of the total surface area of the heat-conducting structures. The results show good agreement with those presented in the Final Safety Analysis Reports (FSAR) of several PWR plants. The errors are on the order of ± 12%. (author) [pt

  6. Count-doubling time safety circuit

    International Nuclear Information System (INIS)

    Keefe, D.J.; McDowell, W.P.; Rusch, G.K.

    1981-01-01

    There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary

  7. Count-doubling time safety circuit

    Science.gov (United States)

    Rusch, Gordon K.; Keefe, Donald J.; McDowell, William P.

    1981-01-01

    There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary.
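
    The comparison logic described in these records can be sketched in software: count detector pulses over fixed periods and issue a scram signal when the count increases by more than a set factor from one period to the next. The trip factor, period counts and excursion profile below are illustrative assumptions; the patented device implements this in hardware.

```python
# Sketch of the count-factor-increase monitoring logic in software: compare the
# pulse count of each period with that of the previous period and raise a scram
# signal when the ratio exceeds a trip factor (values are illustrative only).
import numpy as np

rng = np.random.default_rng(1)
TRIP_FACTOR = 2.0                 # assumed: scram if counts double between periods

def monitor(counts_per_period):
    previous = None
    for k, counts in enumerate(counts_per_period):
        if previous and counts / previous >= TRIP_FACTOR:
            return k              # period index at which the scram is issued
        previous = counts
    return None

# Simulated periods: steady source counts, then an excursion.
steady = rng.poisson(1000, size=10)
excursion = rng.poisson([2500, 6000])
trip_at = monitor(np.concatenate([steady, excursion]))
print("scram issued at period:", trip_at)
```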

  8. Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems

    Science.gov (United States)

    Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka

    2018-06-01

    One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in the direct current to direct current converters and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller whose performance has been demonstrated in previous works, and more specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model since the obtained mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³ and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages such as a higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.
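
    The modeling step amounts to supervised regression: sample the controller's duty-cycle output over the operating range and fit a small neural network to reproduce it. The sketch below does this on a synthetic duty-cycle surface that stands in for the fuzzy controller; the network size, the stand-in target function and the sample ranges are assumptions, not the paper's setup.

```python
# Sketch: fit a small neural network to reproduce a controller's duty-cycle map
# from (panel voltage, panel current) samples. The target function below is a
# synthetic stand-in for the fuzzy controller output, not the one in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
v = rng.uniform(10, 40, 5000)          # panel voltage samples (V)
i = rng.uniform(0, 8, 5000)            # panel current samples (A)
X = np.column_stack([v, i])
duty = 0.5 + 0.3 * np.tanh((v * i - 120) / 60)   # stand-in duty-cycle surface

X_tr, X_te, y_tr, y_te = train_test_split(X, duty, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
ann.fit(X_tr, y_tr)

pred = ann.predict(X_te)
mse = np.mean((pred - y_te) ** 2)
max_err = np.max(np.abs(pred - y_te))
print(f"test MSE = {mse:.2e}, max error = {max_err:.2e}")
```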

  9. The Big Pumpkin Count.

    Science.gov (United States)

    Coplestone-Loomis, Lenny

    1981-01-01

    Pumpkin seeds are counted after students convert pumpkins to jack-o-lanterns. Among the activities involved, pupils learn to count by 10s, make estimates, and to construct a visual representation of 1,000. (MP)

  10. Binomial distribution of Poisson statistics and tracks overlapping probability to estimate total tracks count with low uncertainty

    International Nuclear Information System (INIS)

    Khayat, Omid; Afarideh, Hossein; Mohammadnia, Meisam

    2015-01-01

    In the solid state nuclear track detectors of chemically etched type, nuclear tracks with a center-to-center distance shorter than two times the track radius will emerge as overlapping tracks. Track overlapping in this type of detector causes track count losses, and it becomes rather severe at high track densities. Therefore, track counting under this condition should include a correction factor for count losses of different track overlapping orders, since a number of overlapping tracks may be counted as one track. Another aspect of the problem is the cases where imaging the whole area of the detector and counting all tracks are not possible. In these conditions a statistical generalization method is desired that is applicable to counting a segmented area of the detector, so that the results can be generalized to the whole surface of the detector. There is also a challenge in counting densely overlapped tracks because sufficient geometrical or contextual information is not available. In this paper we present a statistical counting method which gives the user a relation between the track overlapping probabilities on a segmented area of the detector surface and the total number of tracks. To apply the proposed method one can estimate the total number of tracks on a solid state detector of arbitrary shape and dimensions by approximating the average track area, the whole detector surface area and some orders of track overlapping probabilities. It will be shown that this method is applicable to high and ultra-high density track images and that the count loss error can be mitigated using a statistical generalization approach. - Highlights: • A correction factor for count losses of different track overlapping orders. • For the cases where imaging the whole area of the detector is not possible. • Presenting a statistical generalization method for segmented areas. • Giving a relation between the track overlapping probabilities and the total track count
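
    The count loss that such a correction must undo is easy to reproduce numerically: place circular tracks at random and count merged clusters instead of individual tracks. The Monte Carlo sketch below does exactly that; the track radius, detector size and densities are illustrative, and it demonstrates only the loss mechanism, not the paper's statistical correction.

```python
# Monte Carlo sketch: place circular tracks at random and count merged clusters,
# illustrating the count loss caused by overlapping at high track density.
import numpy as np

def apparent_count(n_tracks, radius=1.0, side=100.0, seed=0):
    """Number of connected clusters of tracks (overlap if centres < 2*radius apart)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, side, size=(n_tracks, 2))
    # Union-find over the overlap graph.
    parent = list(range(n_tracks))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    for a, b in zip(*np.nonzero(d2 < (2 * radius) ** 2)):
        if a < b:
            parent[find(a)] = find(b)
    return len({find(k) for k in range(n_tracks)})

for n in (100, 500, 1000, 2000):
    print(f"true tracks = {n:5d}, apparent clusters = {apparent_count(n)}")
```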

  11. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  12. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  13. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz 1 for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto 2, which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
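
    The singleton-exclusion logic can be sketched for a Watterson-type estimator. Under the infinite sites model, sites where the mutant allele appears in i of n sequences have expectation θ/i, so the classical estimator divides the number of segregating sites by a_n = Σ 1/i; excluding sites where a single sequence differs from all others removes an expected θ·n/(n-1), giving the adjusted denominator used below. This is an illustrative implementation of that reasoning with a toy alignment, not the authors' code or the maximum-likelihood model.

```python
# Sketch of a singleton-excluding Watterson-type estimator of theta. Under the
# infinite-sites model the expected number of sites with mutant count i in a
# sample of n sequences is theta/i, so excluding sites where a single sequence
# differs from the rest (unfolded counts 1 and n-1) leaves an expectation of
# theta * (a_n - n/(n-1)), with a_n = sum_{i=1}^{n-1} 1/i.
import numpy as np

def theta_estimates(alignment):
    """alignment: 2-D array of characters, rows = sequences, columns = sites."""
    seqs = np.asarray(alignment)
    n = seqs.shape[0]
    a_n = sum(1.0 / i for i in range(1, n))

    s_all = 0        # all segregating sites
    s_shared = 0     # segregating sites that are not singletons
    for col in seqs.T:
        bases, counts = np.unique(col, return_counts=True)
        if len(bases) < 2:
            continue
        s_all += 1
        if counts.min() > 1:     # singleton: one sequence carries a unique base
            s_shared += 1

    theta_w = s_all / a_n                          # classical Watterson estimator
    theta_ns = s_shared / (a_n - n / (n - 1))      # singleton-excluding version
    return theta_w, theta_ns

toy = [list("ACGTACGT"), list("ACGTACGA"), list("ACCTACGT"), list("ACCTACGT")]
print(theta_estimates(toy))
```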

  14. Galaxy number counts: Pt. 2

    International Nuclear Information System (INIS)

    Metcalfe, N.; Shanks, T.; Fong, R.; Jones, L.R.

    1991-01-01

    Using the Prime Focus CCD Camera at the Isaac Newton Telescope we have determined the form of the B and R galaxy number-magnitude count relations in 12 independent fields for 21 m ccd m and 19 m ccd m 5. The average galaxy count relations lie in the middle of the wide range previously encompassed by photographic data. The field-to-field variation of the counts is small enough to define the faint (B m 5) galaxy count to ±10 per cent and this variation is consistent with that expected from galaxy clustering considerations. Our new data confirm that the B, and also the R, galaxy counts show evidence for strong galaxy luminosity evolution, and that the majority of the evolving galaxies are of moderately blue colour. (author)

  15. Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.

    Science.gov (United States)

    Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N

    2014-01-01

    Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.

  16. Human errors evaluation for muster in emergency situations applying human error probability index (HEPI, in the oil company warehouse in Hamadan City

    Directory of Open Access Journals (Sweden)

    2012-12-01

    Full Text Available Introduction: An emergency situation is one of the factors influencing human error. The aim of this research was to evaluate human error in an emergency situation of fire and explosion at the oil company warehouse in Hamadan city, applying the human error probability index (HEPI). Material and Method: First, the scenario of an emergency situation of fire and explosion at the oil company warehouse was designed, and a muster maneuver against it was then performed. The scaled muster questionnaire for the maneuver was completed in the next stage. Collected data were analyzed to calculate the probability of success for the 18 actions required in an emergency situation, from the starting point of the muster until the final action of reaching the temporary safe shelter. Result: The results showed that the highest probability of error occurrence was related to making the workplace safe (evaluation phase, 32.4%) and the lowest probability of error occurrence was in detecting the alarm (awareness phase, 1.8%). The highest severity of error was in the evaluation phase and the lowest severity of error was in the awareness and recovery phases. The maximum risk level was related to evaluating the exit routes, selecting one route and choosing another exit route, and the minimum risk level was related to the four evaluation phases. Conclusion: To reduce the risk of reaction in the exit phases of an emergency situation, the following actions are recommended, based on the findings of this study: periodic evaluation of the exit phase and modification if necessary, and conducting more maneuvers and analyzing the results along with providing sufficient feedback to the employees.

  17. Determination of fission products and actinides by inductively coupled plasma-mass spectrometry using isotope dilution analysis. A study of random and systematic errors

    International Nuclear Information System (INIS)

    Ignacio Garcia Alonso, Jose

    1995-01-01

    The theory of the propagation of errors (random and systematic) for isotope dilution analysis (IDA) has been applied to the analysis of fission products and actinide elements by inductively coupled plasma-mass spectrometry (ICP-MS). Systematic errors in ID-ICP-MS arising from mass-discrimination (mass bias), detector non-linearity and isobaric interferences in the measured isotopes have to be corrected for in order to achieve accurate results. The mass bias factor and the detector dead-time can be determined by using natural elements with well-defined isotope abundances. A combined method for the simultaneous determination of both factors is proposed. On the other hand, isobaric interferences for some fission products and actinides cannot be eliminated using mathematical corrections (due to the unknown isotope abundances in the sample) and a chemical separation is necessary. The theory for random error propagation in IDA has been applied to the determination of non-natural elements by ICP-MS taking into account all possible sources of uncertainty with pulse counting detection. For the analysis of fission products, the selection of the right spike isotope composition and spike to sample ratio can be performed by applying conventional random propagation theory. However, it has been observed that, in the experimental determination of the isotope abundances of the fission product elements to be determined, the correction for mass-discrimination and the correction for detector dead-time losses contribute to the total random uncertainty. For the instrument used in the experimental part of this study, it was found that the random uncertainty on the measured isotope ratios followed Poisson statistics for low counting rates whereas, for high counting rates, source instability was the main source of error

  18. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
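
    The estimation-and-prediction step can be illustrated with an off-the-shelf Gaussian process regressor: smooth noisy per-window error-rate estimates and extrapolate them forward with an uncertainty band. The sketch below uses a synthetic drifting error rate and an assumed kernel; the actual protocol infers the rates from error-correction data rather than from direct binomial sampling.

```python
# Sketch: Gaussian-process regression of a slowly drifting error rate from noisy
# per-window estimates, as a stand-in for estimating rates from error-correction
# data. Kernel choice and the synthetic drift are assumptions of this example.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
t = np.linspace(0, 100, 60)[:, None]             # time (arbitrary units)
true_rate = 1e-3 * (1 + 0.5 * np.sin(t.ravel() / 20))
shots = 20_000                                   # samples per estimation window
observed = rng.binomial(shots, true_rate) / shots

kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=1e-8)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

t_future = np.linspace(100, 120, 10)[:, None]
mean, std = gp.predict(t_future, return_std=True)
for tf, m, s in zip(t_future.ravel(), mean, std):
    print(f"t = {tf:5.1f}: predicted error rate = {m:.2e} +/- {s:.1e}")
```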

  19. H I, galaxy counts, and reddening: Variation in the gas-to-dust ratio, the extinction at high galactic latitudes, and a new method for determining galactic reddening

    International Nuclear Information System (INIS)

    Burstein, D.; Heiles, C.

    1978-01-01

    We reanalyze the interrelationships among Shane-Wirtanen galaxy counts, H I column densities, and reddenings, and resolve many of the problems raised by Heiles. These problems were caused by two factors: subtle biases in the reddening data and a variable gas-to-dust ratio in the galaxy. We present a compilation of reddenings for RR Lyrae stars and globular clusters which are on the same system and which we believe to be relatively free of biases. The extinction at the galactic poles, as determined by galaxy counts, is reexamined by using a new method to analyze galaxy counts. This new method partially accounts for the nonrandom clustering of galaxies and permits a reasonable estimate of the error in log N_gal as a function of latitude. The analysis shows that the galaxy counts (or galaxy cluster counts) are too noisy to allow direct determination of the extinction, or variation in extinction, near the galactic poles. From all available data, we conclude that the reddening at the poles is small [≤ 0.02 mag in E(B-V) over much of the north galactic pole] and irregularly distributed. We find that there are zero offsets in the relations between E(B-V) and H I, and between galaxy counts and H I, which are at least partly the result of an instrumental effect in the radio data. We also show that the gas-to-dust ratio can vary by a factor of 2 from the average, and we present two methods for correcting for this variability in predicting the reddening of objects which are located outside of the galactic absorbing layer. We present a prescription for predicting these reddenings; in the area of sky covered by the Shane-Wirtanen galaxy counts, the error in these predictions is, on average, less than 0.03 mag in E(B-V)

  20. Photon counting arrays for AO wavefront sensors

    CERN Document Server

    Vallerga, J; McPhate, J; Mikulec, Bettina; Clark, Allan G; Siegmund, O; CERN. Geneva

    2005-01-01

    Future wavefront sensors for AO on large telescopes will require a large number of pixels and must operate at high frame rates. Unfortunately for CCDs, there is a readout noise penalty for operating faster, and this noise can add up rather quickly when considering the number of pixels required for the extended shape of a sodium laser guide star observed with a large telescope. Imaging photon counting detectors have zero readout noise and many pixels, but have suffered in the past with low QE at the longer wavelengths (>500 nm). Recent developments in GaAs photocathode technology, CMOS ASIC readouts and FPGA processing electronics have resulted in noiseless WFS detector designs that are competitive with silicon array detectors, though at ~40% the QE of CCDs. We review noiseless array detectors and compare their centroiding performance with CCDs using the best available characteristics of each. We show that for sub-aperture binning of 6×6 and greater, noiseless detectors have a smaller centroid error at flu...

  1. Counting statistics in radioactivity measurements

    International Nuclear Information System (INIS)

    Martin, J.

    1975-01-01

    The application of statistical methods to radioactivity measurement problems is analyzed in several chapters devoted successively to: the statistical nature of radioactivity counts; the application to radioactive counting of two theoretical probability distributions, Poisson's distribution law and the Laplace-Gauss law; true counting laws; corrections related to the nature of the apparatus; statistical techniques in gamma spectrometry [fr
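
    The core Poisson relations described here are simple to state numerically: the standard deviation of a raw count N is √N, and a background-subtracted net count inherits the gross and background uncertainties in quadrature. The sketch below illustrates this with arbitrary example counts.

```python
# Minimal illustration of Poisson counting statistics: the uncertainty of a raw
# count N is sqrt(N), and a background-subtracted net count combines the two
# uncertainties in quadrature. The counts below are arbitrary examples.
import math

gross, background = 10_000, 2_500            # counts in equal counting times

sigma_gross = math.sqrt(gross)
sigma_bkg = math.sqrt(background)
net = gross - background
sigma_net = math.sqrt(gross + background)    # quadrature sum for a difference

print(f"gross = {gross} +/- {sigma_gross:.0f}  (rel. {sigma_gross/gross:.2%})")
print(f"net   = {net} +/- {sigma_net:.0f}  (rel. {sigma_net/net:.2%})")
```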

  2. A scintillation detector signal processing technique with active pileup prevention for extending scintillation count rates

    International Nuclear Information System (INIS)

    Wong, W.H.; Li, H.

    1998-01-01

    A new method for processing signals from scintillation detectors is proposed for very high count-rate situations where multiple-event pileups are the norm. This method is designed to sort out and recover every impinging event from multiple-event pileups while maximizing the collection of scintillation signal for every event to achieve optimal accuracy in determining the energy of the event. For every detected event, this method cancels the remnant signals from previous events, and excludes the pileup of signals from following events. With this technique, pileup events can be recovered and the energy of every recovered event can be optimally measured despite multiple pileups. A prototype circuit demonstrated that the maximum count rates have been increased by more than 10 times, compared to the standard pulse-shaping method, while the energy resolution is as good as that of the pulse shaping (or the fixed integration) method at normal count rates. At 2 × 10⁶ events/sec for NaI(Tl), the true counts acquired are 3 times more than with the delay-line clipping method (commonly used in fast processing designs) due to events recovered from pileups. Pulse-height spectra up to 3.5 × 10⁶ events/sec have been studied

  3. Correlation between total lymphocyte count, hemoglobin, hematocrit and CD4 count in HIV patients in Nigeria.

    Science.gov (United States)

    Emuchay, Charles Iheanyichi; Okeniyi, Shemaiah Olufemi; Okeniyi, Joshua Olusegun

    2014-04-01

    The expensive and technology-limited setting of CD4 count testing is a major setback to the initiation of HAART in a resource-limited country like Nigeria. Simple and inexpensive tools such as Hemoglobin (Hb) measurement and Total Lymphocyte Count (TLC) are recommended as substitute markers. In order to assess the correlations of these parameters with CD4 count, 100 "apparently healthy" male volunteers who tested HIV positive, aged ≥ 20 years but ≤ 40 years, were recruited, from whom Hb, Hct, TLC and CD4 count were obtained. The correlation coefficients, R, the Nash-Sutcliffe Coefficient of Efficiency (CoE) and the p-values of the ANOVA model of Hb, Hct and TLC with CD4 count were assessed. The assessments show that there is no significant relationship of any of these parameters with CD4 count and the correlation coefficients are very weak. This study shows that Hb, Hct and TLC cannot be a substitute for CD4 count as this might lead to certain individuals' deprivation of required treatment.

  4. It counts who counts: an experimental evaluation of the importance of observer effects on spotlight count estimates

    DEFF Research Database (Denmark)

    Sunde, Peter; Jessen, Lonnie

    2013-01-01

    observers with respect to their ability to detect and estimate distance to realistic animal silhouettes at different distances. Detection probabilities were higher for observers experienced in spotlighting mammals than for inexperienced observers, higher for observers with a hunting background compared with non-hunters and decreased as a function of age but were independent of sex or educational background. If observer-specific detection probabilities were applied to real counting routes, point count estimates from inexperienced observers without a hunting background would only be 43 % (95 % CI, 39...

  5. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.
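
    For the base (symmetric) exponential power family, maximum likelihood fitting is directly available through scipy's generalized normal distribution. The sketch below fits shape, location and scale to simulated data; the paper's location-scale skew-slash mixture and its Bayesian estimation are richer models that are not reproduced here.

```python
# Sketch: maximum-likelihood fit of the (symmetric) exponential power family via
# scipy's generalized normal distribution. The paper's location-scale skew-slash
# mixture is a richer model; this only illustrates the base ML fitting step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = stats.gennorm.rvs(1.5, loc=2.0, scale=3.0, size=2000, random_state=rng)

beta_hat, loc_hat, scale_hat = stats.gennorm.fit(data)
print(f"beta = {beta_hat:.2f}, loc = {loc_hat:.2f}, scale = {scale_hat:.2f}")
```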

  6. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  7. You can count on the motor cortex: Finger counting habits modulate motor cortex activation evoked by numbers

    Science.gov (United States)

    Tschentscher, Nadja; Hauk, Olaf; Fischer, Martin H.; Pulvermüller, Friedemann

    2012-01-01

    The embodied cognition framework suggests that neural systems for perception and action are engaged during higher cognitive processes. In an event-related fMRI study, we tested this claim for the abstract domain of numerical symbol processing: is the human cortical motor system part of the representation of numbers, and is organization of numerical knowledge influenced by individual finger counting habits? Developmental studies suggest a link between numerals and finger counting habits due to the acquisition of numerical skills through finger counting in childhood. In the present study, digits 1 to 9 and the corresponding number words were presented visually to adults with different finger counting habits, i.e. left- and right-starters who reported that they usually start counting small numbers with their left and right hand, respectively. Despite the absence of overt hand movements, the hemisphere contralateral to the hand used for counting small numbers was activated when small numbers were presented. The correspondence between finger counting habits and hemispheric motor activation is consistent with an intrinsic functional link between finger counting and number processing. PMID:22133748

  8. Analysis of errors in the measurement of unattached fractions of radon and thoron progeny in a Canadian uranium mine using wire screen methods

    International Nuclear Information System (INIS)

    Khan, A.; Phillips, C.R.

    1987-01-01

    The unattached fraction of radon/thoron progeny in uranium mines is generally small and therefore difficult to measure accurately. The simple wire screen method provides a direct estimate of the unattached fraction from the screen count, or an indirect estimate from the difference between the reference and back-up filter counts. Wire screen method results are often difficult to analyse, especially when the unattached activity is small. Experimental data obtained in Canadian uranium mines are presented here, together with a detailed error analysis. The method consisting of counting the wire screen and the back-up filter is found to be the most precise method for unattached fraction determination. (author)

  9. Medication prescribing errors in a public teaching hospital in India: A prospective study.

    Directory of Open Access Journals (Sweden)

    Pote S

    2007-03-01

    Full Text Available Background: To prevent medication errors in prescribing, one needs to know their types and relative occurrence. Such errors are a great cause of concern as they have the potential to cause patient harm. The aim of this study was to determine the nature and types of medication prescribing errors in an Indian setting.Methods: The medication errors were analyzed in a prospective observational study conducted in 3 medical wards of a public teaching hospital in India. The medication errors were analyzed by means of Micromedex Drug-Reax database.Results: Out of 312 patients, only 304 were included in the study. Of the 304 cases, 103 (34% cases had at least one error. The total number of errors found was 157. The drug-drug interactions were the most frequently (68.2% occurring type of error, which was followed by incorrect dosing interval (12% and dosing errors (9.5%. The medication classes involved most were antimicrobial agents (29.4%, cardiovascular agents (15.4%, GI agents (8.6% and CNS agents (8.2%. The moderate errors contributed maximum (61.8% to the total errors when compared to the major (25.5% and minor (12.7% errors. The results showed that the number of errors increases with age and number of medicines prescribed.Conclusion: The results point to the establishment of medication error reporting at each hospital and to share the data with other hospitals. The role of clinical pharmacist in this situation appears to be a strong intervention; and the clinical pharmacist, initially, could confine to identification of the medication errors.

  10. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness
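
    One statistically motivated choice (not necessarily one of the three in the paper) is a discrepancy-type rule for Poisson data: scan the regularization parameter and keep the value whose fitted intensities give a Poisson deviance roughly equal to the number of data points. The sketch below applies this to a simple 1-D smoothing problem rather than image reconstruction; the penalty, parametrization and parameter grid are assumptions.

```python
# Sketch: choosing a regularization parameter for penalized Poisson likelihood
# by a discrepancy-type rule (deviance ~ number of data points). The model here
# is simple 1-D smoothing of a Poisson-noisy signal, not image reconstruction.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 100)
truth = 50 + 40 * np.exp(-((x - 0.5) ** 2) / 0.02)
counts = rng.poisson(truth)

def fit(alpha):
    def objective(u):
        lam = np.exp(u)                    # positivity via log-parametrization
        nll = np.sum(lam - counts * u)     # Poisson negative log-likelihood
        rough = np.sum(np.diff(u) ** 2)    # smoothness penalty
        return nll + alpha * rough
    res = minimize(objective, np.log(counts + 1.0), method="L-BFGS-B")
    return np.exp(res.x)

def poisson_deviance(lam):
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(counts > 0, counts * np.log(counts / lam), 0.0)
    return 2 * np.sum(term - (counts - lam))

for alpha in (0.1, 1.0, 10.0, 100.0):
    lam = fit(alpha)
    print(f"alpha = {alpha:6.1f}: deviance = {poisson_deviance(lam):7.1f} "
          f"(target ~ {counts.size})")
```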

  11. Total lymphocyte count and subpopulation lymphocyte counts in relation to dietary intake and nutritional status of peritoneal dialysis patients.

    Science.gov (United States)

    Grzegorzewska, Alicja E; Leander, Magdalena

    2005-01-01

    Dietary deficiency causes abnormalities in circulating lymphocyte counts. For the present paper, we evaluated correlations between total and subpopulation lymphocyte counts (TLC, SLCs) and parameters of nutrition in peritoneal dialysis (PD) patients. Studies were carried out in 55 patients treated with PD for 22.2 +/- 11.4 months. Parameters of nutritional status included total body mass, lean body mass (LBM), body mass index (BMI), and laboratory indices [total protein, albumin, iron, ferritin, and total iron binding capacity (TIBC)]. The SLCs were evaluated using flow cytometry. Positive correlations were seen between TLC and dietary intake of niacin; TLC and CD8 and CD16+56 counts and energy delivered from protein; CD4 count and beta-carotene and monounsaturated fatty acids 17:1 intake; and CD19 count and potassium, copper, vitamin A, and beta-carotene intake. Anorexia negatively influenced CD19 count. Serum albumin showed correlations with CD4 and CD19 counts, and LBM with CD19 count. A higher CD19 count was connected with a higher red blood cell count, hemoglobin, and hematocrit. Correlations were observed between TIBC and TLC and CD3 and CD8 counts, and between serum Fe and TLC and CD3 and CD4 counts. Patients with a higher CD19 count showed a better clinical-laboratory score, especially less weakness. Patients with a higher CD4 count had less expressed insomnia. Quantities of ingested vitamins and minerals influence lymphocyte counts in the peripheral blood of PD patients. Evaluation of TLC and SLCs is helpful in monitoring the effectiveness of nutrition in these patients.

  12. Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education

    Directory of Open Access Journals (Sweden)

    Francisco Moreira

    2009-11-01

    Full Text Available This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at the University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students’ autonomy. The assessment model is based on multiple evaluation components with different weights. Each component can be evaluated by several teachers involved in different Project Supporting Courses (PSC). This model can be affected by different types of errors, namely: (1) rounding errors, and (2) non-uniform criteria of rounding the grades. A rigorous analysis of the assessment model was made and the rounding errors involved in each project component were characterized and measured. This resulted in a global maximum error of 0.308 on the individual student project grade, on a 0 to 100 scale. This analysis intended to improve not only the reliability of the assessment results, but also teachers’ awareness of this problem. Recommendations are also made in order to improve the assessment model and reduce the rounding errors as much as possible.
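
    The worst-case propagation of per-component rounding errors into a weighted final grade is a simple bound: each component rounded to the nearest step contributes at most half a step, weighted by its share. The sketch below computes this bound for illustrative weights and rounding steps, which are not the ones of the IME assessment model.

```python
# Sketch: worst-case propagation of per-component rounding errors into a final
# weighted project grade (0-100 scale). Weights and rounding steps are
# illustrative, not those of the IME assessment model.
weights = {"report": 0.40, "prototype": 0.30, "presentation": 0.20, "peer": 0.10}
rounding_step = {"report": 1.0, "prototype": 1.0, "presentation": 0.5, "peer": 1.0}

# Rounding to the nearest step introduces at most step/2 error per component;
# in the worst case all component errors have the same sign.
max_error = sum(w * rounding_step[c] / 2 for c, w in weights.items())
print(f"maximum rounding error on the final grade: {max_error:.3f} points")
```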

  13. A Monte Carlo error simulation applied to calibration-free X-ray diffraction phase analysis

    International Nuclear Information System (INIS)

    Braun, G.E.

    1986-01-01

    Quantitative phase analysis of a system of n phases can be effected without the need for calibration standards provided at least n different mixtures of these phases are available. A series of linear equations relating diffracted X-ray intensities, weight fractions and quantitation factors coupled with mass balance relationships can be solved for the unknown weight fractions and factors. Uncertainties associated with the measured X-ray intensities, owing to counting of random X-ray quanta, are used to estimate the errors in the calculated parameters utilizing a Monte Carlo simulation. The Monte Carlo approach can be generalized and applied to any quantitative X-ray diffraction phase analysis method. Two examples utilizing mixtures of CaCO₃, Fe₂O₃ and CaF₂ with an α-SiO₂ (quartz) internal standard illustrate the quantitative method and corresponding error analysis. One example is well conditioned; the other is poorly conditioned and, therefore, very sensitive to errors in the measured intensities. (orig.)
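
    The Monte Carlo idea can be sketched generically: resample the measured intensities with Poisson counting noise, re-solve the linear system relating intensities to weight fractions each time, and take the spread of the solutions as the uncertainty. The sensitivity matrix, count level and fractions below are assumed stand-ins, not the paper's phase-analysis equations.

```python
# Generic sketch of the Monte Carlo error propagation: perturb measured X-ray
# intensities with Poisson counting noise, re-solve a linear system relating
# intensities to weight fractions, and report the spread of the solutions.
# The 3x3 system below is a stand-in, not the actual phase-analysis equations.
import numpy as np

rng = np.random.default_rng(6)
A = np.array([[1.0, 0.2, 0.1],       # assumed sensitivity matrix (quantitation factors)
              [0.3, 1.0, 0.2],
              [0.1, 0.3, 1.0]])
true_w = np.array([0.5, 0.3, 0.2])   # assumed true weight fractions
expected_counts = 1e5 * (A @ true_w) # expected peak intensities (counts)

solutions = []
for _ in range(2000):
    noisy = rng.poisson(expected_counts)        # counting statistics
    w = np.linalg.solve(A, noisy / 1e5)
    solutions.append(w / w.sum())               # renormalize (mass balance)
solutions = np.array(solutions)

print("mean weight fractions:", solutions.mean(axis=0).round(4))
print("std  weight fractions:", solutions.std(axis=0).round(4))
```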

  14. Comments on a derivation and application of the 'maximum entropy production' principle

    International Nuclear Information System (INIS)

    Grinstein, G; Linsker, R

    2007-01-01

    We show that (1) an error invalidates the derivation (Dewar 2005 J. Phys. A: Math. Gen. 38 L371) of the maximum entropy production (MaxEP) principle for systems far from equilibrium, for which the constitutive relations are nonlinear; and (2) the claim (Dewar 2003 J. Phys. A: Math. Gen. 36 631) that the phenomenon of 'self-organized criticality' is a consequence of MaxEP for slowly driven systems is unjustified. (comment)

  15. Novel high accurate sensorless dual-axis solar tracking system controlled by maximum power point tracking unit of photovoltaic systems

    International Nuclear Information System (INIS)

    Fathabadi, Hassan

    2016-01-01

    Highlights: • Novel highly accurate sensorless dual-axis solar tracker. • It has the advantages of both sensor based and sensorless solar trackers. • It does not have the disadvantages of sensor based and sensorless solar trackers. • Tracking error of only 0.11°, which is less than the tracking errors of others. • An increase of 28.8–43.6% depending on the seasons in the energy efficiency. - Abstract: In this study, a novel highly accurate sensorless dual-axis solar tracker controlled by the maximum power point tracking unit available in almost all photovoltaic systems is proposed. The maximum power point tracking controller continuously calculates the maximum output power of the photovoltaic module/panel/array, and uses the altitude and azimuth angle deviations to track the sun direction where the greatest value of the maximum output power is extracted. Unlike all other sensorless solar trackers, the proposed solar tracking system is a closed loop system, which means it uses the actual direction of the sun at any time to track the sun direction, and this is the contribution of this work. The proposed solar tracker has the advantages of both sensor based and sensorless dual-axis solar trackers, but it does not have their disadvantages. All other sensorless solar trackers are open loop, i.e., they use offline estimated data about the sun path in the sky obtained from solar map equations, so low accuracy, cloudy skies, and the need for new data at each new location are their problems. A photovoltaic system has been built, and it is experimentally verified that the proposed solar tracking system tracks the sun direction with a tracking error of 0.11°, which is less than the tracking errors of other sensor based and sensorless solar trackers. An increase of 28.8–43.6% depending on the seasons in the energy efficiency is the main advantage of utilizing the proposed solar tracking system.

  16. Front-end counting mode electronics for CdZnTe sensor readout

    CERN Document Server

    Moraes, Danielle; Kaplon, Jan

    2004-01-01

    The development of a front-end circuit optimized for CdZnTe detector readout, implemented in 0.25 μm CMOS technology, is reported. The ASIC comprises 17 channels of a charge sensitive amplifier with an active feedback, followed by a gain-shaper stage and a discriminator with a 5 bit fine-tune DAC. The signal from the discriminator is sensed by a 25 ns mono-stable circuit and an 18-bit static ripple counter. The channel architecture is optimized for the detector characteristics in order to achieve the best energy resolution at a maximum counting rate of 2 million counts/second. The amplifier shows a linear sensitivity of 24 mV/fC with 50 ns peaking time and an equivalent noise charge of about 650 e⁻, for a detector capacitance of 10 pF. When connected to a 3×3×7 mm³ CdZnTe detector the amplifier gain is about 8 mV/keV with a noise around 3.6 keV.

  17. Bayesian dynamic modeling of time series of dengue disease case counts.

    Science.gov (United States)

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology shows dynamic Poisson log link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and deviance information criterion statistic (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period using the mean absolute percentage error. The results showed the best model including first-order random walk time-varying coefficients for calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful

  18. Rapid determination of strontium-90 in environmental samples by single Cerenkov counting using two different colour quench curves

    Energy Technology Data Exchange (ETDEWEB)

    Torres, J.M.; Garcia, J.F.; Llaurado, M.; Rauret, G. [Barcelona Univ. (Spain). Dept. de Quimica Analitica

    1996-11-01

    The validation of the Cerenkov radiation measurement of ⁹⁰Y to determine the activity concentration of ⁹⁰Sr in environmental samples is described. Liquid-liquid extraction with di-2-ethylhexylphosphoric acid in toluene was used to separate ⁹⁰Y from ⁹⁰Sr. Optimum conditions for Cerenkov counting (low-level counting option, counting windows, mass of solution to be measured) were established. A counting efficiency correction using a colour quench curve is stated to be essential; otherwise a significant error may occur. Two different colour quench curves (counting efficiency versus the channel ratio or spectral index parameter) were used and the results were compared. The method was applied to 12 environmental matrices: sea-water, algae, carobs, milk, almonds, hake, honey, shellfish, lamb meat, sardine, pork meat and shore sand. No significant differences were observed on using either of the two colour quench curves for any of these environmental matrices. In order to validate the proposed method, a certified soil reference material (CRM IAEA-375) was used, together with participation in an interlaboratory exercise to determine ⁹⁰Sr in a natural water sample. Again, efficiency correction was performed by using either of the two colour quench curves and in both instances the calculated ⁹⁰Sr activity concentration was in good agreement with the known values. (Author).

  19. Investigations on human error hazards in recent unintended trip events of Korean nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Jang, Tong Il; Lee, Yong Hee; Shin, Kwang Hyeon [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    According to the Operational Performance Information System (OPIS), which is operated by KINS (Korea Institute of Nuclear Safety) to improve public understanding, unintended trip events caused mainly by human errors accounted for 38 cases (18.7%) from 2000 to 2011. Although the Nuclear Power Plant (NPP) industry in Korea has been making efforts to reduce the human errors which have largely contributed to trip events, the human error rate may keep increasing. Interestingly, digital based I and C systems are one of the factors reducing unintended reactor trips. Human errors, however, have occurred due to the digital based I and C systems because those systems require new or changed behaviors of the NPP operators. Therefore, it is necessary that investigations of human errors consider a new methodology to find not only tangible behaviors but also intangible behaviors such as organizational behaviors. In this study we investigated human errors to find latent factors, such as decisions and conditions, in all of the unintended reactor trip events during the last dozen years. To find them, we applied the HFACS (Human Factors Analysis and Classification System), which is a commonly utilized tool for investigating human contributions to aviation accidents under a widespread evaluation scheme. The objective of this study is to find the latent factors behind human errors in nuclear reactor trip events. Therefore, a method to investigate unintended trip events caused by human errors and the results will be discussed in more detail.

  20. Investigations on human error hazards in recent unintended trip events of Korean nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Jang, Tong Il; Lee, Yong Hee; Shin, Kwang Hyeon

    2012-01-01

    According to the Operational Performance Information System (OPIS), which is operated by KINS (Korea Institute of Nuclear Safety) to improve public understanding, unintended trip events caused mainly by human errors accounted for 38 cases (18.7%) from 2000 to 2011. Although the Nuclear Power Plant (NPP) industry in Korea has been making efforts to reduce the human errors which have largely contributed to trip events, the human error rate may keep increasing. Interestingly, digital based I and C systems are one of the factors reducing unintended reactor trips. Human errors, however, have occurred due to the digital based I and C systems because those systems require new or changed behaviors of the NPP operators. Therefore, it is necessary that investigations of human errors consider a new methodology to find not only tangible behaviors but also intangible behaviors such as organizational behaviors. In this study we investigated human errors to find latent factors, such as decisions and conditions, in all of the unintended reactor trip events during the last dozen years. To find them, we applied the HFACS (Human Factors Analysis and Classification System), which is a commonly utilized tool for investigating human contributions to aviation accidents under a widespread evaluation scheme. The objective of this study is to find the latent factors behind human errors in nuclear reactor trip events. Therefore, a method to investigate unintended trip events caused by human errors and the results will be discussed in more detail

  1. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    Science.gov (United States)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

    Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a full-automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are firstly preprocessed, and then a quadratic blind linear unmixing algorithm is used to get endmember abundance images. Based on mathematical morphological operation and an adaptive Otsu’s method, a binaryzation process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method can perform well and has potential for clinical applications.
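
    The final thresholding and counting stage can be sketched with scikit-image: Otsu's threshold on a single abundance (or grayscale) image, morphological cleanup, then connected-component labeling with an area filter. The hyperspectral acquisition, blind unmixing and magnification-based parameter choices of the paper are not reproduced; the image below is a tiny synthetic example.

```python
# Sketch of the thresholding / connected-component counting stage with
# scikit-image, applied to a single 2-D abundance (or grayscale) image.
# The hyperspectral unmixing that produces such an image is not reproduced here.
import numpy as np
from skimage import filters, morphology, measure

def count_cells(abundance_img, min_area=50):
    """Count bright blob-like objects in a 2-D float image."""
    thresh = filters.threshold_otsu(abundance_img)          # adaptive global threshold
    binary = abundance_img > thresh
    binary = morphology.remove_small_objects(binary, min_size=min_area)
    binary = morphology.binary_opening(binary, morphology.disk(2))
    labels = measure.label(binary, connectivity=2)           # connected components
    regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
    return len(regions)

# Tiny synthetic example: a dark background with a few bright discs.
img = np.zeros((200, 200))
for cy, cx in [(50, 50), (120, 80), (160, 150)]:
    yy, xx = np.ogrid[:200, :200]
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 15**2] = 1.0
img += np.random.default_rng(7).normal(0, 0.05, img.shape)

print("cells counted:", count_cells(img))
```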

  2. Probabilistic error bounds for reduced order modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)

    2015-07-01

    Reduced order modeling has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components could be upper-bounded with an overwhelmingly high probability. The reverse of this statement implies that the ROM algorithm can self-adapt to determine the level of the reduction needed such that the maximum resulting reduction error is below a given tolerance limit that is set by the user. (author)
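
    The truncation step such bounds reason about can be illustrated with a plain SVD: truncate a snapshot matrix, then probe new samples and compare their projection error onto the retained basis against the largest discarded singular value. This is a generic illustration under assumed synthetic data, not the manuscript's specific probabilistic bound.

```python
# Sketch of SVD-based truncation: build a reduced basis from snapshots, then
# check that the projection error of new probe samples is on the order of the
# largest discarded singular value. Generic illustration with synthetic data.
import numpy as np

rng = np.random.default_rng(8)
n, m, true_rank = 200, 60, 5

# Synthetic snapshot matrix with a rapidly decaying spectrum plus small noise.
U = np.linalg.qr(rng.normal(size=(n, true_rank)))[0]
snapshots = U @ rng.normal(size=(true_rank, m)) + 1e-6 * rng.normal(size=(n, m))

Uk, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 5
basis = Uk[:, :k]                      # reduced-order basis
print("largest discarded singular value:", s[k])

# Probe new samples drawn from (approximately) the same subspace.
probes = U @ rng.normal(size=(true_rank, 10)) + 1e-6 * rng.normal(size=(n, 10))
proj_err = np.linalg.norm(probes - basis @ (basis.T @ probes), axis=0)
print("max projection error over probes:", proj_err.max())
```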

  3. Penalised Maximum Likelihood Simultaneous Longitudinal PET Image Reconstruction with Difference-Image Priors.

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J

    2018-04-26

    reconstructions with increased counts levels. In tumour regions each method produces subtly different results in terms of preservation of tumour quantification and reconstruction root mean-squared error (RMSE). In particular, in the two-scan simulations, the DE-PML method produced tumour means in close agreement with MLEM reconstructions, while the DTV-PML method produced the lowest errors due to noise reduction within the tumour. Across a range of tumour responses and different numbers of scans, similar results were observed, with DTV-PML producing the lowest errors of the three priors and DE-PML producing the lowest bias. Similar improvements were observed in the reconstructions of the real longitudinal datasets, although imperfect alignment of the two PET images resulted in additional changes in the difference image that affected the performance of the proposed methods. Reconstruction of longitudinal datasets by penalising difference images between pairs of scans from a data series allows for noise reduction in all reconstructed images. An appropriate choice of penalty term and penalty strength allows for this noise reduction to be achieved while maintaining reconstruction performance in regions of change, either in terms of quantification of mean intensity via DE-PML, or in terms of tumour RMSE via DTV-PML. Overall, improving the image quality of longitudinal datasets via simultaneous reconstruction has the potential to improve upon currently used methods, allow dose reduction, or reduce scan time while maintaining image quality at current levels. This article is protected by copyright. All rights reserved.

  4. Heterogeneous counting on filter support media

    International Nuclear Information System (INIS)

    Long, E.; Kohler, V.; Kelly, M.J.

    1976-01-01

    Many investigators in the biomedical research area have used filter paper as the support for radioactive samples. This means that a heterogeneous counting of sample sometimes results. The count rate of a sample on a filter will be affected by positioning, degree of dryness, sample application procedure, the type of filter, and the type of cocktail used. Positioning of the filter (up or down) in the counting vial can cause a variation of 35% or more when counting tritiated samples on filter paper. Samples of varying degrees of dryness when added to the counting cocktail can cause nonreproducible counts if handled improperly. Count rates starting at 2400 CPM initially can become 10,000 CPM in 24 hours for 3 H-DNA (deoxyribonucleic acid) samples dried on standard cellulose acetate membrane filters. Data on cellulose nitrate filters show a similar trend. Sample application procedures in which the sample is applied to the filter in a small spot or on a large amount of the surface area can cause nonreproducible or very low counting rates. A tritiated DNA sample, when applied topically, gives a count rate of 4,000 CPM. When the sample is spread over the whole filter, 13,400 CPM are obtained with a much better coefficient of variation (5% versus 20%). Adding protein carrier (bovine serum albumin-BSA) to the sample to trap more of the tritiated DNA on the filter during the filtration process causes a serious beta absorption problem. Count rates which are one-fourth the count rate applied to the filter are obtained on calibrated runs. Many of the problems encountered can be alleviated by a proper choice of filter and the use of a liquid scintillation cocktail which dissolves the filter. Filter-Solv has been used to dissolve cellulose nitrate filters and filters which are a combination of cellulose nitrate and cellulose acetate. Count rates obtained for these dissolved samples are very reproducible and highly efficient

  5. Errors of absolute methods of reactor neutron activation analysis caused by non-1/E epithermal neutron spectra

    International Nuclear Information System (INIS)

    Erdtmann, G.

    1993-08-01

    A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor, α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple monitor method. However, the results of this method are very imprecise, because there are high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and of the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A great number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP) [de

  6. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  7. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under the 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near mono-energetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the protons travel in the same direction before entering the water medium; the various scattering prior to entering the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is a superposition of infinitesimal pencil beams, the maximum acceptable grid size obtained with the infinitesimal pencil beam also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
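    To make the sampling-theorem argument concrete, the sketch below (not the authors' code) estimates a maximum grid size from the essential bandwidth of a one-dimensional depth-dose profile: the grid spacing is set to half the reciprocal of the highest frequency needed to retain most of the spectral energy. The Gaussian-shaped test peak and the 98% energy threshold are illustrative assumptions standing in for the 2% reconstruction-error criterion.

        import numpy as np

        def max_grid_size(dose, dx, energy_fraction=0.98):
            """Largest grid spacing that still allows reconstruction of a 1-D dose
            profile, from the Shannon-Nyquist criterion.  The DC component is removed
            first, since a constant offset places no demand on the sampling rate."""
            ac = dose - np.mean(dose)
            spectrum = np.abs(np.fft.rfft(ac)) ** 2
            freqs = np.fft.rfftfreq(len(ac), d=dx)            # cycles per unit length
            cumulative = np.cumsum(spectrum) / spectrum.sum()
            f_max = freqs[np.searchsorted(cumulative, energy_fraction)]
            return 1.0 / (2.0 * f_max)                        # Nyquist: dx_max = 1 / (2 f_max)

        # Toy depth-dose profile: a plateau plus a Gaussian "Bragg peak" (assumed shape).
        z = np.arange(0.0, 20.0, 0.01)                        # depth in cm, 0.1 mm sampling
        dose = 0.3 + np.exp(-0.5 * ((z - 15.0) / 0.3) ** 2)
        print("maximum acceptable grid size ~ %.1f mm" % (10.0 * max_grid_size(dose, dx=0.01)))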

  8. LAWRENCE RADIATION LABORATORY COUNTING HANDBOOK

    Energy Technology Data Exchange (ETDEWEB)

    Group, Nuclear Instrumentation

    1966-10-01

    The Counting Handbook is a compilation of operational techniques and performance specifications on counting equipment in use at the Lawrence Radiation Laboratory, Berkeley. Counting notes have been written from the viewpoint of the user rather than that of the designer or maintenance man. The only maintenance instructions that have been included are those that can easily be performed by the experimenter to assure that the equipment is operating properly.

  9. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by maximum likelihood and Bayesian methods. In a simulation study we compute these estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  10. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time, while all the unobservable substitutions that actually occurred in the evolutionary history are omitted. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

  11. The Kruskal Count

    OpenAIRE

    Lagarias, Jeffrey C.; Rains, Eric; Vanderbei, Robert J.

    2001-01-01

    The Kruskal Count is a card trick invented by Martin J. Kruskal in which a magician "guesses" a card selected by a subject according to a certain counting procedure. With high probability the magician can correctly "guess" the card. The success of the trick is based on a mathematical principle related to coupling methods for Markov chains. This paper analyzes in detail two simplified variants of the trick and estimates the probability of success. The model predictions are compared with simula...

  12. Subdivision Error Analysis and Compensation for Photoelectric Angle Encoder in a Telescope Control System

    Directory of Open Access Journals (Sweden)

    Yanrui Su

    2015-01-01

    As the position sensor, the photoelectric angle encoder affects the accuracy and stability of a telescope control system (TCS). A TCS-based subdivision error compensation method for the encoder is proposed. Six types of subdivision error sources are first extracted from the mathematical expressions of the subdivision signals. Then the period-length relationships between subdivision signals and subdivision errors are deduced. An error compensation algorithm utilizing only the shaft position of the TCS is put forward, along with two control models: in Model I the algorithm is applied only to the speed loop of the TCS, and in Model II it is applied to both the speed loop and the position loop. In the context of an actual project, the elevation jitter of the telescope is discussed to decide whether DC-type subdivision error compensation is necessary. Low-speed elevation performance before and after error compensation is compared, leading to the choice of Model II. Compared with the original performance, the maximum elevation position error with DC subdivision error compensation is reduced by approximately 47.9%, from 1.42″ to 0.74″, and the elevation jitter decreases substantially. This method can compensate the encoder subdivision errors effectively and improve the stability of the TCS.

  13. Error quantification of osteometric data in forensic anthropology.

    Science.gov (United States)

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-04-10

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error of a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEM …) … Forensic Skeletal Material, 3rd edition. Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
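    The record does not spell out the TEM formula used; a minimal sketch of the relative technical error of measurement, following the classical anthropometric definition, is given below. The femur-length values in the example are hypothetical.

        import numpy as np

        def relative_tem(measurements):
            """Relative technical error of measurement (%TEM) for repeated measurements.

            measurements : array of shape (n_subjects, n_repeats) -- the same dimension
                           measured several times on each subject.
            Uses the classical anthropometric definition
                TEM = sqrt( sum_i [ sum_j x_ij^2 - (sum_j x_ij)^2 / k ] / (n * (k - 1)) )
            expressed as a percentage of the grand mean.
            """
            x = np.asarray(measurements, dtype=float)
            n, k = x.shape
            within = (x ** 2).sum(axis=1) - x.sum(axis=1) ** 2 / k
            tem = np.sqrt(within.sum() / (n * (k - 1)))
            return 100.0 * tem / x.mean()

        # Hypothetical example: femur maximum length (mm) taken twice on five skeletons.
        reps = [[452.0, 452.5], [467.0, 466.5], [431.5, 431.0], [449.0, 449.5], [458.0, 458.5]]
        print("relative TEM = %.2f %%" % relative_tem(reps))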

  14. SPERM COUNT DISTRIBUTIONS IN FERTILE MEN

    Science.gov (United States)

    Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards subfertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of fertil...

  15. CLARO: an ASIC for high rate single photon counting with multi-anode photomultipliers

    Science.gov (United States)

    Baszczyk, M.; Carniti, P.; Cassina, L.; Cotta Ramusino, A.; Dorosz, P.; Fiorini, M.; Gotti, C.; Kucewicz, W.; Malaguti, R.; Pessina, G.

    2017-08-01

    The CLARO is a radiation-hard 8-channel ASIC designed for single photon counting with multi-anode photomultiplier tubes. Each channel outputs a digital pulse when the input signal from the photomultiplier crosses a configurable threshold. The fast return to baseline, typically within 25 ns and below 50 ns in all conditions, allows counting of up to 10⁷ hits/s on each channel, with a power consumption of about 1 mW per channel. The ASIC presented here is a much improved version of the first 4-channel prototype. The threshold can be precisely set in a wide range, between 30 ke⁻ (5 fC) and 16 Me⁻ (2.6 pC). The noise of the amplifier with a 10 pF input capacitance is 3.5 ke⁻ (0.6 fC) RMS. All settings are stored in a 128-bit configuration and status register, protected against soft errors with triple modular redundancy. The paper describes the design of the ASIC at transistor level, and demonstrates its performance on the test bench.

  16. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ⇄ attention lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  17. Random and systematic sampling error when hooking fish to monitor skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden in Australian farmed yellowtail kingfish (Seriola lalandi).

    Science.gov (United States)

    Fensham, J R; Bubner, E; D'Antignana, T; Landos, M; Caraguel, C G B

    2018-05-01

    The Australian farmed yellowtail kingfish (Seriola lalandi, YTK) industry monitors skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden by pooling the fluke count of 10 hooked YTK. The random and systematic error of this sampling strategy was evaluated to assess its potential impact on treatment decisions. Fluke abundance (fluke count per fish) in a study cage (estimated 30,502 fish) was assessed five times using the current sampling protocol, and its repeatability was estimated using the repeatability coefficient (CR) and the coefficient of variation (CV). Individual body weight, fork length, fluke abundance, prevalence, intensity (fluke count per infested fish) and density (fluke count per kg of fish) were compared between 100 hooked and 100 seined YTK (assumed representative of the entire population) to estimate potential selection bias. Depending on the fluke species and age category, CR (the expected difference in parasite count between 2 sampling iterations) ranged from 0.78 to 114 flukes per fish. Capturing YTK by hooking increased the selection of fish of a weight and length in the lowest 5th percentile of the cage (RR = 5.75, 95% CI: 2.06-16.03, P-value = 0.0001). These lower-end YTK had on average an extra 31 juvenile and 6 adult Z. seriolae per kg of fish and an extra 3 juvenile and 0.4 adult B. seriolae per kg of fish, compared to the rest of the cage population (P-value …). Hooking therefore biased sampling towards the smallest and most heavily infested fish in the population, resulting in poor repeatability (more variability amongst sampled fish) and an overestimation of parasite burden in the population. In this particular commercial situation these findings supported the health management program, whereas an underestimation of parasite burden could have had a production impact on the study population. In instances where fish populations and parasite burdens are more homogeneous, sampling error may be less severe. Sampling error when capturing fish
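    The exact definitions of CR and CV used by the authors are not given in this record; the sketch below follows the common Bland-Altman convention (CR = 2.77 times the within-sample standard deviation) and uses hypothetical repeated pooled counts.

        import numpy as np

        def repeatability_stats(counts):
            """Repeatability coefficient (CR) and coefficient of variation (CV) for
            repeated pooled fluke counts, with CR = 2.77 * within-unit standard
            deviation (the Bland-Altman convention, 2.77 = 1.96 * sqrt(2)).

            counts : array of shape (n_units, n_repeats), e.g. repeated pooled
                     counts from the same cage.
            """
            x = np.asarray(counts, dtype=float)
            within_var = x.var(axis=1, ddof=1).mean()    # mean within-unit variance
            sw = np.sqrt(within_var)
            cr = 2.77 * sw                               # 95% limit for the difference between 2 repeats
            cv = 100.0 * sw / x.mean()
            return cr, cv

        # Hypothetical repeated pooled counts: 5 sampling iterations on one cage.
        pooled = [[42, 55, 38, 61, 47]]
        cr, cv = repeatability_stats(pooled)
        print("CR = %.1f flukes, CV = %.1f %%" % (cr, cv))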

  18. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which the error is corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)

  19. Total bacterial count and somatic cell count in refrigerated raw milk stored in communal tanks

    Directory of Open Access Journals (Sweden)

    Edmar da Costa Alves

    2014-09-01

    The current industry demand for dairy products with extended shelf life has resulted in new challenges for maintaining milk quality. Processing milk with high bacterial counts compromises the quality and performance of industrial products. This study aimed to evaluate the total bacterial count (TBC) and somatic cell count (SCC) in 768 samples of refrigerated raw milk from 32 communal tanks. Samples were collected in the first quarter of 2010, 2011, 2012 and 2013 and analyzed by the Laboratory of Milk Quality - LQL. Results showed that 62.5%, 37.5%, 15.6% and 27.1% of the means for TBC in 2010, 2011, 2012 and 2013, respectively, were above the values established by legislation. However, we observed a significant reduction in the levels of total bacterial count (TBC) over the periods studied. For somatic cell count, 100% of the means were below 600,000 cells/mL, complying with current Brazilian legislation. The values found for the somatic cell count suggest the adoption of effective measures for the sanitary control of the herd. However, the results must be considered with caution, as they highlight the need for quality improvements of the raw material until reliably good results are achieved.

  20. Hypothetical Outcome Plots Outperform Error Bars and Violin Plots for Inferences about Reliability of Variable Ordering.

    Science.gov (United States)

    Hullman, Jessica; Resnick, Paul; Adar, Eytan

    2015-01-01

    Many visual depictions of probability distributions, such as error bars, are difficult for users to interpret accurately. We present and study an alternative representation, Hypothetical Outcome Plots (HOPs), which animates a finite set of individual draws. In contrast to the statistical background required to interpret many static representations of distributions, HOPs require relatively little background knowledge to interpret. Instead, HOPs enable viewers to infer properties of the distribution using mental processes like counting and integration. We conducted an experiment comparing HOPs to error bars and violin plots. With HOPs, people made much more accurate judgments about plots of two and three quantities. Accuracy was similar with all three representations for most questions about distributions of a single quantity.
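    A minimal sketch of the counting idea behind HOPs (not the authors' stimulus code): the reliability of an ordering between two quantities is judged by drawing hypothetical outcomes one at a time and counting how often the ordering holds. The means and standard errors below are assumed values.

        import numpy as np

        rng = np.random.default_rng(1)

        # Two quantities summarized by mean and standard error (assumed values).
        a_mean, a_se = 10.0, 1.5
        b_mean, b_se = 12.0, 2.0

        # A HOP-style animation shows one draw per frame; a viewer effectively
        # counts how often A ends up above B.  We mimic that counting here.
        frames = 50
        a = rng.normal(a_mean, a_se, frames)
        b = rng.normal(b_mean, b_se, frames)
        print("frames where A > B: %d of %d  (~P(A>B) = %.2f)"
              % ((a > b).sum(), frames, (a > b).mean()))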

  1. Usage of liquid scintillation counting for detecting the chemiluminescence of cells and its application in medicine

    International Nuclear Information System (INIS)

    Li Tianxing; Liang Qizhong; Zou Xiaowei; Yang Zhaohen; Huang Yong; Li Huaqiang

    1995-01-01

    The liquid scintillation counting-chemiluminescence (LSC-CL) measurement of mono-photon radiance is a sensitive, handy and highly automated analytical technique. By measuring the basal CL, dependent CL and maximum phagocytic CL of polymorphonuclear leukocytes (PMN), we studied the best factor levels of the method with an orthogonal design [L9(3⁴)]. The results showed that the peak forms changed markedly (inter-group P … 10⁻⁴ M). PMN-CL in blood was measured during acute attacks in elderly patients with chronic bronchitis and in children with bronchial pneumonia. The results suggested that PMN phagocytosis was decreased. Dynamic analysis of the maximum phagocytic CL should therefore support further clinical research into the mechanisms of anti-inflammation and of injury by oxygen free radicals

  2. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Besides this, a combination of component failure and human error is often found in spectacular events. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  3. Analysis of photon count data from single-molecule fluorescence experiments

    Science.gov (United States)

    Burzykowski, T.; Szubiakowski, J.; Rydén, T.

    2003-03-01

    We consider single-molecule fluorescence experiments with data in the form of counts of photons registered over multiple time-intervals. Based on the observation schemes, linking back to works by Dehmelt [Bull. Am. Phys. Soc. 20 (1975) 60] and Cook and Kimble [Phys. Rev. Lett. 54 (1985) 1023], we propose an analytical approach to the data based on the theory of Markov-modulated Poisson processes (MMPP). In particular, we consider maximum-likelihood estimation. The method is illustrated using a real-life dataset. Additionally, the properties of the proposed method are investigated through simulations and compared to two other approaches developed by Yip et al. [J. Phys. Chem. A 102 (1998) 7564] and Molski [Chem. Phys. Lett. 324 (2000) 301].

  4. Rainflow counting revisited

    Energy Technology Data Exchange (ETDEWEB)

    Soeker, H [Deutsches Windenergie-Institut (Germany)

    1996-09-01

    As the state-of-the-art method, the rainflow counting technique is presently applied everywhere in fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is used, most of the time, as a mere data reduction technique, disregarding some of the inherent information in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process. (au)
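    For readers unfamiliar with the technique itself, a compact sketch of rainflow cycle counting in the style of ASTM E1049 is given below; it is a simplified illustration, not the tool used by the author, and the load history is invented.

        def turning_points(series):
            """Reduce a load history to its sequence of peaks and valleys."""
            tp = [series[0]]
            for x in series[1:]:
                if x == tp[-1]:
                    continue
                if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
                    tp[-1] = x            # still rising/falling: extend the extremum
                else:
                    tp.append(x)
            return tp

        def rainflow(series):
            """Simplified ASTM E1049-style rainflow count.
            Returns a list of (range, mean, cycles) with cycles = 1.0 or 0.5."""
            cycles = []
            stack = []
            for point in turning_points(series):
                stack.append(point)
                while len(stack) >= 3:
                    x = abs(stack[-1] - stack[-2])    # most recent range
                    y = abs(stack[-2] - stack[-3])    # previous range
                    if x < y:
                        break
                    if len(stack) == 3:               # y involves the start point: half cycle
                        cycles.append((y, (stack[0] + stack[1]) / 2.0, 0.5))
                        stack.pop(0)
                    else:                             # y is fully enclosed: full cycle
                        cycles.append((y, (stack[-2] + stack[-3]) / 2.0, 1.0))
                        del stack[-3:-1]
            for a, b in zip(stack, stack[1:]):        # leftover ranges count as half cycles
                cycles.append((abs(a - b), (a + b) / 2.0, 0.5))
            return cycles

        # Small invented load history, e.g. a bending-moment signal in arbitrary units.
        history = [0, 5, -3, 4, -2, 6, -4, 2, 0]
        for rng_, mean, n in rainflow(history):
            print("range %.1f  mean %.1f  cycles %.1f" % (rng_, mean, n))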

  5. Radon counting statistics - a Monte Carlo investigation

    International Nuclear Information System (INIS)

    Scott, A.G.

    1996-01-01

    Radioactive decay is a Poisson process, and so the coefficient of variation (COV) of "n" counts of a single nuclide is usually estimated as 1/√n. This is only true if the count duration is much shorter than the half-life of the nuclide. At longer count durations, the COV is smaller than the Poisson estimate. Most radon measurement methods count the alpha decays of ²²²Rn, plus the progeny ²¹⁸Po and ²¹⁴Po, and estimate the ²²²Rn activity from the sum of the counts. At long count durations, the chain decay of these nuclides means that every ²²²Rn decay must be followed by two other alpha decays. The total number of decays is "3N", where N is the number of radon decays, and the true COV of the radon concentration estimate is 1/√N, which is √3 larger than the Poisson total-count estimate of 1/√(3N). Most count periods are comparable to the half-lives of the progeny, so the relationship between COV and count time is complex. A Monte Carlo estimate of the ratio of the true COV to the Poisson estimate was carried out for a range of count periods from 1 min to 16 h and three common radon measurement methods: liquid scintillation, scintillation cell, and electrostatic precipitation of progeny. The Poisson approximation underestimates the COV by less than 20% for count durations of less than 60 min.
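    A minimal Monte Carlo sketch of the effect described here is given below, assuming standard half-lives, an idealized detector that records every alpha decay occurring inside the counting window, and radon decays spread uniformly over the window; it compares the empirical COV of the total alpha count with the naive Poisson estimate 1/√(3N).

        import numpy as np

        rng = np.random.default_rng(0)

        # Half-lives in minutes (standard values).  The Po-218 -> Pb-214 -> Bi-214
        # beta chain delays the Po-214 alpha; Po-214 itself decays almost instantly.
        T_PO218, T_PB214, T_BI214 = 3.05, 26.8, 19.7

        def simulate_alpha_count(n_rn_decays, window, rng):
            """Count alpha decays (Rn-222, Po-218, Po-214) inside a counting window,
            for radon decays occurring uniformly over the window (idealized)."""
            t_rn = rng.uniform(0.0, window, n_rn_decays)                # Rn-222 alpha times
            lam = np.log(2.0) / np.array([T_PO218, T_PB214, T_BI214])
            delays = rng.exponential(1.0 / lam, size=(n_rn_decays, 3))  # successive daughters
            t_po218 = t_rn + delays[:, 0]                               # Po-218 alpha
            t_po214 = t_po218 + delays[:, 1] + delays[:, 2]             # Po-214 alpha
            return n_rn_decays + np.sum(t_po218 < window) + np.sum(t_po214 < window)

        window = 60.0          # minutes
        mean_rn = 200          # expected Rn-222 decays per window (assumed)
        counts = np.array([simulate_alpha_count(rng.poisson(mean_rn), window, rng)
                           for _ in range(2000)])
        cov_mc = counts.std() / counts.mean()
        cov_poisson = 1.0 / np.sqrt(counts.mean())
        print("Monte Carlo COV = %.3f, naive Poisson estimate 1/sqrt(3N) = %.3f"
              % (cov_mc, cov_poisson))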

  6. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    Science.gov (United States)

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  7. Hanford whole body counting manual

    International Nuclear Information System (INIS)

    Palmer, H.E.; Rieksts, G.A.; Lynch, T.P.

    1990-06-01

    This document describes the Hanford Whole Body Counting Program as it is administered by Pacific Northwest Laboratory (PNL) in support of the US Department of Energy--Richland Operations Office (DOE-RL) and its Hanford contractors. Program services include providing in vivo measurements of internally deposited radioactivity in Hanford employees (or visitors). Specific chapters of this manual deal with the following subjects: program operational charter, authority, administration, and practices, including interpreting applicable DOE Orders, regulations, and guidance into criteria for in vivo measurement frequency, etc., for the plant-wide whole body counting services; state-of-the-art facilities and equipment used to provide the best in vivo measurement results possible for the approximately 11,000 measurements made annually; procedures for performing the various in vivo measurements at the Whole Body Counter (WBC) and related facilities including whole body counts; operation and maintenance of counting equipment, quality assurance provisions of the program, WBC data processing functions, statistical aspects of in vivo measurements, and whole body counting records and associated guidance documents. 16 refs., 48 figs., 22 tabs

  8. Platelet Count and Plateletcrit

    African Journals Online (AJOL)

    … demonstrated that neonates with late onset sepsis (bacteremia after 3 days of age) had a dramatic increase in MPV and PDW [18]. We hypothesize that as the MPV and PDW increase and the platelet count and PCT decrease in sick children, intuitively, the ratios of MPV to PCT, MPV to platelet count, PDW to PCT, and PDW to platelet ...

  9. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  10. An Adaptive Smoother for Counting Measurements

    International Nuclear Information System (INIS)

    Kondrasovs Vladimir; Coulon Romain; Normand Stephane

    2013-06-01

    Counting measurements with nuclear instruments are tricky to carry out due to the stochastic nature of radioactivity. Event counts have to be processed and filtered in order to display a stable count rate value and to allow monitoring of variations in the measured activity. Smoothers (such as the moving average) are adjusted by a time constant defined as a compromise between stability and response time. A new approach has been developed which improves the response time while maintaining count rate stability. It combines a smoother with a detection filter. A memory of counting data is processed to calculate several count rate estimates using several integration times. These estimates are then sorted in the memory from short to long integration times. A measurement position, in terms of integration time, is then chosen from this memory after a detection test. An inhomogeneity in the Poisson counting process is detected by comparing the current position estimate with the other estimates contained in the memory, with respect to the associated statistical variance calculated under the homogeneity assumption. The measurement position (historical time) and the decision to forget obsolete data or to keep useful data in memory are managed using the result of the detection test. The proposed smoother is thus an adaptive, learning algorithm that optimizes the response time while maintaining counting stability, and it converges efficiently to the best count rate estimate after an effective change in activity. The algorithm is also only weakly recursive and thus easily embedded in DSP electronics based on FPGAs or micro-controllers meeting 'real life' time requirements. (authors)
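    The sketch below illustrates the general idea of such an adaptive smoother (it is not the authors' algorithm): the integration time is extended over the stored history only while the older intervals remain statistically consistent with the current Poisson estimate, so stability is gained on a constant source and the memory is discarded after a change in activity.

        import numpy as np

        def adaptive_count_rate(counts, dt=1.0, z_max=3.0):
            """Adaptive count-rate estimate from a memory of per-interval counts.

            The integration time is grown one stored interval at a time (most recent
            first).  Each older interval is kept only if it is consistent, within
            z_max Poisson standard deviations, with the current estimate; otherwise
            the older memory is considered obsolete and the extension stops."""
            counts = np.asarray(counts, dtype=float)
            best = counts[-1] / dt                       # shortest integration time
            for k in range(2, len(counts) + 1):
                older = counts[-k]                       # next interval to fold into the memory
                expected = best * dt
                if abs(older - expected) > z_max * np.sqrt(max(expected, 1.0)):
                    break                                # inhomogeneity detected: forget older data
                best = counts[-k:].sum() / (k * dt)      # update estimate with the longer window
            return best

        # Example: stable background of ~20 counts/s followed by a step to ~60 counts/s.
        rng = np.random.default_rng(3)
        stream = np.concatenate([rng.poisson(20, 50), rng.poisson(60, 5)])
        print("smoothed rate just after the step: %.1f counts/s" % adaptive_count_rate(stream))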

  11. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
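    A small synthetic illustration of the comparison is sketched below, assuming a Gaussian predictive distribution whose mean and spread are linked to an ensemble mean m and spread s; the closed-form CRPS of a normal distribution is minimized and the result compared with the maximum likelihood fit. All coefficients and data are invented for the example.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(7)

        # Synthetic "ensemble forecasts" with mean m and spread s; observations y are
        # drawn from the assumed truth mu = 1 + 0.8*m, sigma = exp(-0.5 + 0.6*log(s)).
        n = 2000
        m = rng.normal(15.0, 5.0, n)
        s = rng.gamma(4.0, 0.5, n)
        y = rng.normal(1.0 + 0.8 * m, np.exp(-0.5 + 0.6 * np.log(s)))

        def moments(theta):
            a, b, c, d = theta
            return a + b * m, np.exp(c + d * np.log(s))

        def neg_log_lik(theta):
            mu, sigma = moments(theta)
            return -np.sum(norm.logpdf(y, mu, sigma))

        def mean_crps(theta):
            # Closed-form CRPS of a normal distribution (Gneiting & Raftery).
            mu, sigma = moments(theta)
            z = (y - mu) / sigma
            crps = sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))
            return crps.mean()

        x0 = np.array([0.0, 1.0, 0.0, 0.0])
        fit_ml = minimize(neg_log_lik, x0, method="Nelder-Mead", options={"maxiter": 4000})
        fit_crps = minimize(mean_crps, x0, method="Nelder-Mead", options={"maxiter": 4000})
        print("ML coefficients   :", np.round(fit_ml.x, 2))
        print("CRPS coefficients :", np.round(fit_crps.x, 2))   # similar when the Gaussian assumption holds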

  12. Analysis of counting data: Development of the SATLAS Python package

    Science.gov (United States)

    Gins, W.; de Groote, R. P.; Bissell, M. L.; Granados Buitrago, C.; Ferrer, R.; Lynch, K. M.; Neyens, G.; Sels, S.

    2018-01-01

    For the analysis of low-statistics counting experiments, a traditional nonlinear least squares minimization routine may not always provide correct parameter and uncertainty estimates due to the assumptions inherent in the algorithm(s). In response to this, a user-friendly Python package (SATLAS) was written to provide an easy interface between the data and a variety of minimization algorithms which are suited for analyzing low- as well as high-statistics data. The advantage of this package is that it allows the user to define their own model function and then compare different minimization routines to determine the optimal parameter values and their respective (correlated) errors. Experimental validation of the different approaches in the package is done through analysis of hyperfine structure data of ²⁰³Fr gathered by the CRIS experiment at ISOLDE, CERN.
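    The SATLAS interface itself is not reproduced here; the generic sketch below merely illustrates the issue the package addresses, namely that for a low-count spectrum a Gaussian least-squares (chi-square) fit and a Poisson maximum-likelihood fit of the same model can return noticeably different parameters. The peak model and data are invented.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(11)

        x = np.linspace(-10.0, 10.0, 81)
        def model(theta, x):
            amp, centre, width, bkg = theta
            return bkg + amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

        truth = (4.0, 1.0, 2.0, 0.5)
        counts = rng.poisson(model(truth, x))               # low-statistics spectrum

        def chi2(theta):                                     # Gaussian (least-squares) cost
            err = np.where(counts > 0, np.sqrt(counts), 1.0)
            return np.sum(((counts - model(theta, x)) / err) ** 2)

        def neg_poisson_loglik(theta):                       # Poisson-likelihood cost
            mu = np.clip(model(theta, x), 1e-12, None)
            return np.sum(mu - counts * np.log(mu))

        x0 = np.array([3.0, 0.0, 1.5, 1.0])
        fit_chi2 = minimize(chi2, x0, method="Nelder-Mead").x
        fit_pois = minimize(neg_poisson_loglik, x0, method="Nelder-Mead").x
        print("chi-square fit :", np.round(fit_chi2, 2))
        print("Poisson ML fit :", np.round(fit_pois, 2))     # typically closer to the truth at low counts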

  13. An improved approach to reduce partial volume errors in brain SPET

    International Nuclear Information System (INIS)

    Hatton, R.L.; Hatton, B.F.; Michael, G.; Barnden, L.; QUT, Brisbane, QLD; The Queen Elizabeth Hospital, Adelaide, SA

    1999-01-01

    Full text: Limitations in SPET resolution give rise to significant partial volume error (PVE) in small brain structures. We have investigated a previously published method (Muller-Gartner et al., J Cereb Blood Flow Metab 1992;16: 650-658) to correct PVE in grey matter using MRI. An MRI is registered and segmented to obtain a grey matter tissue volume, which is then smoothed to obtain resolution matched to the corresponding SPET. By dividing the original SPET by this correction map, structures can be corrected for PVE on a pixel-by-pixel basis. Since this approach is limited by space-invariant filtering, a modification was made by estimating projections for the segmented MRI and reconstructing these using parameters identical to those of the SPET. The methods were tested on simulated brain scans, reconstructed with the ordered subsets EM algorithm (8, 16, 32, 64 equivalent EM iterations). The new method provided better recovery visually. For 32 EM iterations, recovery coefficients were calculated for grey matter regions. The effects of potential errors in the method were examined. Mean recovery was unchanged with a one-pixel registration error, the maximum error found in most registration programs. Segmentation errors of more than 2 pixels result in loss of accuracy for small structures. The method promises to be useful for reducing PVE in brain SPET
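    A minimal sketch of the grey-matter partial volume correction described above (the space-invariant, Muller-Gartner-style variant, not the authors' projection-based modification) is given below; the volume size, resolution and threshold are assumptions for the example.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def pve_correct(spet, gm_mask, fwhm_vox, threshold=0.25):
            """Grey-matter partial-volume correction by voxel-wise division.

            spet      : reconstructed SPET volume (registered to the MRI)
            gm_mask   : binary grey-matter segmentation from the registered MRI
            fwhm_vox  : SPET resolution expressed as FWHM in voxels
            threshold : minimum smoothed GM fraction at which to apply the correction
            """
            sigma = fwhm_vox / 2.355                          # FWHM -> Gaussian sigma
            correction_map = gaussian_filter(gm_mask.astype(float), sigma)
            corrected = np.zeros_like(spet, dtype=float)
            ok = correction_map > threshold                   # avoid dividing by near-zero GM fractions
            corrected[ok] = spet[ok] / correction_map[ok]
            return corrected

        # Tiny synthetic example: a 1-voxel-thick grey-matter ribbon in a 32^3 volume.
        gm = np.zeros((32, 32, 32)); gm[10:22, 10:22, 16] = 1
        true_activity = 100.0 * gm
        spet = gaussian_filter(true_activity, 8.0 / 2.355)    # simulate ~8-voxel FWHM resolution loss
        recovered = pve_correct(spet, gm, fwhm_vox=8.0)
        print("mean GM value before/after correction: %.1f / %.1f"
              % (spet[gm > 0].mean(), recovered[gm > 0].mean()))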

  14. Counting probe

    International Nuclear Information System (INIS)

    Matsumoto, Haruya; Kaya, Nobuyuki; Yuasa, Kazuhiro; Hayashi, Tomoaki

    1976-01-01

    An electron counting method has been devised and tested for the purpose of measuring electron temperature and density, the most fundamental quantities characterizing plasma conditions. Electron counting is a method of counting the electrons in a plasma directly by equipping a probe with a secondary electron multiplier. It has three advantages: adjustable sensitivity, the high sensitivity of the secondary electron multiplier, and directionality. Sensitivity adjustment is performed by changing the size of the collecting hole (pin hole) on the entrance face of the multiplier. The probe can be used as a direct-reading thermometer for electron temperature because it needs to collect only a very small number of electrons, so it does not disturb the surrounding plasma, and a narrow sweep width of the probe voltage is sufficient. It can therefore measure anisotropy more sensitively than a Langmuir probe, and it can be used for very low density plasma. Although many problems concerning anisotropy remain, a computer simulation has been carried out. It is also planned to install a Helmholtz coil in the vacuum chamber to eliminate the effect of the Earth's magnetic field. In practical experiments, measurement with a Langmuir probe and an emission probe mounted on a movable structure, comparison with the results obtained in a reversed magnetic field using the Helmholtz coil, and measurement of ion acoustic waves are scheduled. (Wakatsuki, Y.)

  15. A mind you can count on: validating breath counting as a behavioral measure of mindfulness

    Directory of Open Access Journals (Sweden)

    Daniel B Levinson

    2014-10-01

    Mindfulness practice of present moment awareness promises many benefits, but has eluded rigorous behavioral measurement. To date, research has relied on self-reported mindfulness or heterogeneous mindfulness trainings to infer skillful mindfulness practice and its effects. In four independent studies with over 400 total participants, we present the first construct validation of a behavioral measure of mindfulness, breath counting. We found it was reliable, correlated with self-reported mindfulness, differentiated long-term meditators from age-matched controls, and was distinct from sustained attention and working memory measures. In addition, we employed breath counting to test the nomological network of mindfulness. As theorized, we found skill in breath counting associated with more meta-awareness, less mind wandering, better mood, and greater nonattachment (i.e. less attentional capture by distractors formerly paired with reward). We also found in a randomized online training study that 4 weeks of breath counting training improved mindfulness and decreased mind wandering relative to working memory training and no-training controls. Together, these findings provide the first evidence for breath counting as a behavioral measure of mindfulness.

  16. Analyzing the installation angle error of a SAW torque sensor

    International Nuclear Information System (INIS)

    Fan, Yanping; Ji, Xiaojun; Cai, Ping

    2014-01-01

    When a torque is applied to a shaft, the normal strain oriented at ±45° to the shaft axis is at its maximum, which requires two one-port SAW resonators to be bonded to the shaft at ±45° to the shaft axis. In order to make the SAW torque sensitivity high enough, the installation angle error of the two SAW resonators must be confined within ±5° according to our design requirement. However, few studies have so far been devoted to the installation angle analysis of SAW torque sensors, and the angle error has usually been obtained by a manual method. Hence, we propose an approximation method to analyze the angle error. First, according to the mechanism by which the SAW device senses torque, the SAW torque sensitivity is deduced based on the linear piezoelectric constitutive equation and perturbation theory. Then, when a torque is applied to the tested shaft, the stress condition of two SAW resonators mounted at an angle deviating from ±45° to the shaft axis is analyzed. The angle error is obtained by means of the torque sensitivities of the two orthogonal SAW resonators. Finally, the torque measurement system is constructed and the loading and unloading experiments are performed twice. The torque sensitivities of the two SAW resonators are obtained by applying averaging and the least squares method to the experimental results. Based on the derived angle error estimation function, the angle error is estimated to be about 3.447°, which is close to the actual angle error of 2.915°. The difference between the estimated angle and the actual angle is discussed. The validity of the proposed angle error analysis method is confirmed by the experimental results. (technical design note)

  17. Sensitivity of dose-finding studies to observation errors.

    Science.gov (United States)

    Zohar, Sarah; O'Quigley, John

    2009-11-01

    The purpose of Phase I designs is to estimate the MTD (maximum tolerated dose, in practice a dose with some given acceptable rate of toxicity) while, at the same time, minimizing the number of patients treated at doses too far removed from the MTD. Our purpose here is to investigate the sensitivity of conclusions from dose-finding designs to recording or observation errors. Certain toxicities may go undetected and, conversely, certain non-toxicities may be incorrectly recorded as dose-limiting toxicities. Recording inaccuracies would be expected to have an influence on final and within-trial recommendations, and in this paper we study this question in greater depth. We focus in particular on three designs in current use: the standard '3+3' design, the grouped up-and-down design [M. Gezmu, N. Flournoy, Group up-and-down designs for dose finding. Journal of Statistical Planning and Inference 2006; 136 (6): 1749-1764.] and the continual reassessment method (CRM, [J. O'Quigley, M. Pepe, L. Fisher, Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics 1990; 46 (1): 33-48.]). A non-toxicity incorrectly recorded as a toxicity (error of the first kind) has a greater influence in general than the converse (error of the second kind). These results are illustrated via figures which suggest that the standard '3+3' design in particular is sensitive to errors of the second kind. Such errors can have a very important impact on drug development in that, if carried through to the Phase 2 and Phase 3 studies, they can significantly increase the probability of failure to detect efficacy as a result of having delivered an inadequate dose.

  18. Principles of correlation counting

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1975-01-01

    A review is given of the various applications which have been made of correlation techniques in the field of nuclear physics, in particular for absolute counting. Whereas in most cases the usual coincidence method will be preferable for its simplicity, correlation counting may be the only possible approach in cases where the two radiations of the cascade cannot be well separated or where there is a long-lived intermediate state. The measurement of half-lives and of the count rates of spurious pulses is also briefly discussed. The various experimental situations lead to different ways in which the correlation method is best applied (covariance technique with one or with two detectors, application of correlation functions, etc.). Formulae are given for some simple model cases, neglecting dead-time corrections.

  19. System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong

    2016-03-21

    A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser is accomplished for the first time to the best of the authors' knowledge. Dual-frequency laser beams that are orthogonally linear polarized were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and other 5DOF geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ± 0.15 μm in the x direction and ± 0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ± 0.18″ and ± 0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.

  20. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  1. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k²

  2. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Directory of Open Access Journals (Sweden)

    Jiye HUANG

    2014-05-01

    This paper presents an improved minimum error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm strikes a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step-size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have demonstrated the high accuracy and efficiency of the algorithm, which make it highly suited for real-time applications.
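    As an illustration of the minimum-error stepping principle (in Python rather than the authors' Verilog implementation), the sketch below interpolates a straight line by choosing, at each step, the candidate point with the smaller deviation from the ideal curve, which is why the interpolation error stays within half a step.

        def min_error_line(xe, ye):
            """Minimum-error interpolation of a straight line from (0, 0) to (xe, ye),
            with xe >= ye >= 0.  At each step the point (x+1, y) or (x+1, y+1) with the
            smaller deviation from the ideal line y*xe - x*ye = 0 is chosen, so the
            interpolation error never exceeds half a step."""
            x = y = 0
            points = [(x, y)]
            while x < xe:
                # Deviation of each candidate point from the ideal line (scaled by xe).
                err_straight = abs((y    ) * xe - (x + 1) * ye)
                err_diagonal = abs((y + 1) * xe - (x + 1) * ye)
                x += 1
                if err_diagonal < err_straight:
                    y += 1
                points.append((x, y))
            return points

        # Example: interpolate a line to (10, 4); each step moves one unit in x.
        for p in min_error_line(10, 4):
            print(p)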

  3. Total number of tillers of different accessions of Panicum maximum Jacq.

    Directory of Open Access Journals (Sweden)

    Thiago Perez Granato

    2012-12-01

    The productivity of forage grasses is due to the continuous emission of leaves and tillers, ensuring the restoration of leaf area after cutting or grazing and thus the sustainability of the forage. This study aimed to assess the total number of tillers in different accessions of Panicum maximum Jacq. The experiment was carried out in a field belonging to the Instituto de Zootecnia located in Nova Odessa / SP. Two new accessions of Panicum maximum and two commercial cultivars were evaluated. The cultivars tested were Aruana, Milenio, NO 2487 and NO 78, the two latter belonging to the germplasm collection of the IZ. The experimental design was a randomized complete block with four replications. The experimental area consisted of 16 plots of 10 m2 (5 x 2 m) each. The experimental area was analyzed and, according to the results, received dolomitic limestone corresponding to 2 t/ha two months before the implementation of the experiment. Sowing was done by broadcasting, together with 80 kg/ha of P2O5 in the form of single superphosphate. Sixty days after implantation of the experiment, the plots were cut to a uniform height of about 15 cm. After this, 250 g of 20-00-20 fertilizer was applied per plot. Thirty days after this standardization cut, the total number of tillers of the cultivars was evaluated using a 0.5 x 0.5 m metal frame thrown at random onto each of the 16 plots, leaving one meter from each edge, and all tillers within the frame were counted. After counting all tillers, the plots were cut again at a height of approximately 15 cm. The second evaluation took place after thirty days, when the total number of tillers was counted again following the same procedure. The results were analyzed by the Tukey test at 5% after transforming the data to log(x). For the first evaluation there was no statistical difference in the total number of tillers between cultivars. But, in the second evaluation, the total number of tillers of NO 78

  4. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    Science.gov (United States)

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
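    A small numerical sketch of the resulting canonical distribution and its marginals is given below; the two-parameter grid, the toy error surface E and the value of β are assumptions for illustration only.

        import numpy as np

        # Maximum-entropy (canonical) distribution over a 2-D seabed parameter grid,
        # p(theta) ~ exp(-beta * E(theta)), followed by marginalisation.
        c_ratio = np.linspace(0.95, 1.15, 201)        # sound speed ratio at the seabed surface
        src_lvl = np.linspace(140.0, 160.0, 201)      # towed-source level (dB)
        C, S = np.meshgrid(c_ratio, src_lvl, indexing="ij")

        E = (C - 1.05) ** 2 / 0.002 + (S - 151.0) ** 2 / 8.0   # toy error function
        beta = 1.0                                              # sensitivity factor fixed by the <E> constraint

        p = np.exp(-beta * E)
        p /= p.sum()                                            # normalise the canonical distribution

        marginal_c = p.sum(axis=1)                              # integrate out the source level
        best = c_ratio[np.argmax(marginal_c)]
        print("marginal mode of the sound speed ratio: %.3f" % best)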

  5. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    Science.gov (United States)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  6. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    Science.gov (United States)

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance as well as the channel capacity of these systems and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude BER improvements and a 0.1 nats of capacity improvement over the conventional chip-level OCDMA systems at a coding rate of 1/10.

  7. Photon-counting 1.0 GHz-phase-modulation fluorometer

    International Nuclear Information System (INIS)

    Mizuno, T.; Nakao, S.; Mizutani, Y.; Iwata, T.

    2015-01-01

    We have constructed an improved version of a photon-counting phase-modulation fluorometer (PC-PMF) with a maximum modulation frequency of 1.0 GHz, where a phase domain measurement is conducted with a time-correlated single-photon-counting electronics. While the basic concept of the PC-PMF has been reported previously by one of the authors, little attention has been paid to its significance, other than its weak fluorescence measurement capability. Recently, we have recognized the importance of the PC-PMF and its potential for fluorescence lifetime measurements. One important aspect of the PC-PMF is that it enables us to perform high-speed measurements that exceed the frequency bandwidths of the photomultiplier tubes that are commonly used as fluorescence detectors. We describe the advantages of the PC-PMF and demonstrate its usefulness based on fundamental performance tests. In our new version of the PC-PMF, we have used a laser diode (LD) as an excitation light source rather than the light-emitting diode that was used in the primary version. We have also designed a simple and stable LD driver to modulate the device. Additionally, we have obtained a sinusoidal histogram waveform that has multiple cycles within a time span to be measured, which is indispensable for precise phase measurements. With focus on the fluorescence intensity and the resolution time, we have compared the performance of the PC-PMF with that of a conventional PMF using the analogue light detection method

  9. Effect of asymmetrical transfer coefficients of a non-polarizing beam splitter on the nonlinear error of the polarization interferometer

    Science.gov (United States)

    Zhao, Chen-Guang; Tan, Jiu-Bin; Liu, Tao

    2010-09-01

    The mechanism of a non-polarizing beam splitter (NPBS) with asymmetrical transfer coefficients causing the rotation of polarization direction is explained in principle, and the measurement nonlinear error caused by NPBS is analyzed based on Jones matrix theory. Theoretical calculations show that the nonlinear error changes periodically, and the error period and peak values increase with the deviation between transmissivities of p-polarization and s-polarization states. When the transmissivity of p-polarization is 53% and that of s-polarization is 48%, the maximum error reaches 2.7 nm. The imperfection of NPBS is one of the main error sources in simultaneous phase-shifting polarization interferometer, and its influence can not be neglected in the nanoscale ultra-precision measurement.
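
    The rotation of the polarization direction by asymmetric transfer coefficients can be illustrated with a short Jones-matrix calculation. The sketch below uses the 53%/48% intensity transmissivities quoted above for a 45° linear input state; it is a simplified model of the NPBS transmission alone (no differential phase), not of the full simultaneous phase-shifting interferometer.

        import numpy as np

        Tp, Ts = 0.53, 0.48                       # intensity transmissivities from the abstract
        tp, ts = np.sqrt(Tp), np.sqrt(Ts)         # amplitude transmission coefficients

        J_npbs = np.array([[tp, 0.0],
                           [0.0, ts]])            # simplified NPBS Jones matrix (no phase terms)

        E_in = np.array([1.0, 1.0]) / np.sqrt(2)  # linear polarization at 45 degrees
        E_out = J_npbs @ E_in

        azimuth_in = np.degrees(np.arctan2(E_in[1], E_in[0]))
        azimuth_out = np.degrees(np.arctan2(E_out[1], E_out[0]))
        print(f"polarization azimuth rotated by {azimuth_out - azimuth_in:.3f} degrees")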

  10. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  11. Delta count-rate monitoring system

    International Nuclear Information System (INIS)

    Van Etten, D.; Olsen, W.A.

    1985-01-01

    A need for a more effective way to rapidly search for gamma-ray contamination over large areas led to the design and construction of a very sensitive gamma detection system. The delta count-rate monitoring system was installed in a four-wheel-drive van instrumented for environmental surveillance and accident response. The system consists of four main sections: (1) two scintillation detectors, (2) high-voltage power supply amplifier and single-channel analyzer, (3) delta count-rate monitor, and (4) count-rate meter and recorder. The van's 6.5-kW generator powers the standard nuclear instrument modular design system. The two detectors are mounted in the rear corners of the van and can be run singly or jointly. A solid-state bar-graph count-rate meter mounted on the dashboard can be read easily by both the driver and passenger. A solid-state strip chart recorder shows trends and provides a permanent record of the data. An audible alarm is sounded at the delta monitor and at the dashboard count-rate meter if a detected radiation level exceeds the set background level by a predetermined amount

  12. Minimization of the effect of errors in approximate radiation view factors

    International Nuclear Information System (INIS)

    Clarksean, R.; Solbrig, C.

    1993-01-01

    The maximum temperature of irradiated fuel rods in storage containers was investigated taking credit only for radiation heat transfer. Estimating view factors is often easy, but in many references the emphasis is placed on calculating the quadruple integrals exactly. Selecting different view factors in the view factor matrix as independent yields somewhat different view factor matrices. In this study, ten to twenty percent error in the view factors produced small errors in the temperature, well within the uncertainty due to the surface emissivity uncertainty. However, the enclosure and reciprocity principles must be strictly observed, or large errors in the temperatures and wall heat flux result (up to a factor of 3). More than just being an aid for calculating the dependent view factors, satisfying these principles, particularly reciprocity, is more important than the calculation accuracy of the view factors. Comparison to experiment showed that the result of the radiation calculation was definitely conservative, as desired, in spite of the approximations to the view factors.
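
    The reciprocity and enclosure (summation) principles stressed above are easy to enforce exactly in simple cases. The sketch below, for a hypothetical enclosure of three convex surfaces (not the fuel-rod geometry of the study), builds the complete view factor matrix from the areas alone using only those two principles and then checks the residuals.

        import numpy as np

        def viewfactors_convex3(A):
            """View-factor matrix for an enclosure of three convex surfaces (F_ii = 0),
            built only from reciprocity and the enclosure (summation) rule."""
            A1, A2, A3 = A
            F = np.zeros((3, 3))
            F[0, 1] = (A1 + A2 - A3) / (2.0 * A1)
            F[0, 2] = (A1 + A3 - A2) / (2.0 * A1)
            F[1, 2] = (A2 + A3 - A1) / (2.0 * A2)
            # remaining entries follow from reciprocity A_i * F_ij = A_j * F_ji
            for i in range(3):
                for j in range(i):
                    F[i, j] = A[j] * F[j, i] / A[i]
            return F

        def residuals(A, F):
            recip = np.abs(A[:, None] * F - (A[:, None] * F).T).max()
            rows = np.abs(F.sum(axis=1) - 1.0).max()
            return recip, rows

        A = np.array([1.0, 1.5, 2.0])   # hypothetical surface areas
        F = viewfactors_convex3(A)
        print(F)
        print("max reciprocity / enclosure residual:", residuals(A, F))

    Both residuals are zero to machine precision by construction, which is the sense in which the dependent view factors should be chosen to satisfy the principles rather than integrated independently.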

  13. Probing dark energy with cluster counts and cosmic shear power spectra: including the full covariance

    International Nuclear Information System (INIS)

    Takada, Masahiro; Bridle, Sarah

    2007-01-01

    Several dark energy experiments are available from a single large-area imaging survey and may be combined to improve cosmological parameter constraints and/or test inherent systematics. Two promising experiments are cosmic shear power spectra and counts of galaxy clusters. However, the two experiments probe the same cosmic mass density field in large-scale structure, therefore the combination may be less powerful than first thought. We investigate the cross-covariance between the cosmic shear power spectra and the cluster counts based on the halo model approach, where the cross-covariance arises from the three-point correlations of the underlying mass density field. Fully taking into account the cross-covariance, as well as non-Gaussian errors on the lensing power spectrum covariance, we find a significant cross-correlation between the lensing power spectrum signals at multipoles l ∼ 10³ and the cluster counts containing halos with masses M ≳ 10¹⁴ M_⊙. Including the cross-covariance for the combined measurement degrades and in some cases improves the total signal-to-noise (S/N) ratios up to ∼±20% relative to when the two are independent. For cosmological parameter determination, the cross-covariance has a smaller effect as a result of working in a multi-dimensional parameter space, implying that the two observables can be considered independent to a good approximation. We also discuss the fact that cluster count experiments using lensing-selected mass peaks could be more complementary to cosmic shear tomography than mass-selected cluster counts of the corresponding mass threshold. Using lensing-selected clusters with a realistic usable detection threshold ((S/N)_cluster ∼ 6 for a ground-based survey), the uncertainty on each dark energy parameter may be roughly halved by the combined experiments, relative to using the power spectra alone.

  14. DC KIDS COUNT e-Databook Indicators

    Science.gov (United States)

    DC Action for Children, 2012

    2012-01-01

    This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…

  15. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
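
    A compact illustration of the fitting idea is given below with synthetic data: the observed reference-source rate is fit to a paralyzable model m = n·exp(−nτ) in which the true total rate n is recovered numerically from the observed total rate and the resolving time τ is the only free parameter; correction factors then follow as the ratio of the true reference rate to the fitted curve. The rates, resolving time and noise level are assumptions for the sketch, not values from the paper.

        import numpy as np
        from scipy.optimize import brentq, curve_fit

        rng = np.random.default_rng(1)
        tau_true = 2.0e-6      # s, resolving time used only to make the synthetic data
        R_ref = 2.0e4          # cps, known true count rate of the reference source

        def invert_paralyzable(m, tau):
            """Solve m = n*exp(-n*tau) for the true rate n (low-rate branch, n < 1/tau)."""
            return np.array([brentq(lambda n: n * np.exp(-n * tau) - mi, 0.0, 1.0 / tau)
                             for mi in m])

        # Synthetic acquisition: true total rate ramps up; observed rates are paralyzed + noisy
        n_tot = np.linspace(2.0e4, 1.5e5, 30)                # true total rates (cps)
        m_tot = n_tot * np.exp(-n_tot * tau_true)            # observed total rates
        m_ref_clean = R_ref * np.exp(-n_tot * tau_true)      # observed reference rates
        m_ref = m_ref_clean + rng.normal(0.0, np.sqrt(m_ref_clean))

        # Fitted model: observed reference rate as a function of the *observed* total rate,
        # with the resolving time (in microseconds) as the only free parameter
        def ref_model(m_observed_total, tau_us):
            tau = tau_us * 1e-6
            n = invert_paralyzable(m_observed_total, tau)
            return R_ref * np.exp(-n * tau)

        tau_fit, _ = curve_fit(ref_model, m_tot, m_ref, p0=[1.0], bounds=(0.01, 3.0))
        correction = R_ref / ref_model(m_tot, tau_fit[0])    # factors to multiply image counts
        print(f"fitted tau = {tau_fit[0]:.2f} us (true {tau_true * 1e6:.2f} us)")

    Because the correction factors come from the smooth fitted curve rather than the raw reference counts, they inherit the precision of the total-rate measurement rather than the statistical noise of the reference source, which is the advantage claimed above.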

  16. In vivo counting of uranium

    International Nuclear Information System (INIS)

    Palmer, H.E.

    1985-03-01

    A state-of-the-art radiation detector system consisting of six individually mounted intrinsic germanium planar detectors, each 20 cm² by 13 mm thick, mounted together such that the angle of the whole system can be changed to match the slope of the chest of the person being counted, is described. The sensitivity of the system for counting uranium and plutonium in vivo and the procedures used in calibrating the system are also described. Some results of counts done on uranium mill workers are presented. 15 figs., 2 tabs

  17. A high count rate position decoding and energy measuring method for nuclear cameras using Anger logic detectors

    International Nuclear Information System (INIS)

    Wong, W.H.; Li, H.; Uribe, J.

    1998-01-01

    A new method for processing signals from Anger position-sensitive detectors used in gamma cameras and PET is proposed for very high count-rate imaging where multiple-event pileups are the norm. This method is designed to sort out and recover every impinging event from multiple-event pileups while maximizing the collection of scintillation signal for every event to achieve optimal accuracy in the measurement of energy and position. For every detected event, this method cancels the remnant signals from previous events, and excludes the pileup of signals from following events. The remnant subtraction is exact even for multiple pileup events. A prototype circuit for energy recovery demonstrated that the maximum count rates can be increased by more than 10 times compared to the pulse-shaping method, and the energy resolution is as good as pulse shaping (or fixed integration) at low count rates. At 2 × 10⁶ events/sec on NaI(Tl), the true counts acquired with this method are 3.3 times more than with the delay-line clipping method (256 ns clipping) due to events recovered from pileups. Pulse-height spectra up to 3.5 × 10⁶ events/sec have been studied. Monte Carlo simulation studies have been performed for image-quality comparisons between different processing methods

  18. Artificial Intelligence as a Business Forecasting and Error Handling Tool

    OpenAIRE

    Md. Tabrez Quasim; Rupak Chattopadhyay

    2015-01-01

    Any business enterprise must rely heavily on how well it can predict future happenings. To cope with modern global customer demand, technological challenges, market competition, etc., any organization is compelled to foresee the future with maximum impact and the least chance of error. The traditional forecasting approaches have some limitations, which is why the business world is adopting modern Artificial Intelligence based forecasting techniques. This paper has tried to presen...

  19. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  20. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    Science.gov (United States)

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  1. Characterization of photon-counting multislit breast tomosynthesis.

    Science.gov (United States)

    Berggren, Karl; Cederström, Björn; Lundqvist, Mats; Fredenberg, Erik

    2018-02-01

    It has been shown that breast tomosynthesis may improve sensitivity and specificity compared to two-dimensional mammography, resulting in an increased detection rate of cancers or lowered call-back rates. The purpose of this study is to characterize a spectral photon-counting multislit breast tomosynthesis system that is able to do single-scan spectral imaging with multiple collimated x-ray beams. The system differs in many aspects compared to conventional tomosynthesis using energy-integrating flat-panel detectors. The investigated system was a prototype consisting of a dual-threshold photon-counting detector with 21 collimated line detectors scanning across the compressed breast. A review of the system is done in terms of detector, acquisition geometry, and reconstruction methods. Three reconstruction methods were used: simple back-projection, filtered back-projection and an iterative algebraic reconstruction technique. The image quality was evaluated by measuring the modulation transfer function (MTF), normalized noise-power spectrum, detective quantum efficiency (DQE), and artifact spread function (ASF) on reconstructed spectral tomosynthesis images for a total-energy bin (defined by a low-energy threshold calibrated to remove electronic noise) and for a high-energy bin (with a threshold calibrated to split the spectrum in roughly equal parts). Acquisition was performed using a 29 kVp W/Al x-ray spectrum at a 0.24 mGy exposure. The difference in MTF between the two energy bins was negligible, that is, there was no energy dependence on resolution. The MTF dropped to 50% at 1.5 lp/mm to 2.3 lp/mm in the scan direction and 2.4 lp/mm to 3.3 lp/mm in the slit direction, depending on the reconstruction method. The full width at half maximum of the ASF was found to range from 13.8 mm to 18.0 mm for the different reconstruction methods. The zero-frequency DQE of the system was found to be 0.72. The fraction of counts in the high-energy bin was measured to be 59% of the

  2. EM Adaptive LASSO – A Multilocus Modeling Strategy for Detecting SNPs Associated With Zero-inflated Count Phenotypes

    Directory of Open Access Journals (Sweden)

    Himel eMallick

    2016-03-01

    Full Text Available Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeros due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely
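
    The zero-inflated likelihood such methods build on can be written down compactly. The sketch below is not the EM adaptive LASSO of the article; it fits a plain (unpenalized, covariate-free) zero-inflated Poisson model to simulated counts by direct maximum likelihood, with the mixing proportion and Poisson mean chosen as illustrative assumed values.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit, gammaln

        rng = np.random.default_rng(2)

        # Simulate a zero-inflated Poisson phenotype: structural zero w.p. pi, else Poisson(lam)
        n, pi_true, lam_true = 2000, 0.35, 2.5
        structural_zero = rng.random(n) < pi_true
        y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

        def negloglik(theta, y):
            pi, lam = expit(theta[0]), np.exp(theta[1])   # map unconstrained params to (0,1), (0,inf)
            logp_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))                    # P(y = 0)
            logp_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1.0)   # (1-pi)*Poisson pmf
            return -np.sum(np.where(y == 0, logp_zero, logp_pos))

        fit = minimize(negloglik, x0=np.zeros(2), args=(y,), method="BFGS")
        pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
        print(f"pi_hat = {pi_hat:.3f} (true {pi_true}), lambda_hat = {lam_hat:.3f} (true {lam_true})")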

  3. Alpha scintillation radon counting

    International Nuclear Information System (INIS)

    Lucas, H.F. Jr.

    1977-01-01

    Radon counting chambers which utilize the alpha-scintillation properties of silver activated zinc sulfide are simple to construct, have a high efficiency, and, with proper design, may be relatively insensitive to variations in the pressure or purity of the counter filling. Chambers which were constructed from glass, metal, or plastic in a wide variety of shapes and sizes were evaluated for the accuracy and the precision of the radon counting. The principles affecting the alpha-scintillation radon counting chamber design and an analytic system suitable for a large scale study of the ²²²Rn and ²²⁶Ra content of either air or other environmental samples are described. Particular note is taken of those factors which affect the accuracy and the precision of the method for monitoring radioactivity around uranium mines

  4. Ovarian volume and antral follicle count assessed by MRI and transvaginal ultrasonography: a methodological study.

    Science.gov (United States)

    Leonhardt, Henrik; Gull, Berit; Stener-Victorin, Elisabet; Hellström, Mikael

    2014-03-01

    Ultrasonographic measurements of ovarian volume and antral follicle count are of clinical importance as diagnostic features of polycystic ovarian syndrome (PCOS), and as a parameter in estimation of ovarian follicular reserve in infertility care. To compare two-dimensional (2D)/three-dimensional (3D) transvaginal ultrasonography (TVUS) and magnetic resonance imaging (MRI) for estimation of ovarian volume and antral follicle count, and to assess reproducibility and inter-observer agreement of MRI measurements. Volumes of 172 ovaries in 99 women aged 21-37 years were calculated (length x width x height x 0.523) with conventional 2D TVUS and 2D MRI. Semi-automatic estimates of ovarian volumes were obtained by 3D MRI. Antral follicles were counted manually on 2D MRI and automatically by 3D TVUS (SonoAVC), and stratified according to follicle size. Mean ovarian volume assessed by 2D TVUS (13.1 ± 6.4 mL) was larger than assessed by 2D MRI (9.6 ± 4.1) and 3D MRI (11.4 ± 4.5) (P 0.77. 2D MRI reveals more antral follicles, especially of small size, than 3D TVUS. Ovarian volume estimation by MRI provides smaller volumes than by the reference standard 2D TVUS. Ovarian volume estimation by 3D MRI, allowing independence of non-ellipsoid ovarian shape measurement errors, provides volumes closer to 2D TVUS values than does 2D MRI. Reproducibility and inter-observer agreement of 2D MRI measurements of ovarian volume and total follicle count are good.

  5. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, selected according to the magnitude of x; the large-argument region corresponds to |x| ≥ 4.0. In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
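
    The caution in item 5 is easy to demonstrate: for large arguments the subtraction 1.0 − erf(x) loses all significant digits, while a direct complementary-error-function routine does not. The sketch below uses Python's math module in place of the Fortran ERF/ERFC routines described.

        import math

        for x in (1.0, 3.0, 6.0, 10.0):
            direct = math.erfc(x)             # complementary error function computed directly
            via_identity = 1.0 - math.erf(x)  # erfc(x) = 1 - erf(x), prone to cancellation
            rel_err = abs(via_identity - direct) / direct
            print(f"x={x:4.1f}  erfc={direct:.3e}  1-erf={via_identity:.3e}  rel. error={rel_err:.1e}")

    For x ≥ 6 the identity-based value collapses to exactly zero in double precision, i.e. total loss of significance, while the direct routine still returns a meaningful result.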

  6. Effect of dopamine injection on the hemocyte count and prophenoloxidase system of the white shrimp Litopenaeus vannamei

    Science.gov (United States)

    Pan, Luqing; Hu, Fawen; Zheng, Debin

    2011-09-01

    Effects of dopamine injection on the hemocyte count, phenoloxidase activity, serine proteinase activity, proteinase inhibitor activity and α2-macroglobulin-like activity in L. vannamei were studied. Results showed that dopamine injection resulted in a significant effect on the parameters measured ( P < 0.05), while no significant difference was observed in the control group (0.85% NaCl). In the experimental groups, the hemocyte count reached the minimum in 3 h; granular and semi-granular cells became stable after 12 h and hyaline cells and the total hemocyte count became stable after 18 h. Phenoloxidase activity reached the minimum in 6 h, and then became stable after 9 h. Serine protease activity and proteinase inhibitor activity reached the minimum in 3 h, and α2-macroglobulin-like activity reached the maximum in 3 h, and all the three parameters became stable after 12 h. The results suggest that the activating mechanisms of the proPO system triggered by dopamine are different from those triggered by invasive agents or spontaneously activated under a normal physical condition.

  7. Hypothetical Outcome Plots Outperform Error Bars and Violin Plots for Inferences about Reliability of Variable Ordering.

    Directory of Open Access Journals (Sweden)

    Jessica Hullman

    Full Text Available Many visual depictions of probability distributions, such as error bars, are difficult for users to accurately interpret. We present and study an alternative representation, Hypothetical Outcome Plots (HOPs), that animates a finite set of individual draws. In contrast to the statistical background required to interpret many static representations of distributions, HOPs require relatively little background knowledge to interpret. Instead, HOPs enable viewers to infer properties of the distribution using mental processes like counting and integration. We conducted an experiment comparing HOPs to error bars and violin plots. With HOPs, people made much more accurate judgments about plots of two and three quantities. Accuracy was similar with all three representations for most questions about distributions of a single quantity.

  8. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    Science.gov (United States)

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can be referential for applications of the slanted edge MTF measurement method.

  9. Correlation between discharged worms and fecal egg counts in human clonorchiasis.

    Directory of Open Access Journals (Sweden)

    Jae-Hwan Kim

    2011-10-01

    Full Text Available BACKGROUND: Stool examination by counting eggs per gram of feces (EPGs) is the best method to estimate the worm burden of Clonorchis sinensis in infected humans. The present study investigated the correlation between EPGs and worm burden in human clonorchiasis. METHODS AND FINDINGS: A total of 60 residents, 50 egg-positive and 10 egg-negative, in Sancheong-gun, Korea, participated in this worm collection trial in 2006-2009. They were diagnosed by egg positivity in feces using the Kato-Katz method. After administration of praziquantel, they were purged with cathartics on the next day, and then discharged adult worms were collected from their feces. Their EPGs ranged from 0 to 65,544. Adult worms of C. sinensis were collected from 17 egg-positive cases, and the number of worms ranged from 1 to 114 in each individual. A positive correlation between EPGs and numbers of worms was demonstrated (r = 0.681, P < 0.001). Worm recovery rates were 9.7% in cases of EPGs 1-1,000 and 73.7% in those of EPGs over 1,000. No worms were detected from egg-negative subjects. The maximum egg count per worm per day was roughly estimated at 3,770 in a subject with EPGs of 2,664 and 106 collected worms. CONCLUSIONS: The numbers of worms are significantly correlated with the egg counts in human clonorchiasis. It is estimated that at least 110 worms are present in a human body with EPGs around 3,000, and the egg productivity of a worm per day is around 4,000.

  10. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
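
    As background for the moment-constrained form discussed above, the short sketch below computes a classical maximum entropy density p(x) ∝ exp(λ₁x + λ₂x²) on an assumed bounded grid whose first two moments match given values, solving for the Lagrange multipliers with a root finder; the Bayesian treatment of the multipliers proposed in the paper is not implemented here.

        import numpy as np
        from scipy.optimize import root

        # Bounded support grid (assumed); target first and second moments (mean 1, sd 1.5)
        x = np.linspace(-5.0, 5.0, 2001)
        dx = x[1] - x[0]
        target = np.array([1.0, 1.0**2 + 1.5**2])

        def density(lam):
            z = lam[0] * x + lam[1] * x**2
            w = np.exp(z - z.max())              # guard against overflow
            return w / (w.sum() * dx)

        def moment_gap(lam):
            p = density(lam)
            return np.array([(p * x).sum() * dx, (p * x**2).sum() * dx]) - target

        sol = root(moment_gap, x0=np.array([0.0, -0.5]))   # start from a unit Gaussian shape
        p = density(sol.x)
        print("Lagrange multipliers:", sol.x)
        print("achieved moments    :", moment_gap(sol.x) + target)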

  11. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  12. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions to a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.

  13. Blood Count Tests: MedlinePlus Health Topic

    Science.gov (United States)


  14. Evaluation of lactate, white blood cell count, neutrophil count, procalcitonin and immature granulocyte count as biomarkers for sepsis in emergency department patients.

    Science.gov (United States)

    Karon, Brad S; Tolan, Nicole V; Wockenfus, Amy M; Block, Darci R; Baumann, Nikola A; Bryant, Sandra C; Clements, Casey M

    2017-11-01

    Lactate, white blood cell (WBC) and neutrophil count, procalcitonin and immature granulocyte (IG) count were compared for the prediction of sepsis, and severe sepsis or septic shock, in patients presenting to the emergency department (ED). We prospectively enrolled 501 ED patients with a sepsis panel ordered for suspicion of sepsis. WBC, neutrophil, and IG counts were measured on a Sysmex XT-2000i analyzer. Lactate was measured by i-STAT, and procalcitonin by Brahms Kryptor. We classified patients as having sepsis using a simplification of the 1992 consensus conference sepsis definitions. Patients with sepsis were further classified as having severe sepsis or septic shock using established criteria. Univariate receiver operating characteristic (ROC) analysis was performed to determine odds ratio (OR), area under the ROC curve (AUC), and sensitivity/specificity at optimal cut-off for prediction of sepsis (vs. no sepsis), and prediction of severe sepsis or septic shock (vs. no sepsis). There were 267 patients without sepsis and 234 with sepsis, including 35 patients with severe sepsis or septic shock. Lactate had the highest OR (1.44, 95% CI 1.20-1.73) for the prediction of sepsis, while WBC, neutrophil count and percent (neutrophil/WBC) had OR > 1.00 (p < 0.05). Lactate also performed best for the prediction of severe sepsis or septic shock, with an odds ratio (95% CI) of 2.70 (2.02-3.61) and AUC 0.89 (0.82-0.96). Traditional biomarkers (lactate, WBC, neutrophil count, procalcitonin, IG) have limited utility in the prediction of sepsis. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  15. Serum Copper Level Significantly Influences Platelet Count, Lymphocyte Count and Mean Cell Hemoglobin in Sickle Cell Anemia

    Directory of Open Access Journals (Sweden)

    Okocha Chide

    2015-12-01

    Full Text Available Background Changes in serum micronutrient levels affect a number of critically important metabolic processes; these could potentially influence blood counts and ultimately disease presentation in patients with sickle cell anemia (SCA). Objectives To evaluate the influence of serum micronutrient levels (zinc, copper, selenium and magnesium) on blood counts in steady-state SCA patients. Methods A cross-sectional study that involved 28 steady-state adult SCA subjects. Seven milliliters (mls) of blood was collected; 3 mls was for hemoglobin electrophoresis and full blood count determination, while 4 mls was for measurement of serum micronutrient levels by atomic absorption spectrophotometry. Correlation between serum micronutrient levels and blood counts was assessed by Pearson's linear regression. Ethical approval was obtained from the institutional review board and each participant gave informed consent. All data were analyzed with SPSS software version 20. Results There was a significant correlation between serum copper levels and mean cell hemoglobin (MCH), platelet and lymphocyte counts (r = 0.418; P = 0.02, r = -0.376; P = 0.04 and r = -0.383; P = 0.04, respectively). There were no significant correlations between the serum levels of the other micronutrients (selenium, zinc and magnesium) and blood counts. Conclusions Copper influences blood counts in SCA patients, probably by inducing red cell haemolysis and oxidant tissue damage and by stimulating the immune system.

  16. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally the horizontal orientation in a ship (heading has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP. The three-dimensional positioning system (GPS 3DF provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994, which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology on mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm s-1.

  17. SU-E-P-21: Impact of MLC Position Errors On Simultaneous Integrated Boost Intensity-Modulated Radiotherapy for Nasopharyngeal Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Chengqiang, L; Yin, Y; Chen, L [Shandong Cancer Hospital and Institute, 440 Jiyan Road, Jinan, 250117 (China)

    2015-06-15

    Purpose: To investigate the impact of MLC position errors on simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) for patients with nasopharyngeal carcinoma. Methods: To compare the dosimetric differences between the simulated plans and the clinical plans, ten patients with locally advanced NPC treated with SIB-IMRT were enrolled in this study. All plans were calculated with an inverse planning system (Pinnacle3, Philips Medical Systems). Random errors (−2 mm to 2 mm), shift errors (2 mm, 1 mm and 0.5 mm) and systematic extension/contraction errors (±2 mm, ±1 mm and ±0.5 mm) of the MLC leaf position were introduced respectively into the original plans to create the simulated plans. Dosimetry factors were compared between the original and the simulated plans. Results: The dosimetric impact of the random and systematic shift errors of MLC position was insignificant within 2 mm: the maximum changes in D95% of PGTV, PTV1 and PTV2 were −0.92±0.51%, 1.00±0.24% and 0.62±0.17%; the maximum changes in the D0.1cc of spinal cord and brainstem were 1.90±2.80% and −1.78±1.42%; and the maximum changes in the Dmean of the parotids were 1.36±1.23% and −2.25±2.04%. However, the impact of MLC extension or contraction errors was found to be significant. For 2 mm leaf extension errors, the average changes in D95% of PGTV, PTV1 and PTV2 were 4.31±0.67%, 4.29±0.65% and 4.79±0.82%; the average D0.1cc to spinal cord and brainstem increased by 7.39±5.25% and 6.32±2.28%; and the mean dose to the left and right parotid increased by 12.75±2.02% and 13.39±2.17%, respectively. Conclusion: The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm, but the dose distribution was highly sensitive to MLC extension or contraction errors. Attention should be paid to anatomic changes in target volumes and anatomical structures during the course of treatment, and individualized adaptive radiotherapy is recommended to ensure the intended doses.

  18. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    Science.gov (United States)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, and livestock are managed in pasture and rangeland which are usually huge in size, and are not easily accessible by vehicles. The current research method for livestock location identification and counting is visual observation which is very time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary to count animals which is noisy and disturbs the animals, and may introduce a source of error in counts. Such manual approaches are expensive, slow and labor intensive. In this paper we study the combination of small unmanned aerial vehicle (sUAV) and machine vision technology as a valuable solution to manual animal surveying. A fixed-wing UAV fitted with GPS and digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were flown with four UAS flights and individual photographs used to develop orthomosaic imagery. To detect animals in UAV imagery, a fully automatic technique was developed based on spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results in comparison to ground-truth show the effectiveness of our algorithm.

  19. PERFORMANCE OF OPPORTUNISTIC SPECTRUM ACCESS WITH SENSING ERROR IN COGNITIVE RADIO AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    N. ARMI

    2012-04-01

    Full Text Available Sensing in opportunistic spectrum access (OSA) is responsible for detecting the available channel by performing a binary hypothesis test on its busy or idle state. If the channel is busy, the secondary user (SU) cannot access it and refrains from data transmission; the SU is allowed to access the channel only when the primary user (PU) does not use it (idle state). However, the channel is sensed over an imperfect communication link: fading, noise and any obstacles present can cause sensing errors in PU signal detection. A false alarm detects an idle state as a busy channel, while miss-identification detects a busy state as an idle channel. False detection makes the SU refrain from transmission and reduces the number of bits transmitted; miss-identification, on the other hand, causes the SU to collide with the PU transmission. This paper studies the performance of OSA based on the greedy approach with sensing errors under the restriction of a maximum collision probability allowed (collision threshold) by the PU network. The throughput of the SU and a spectrum capacity metric are used to evaluate OSA performance and to make comparisons with the error-free case as a function of slot number under the greedy approach. The relations between throughput and signal-to-noise ratio (SNR) for different collision probabilities, as well as false detection for different SNRs, are presented. The obtained results show that CR users can gain a reward from previous slots both with and without sensing errors, as indicated by the throughput improvement as the slot number increases. However, sensing over an imperfect channel with sensing errors degrades the throughput performance. The throughput of the SU and the spectrum capacity also improve as the maximum collision probability allowed by the PU network increases; due to frequent collisions with the PU, however, they decrease again beyond a certain value of the collision threshold. Computer simulation is used to evaluate and validate this work.
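
    The trade-off between false alarms (lost idle slots) and miss-identifications (collisions with the PU) can be seen in a much simpler setting than the greedy slot-selection scheme studied here. The sketch below simulates a single slotted channel with assumed idle and sensing-error probabilities and reports the SU throughput together with the collision probability against an assumed cap.

        import numpy as np

        rng = np.random.default_rng(3)
        slots = 100_000
        p_idle = 0.6                # probability the PU leaves the channel idle (assumed)
        p_fa, p_md = 0.10, 0.05     # false-alarm and miss-identification probabilities (assumed)
        collision_cap = 0.04        # maximum collision probability allowed by the PU network

        idle = rng.random(slots) < p_idle
        sensed_idle = np.where(idle,
                               rng.random(slots) >= p_fa,   # idle slot: sensed idle unless false alarm
                               rng.random(slots) < p_md)    # busy slot: sensed idle only on a miss

        transmit = sensed_idle
        throughput = np.mean(transmit & idle)                # fraction of slots carrying useful SU data
        collision = np.mean(transmit & ~idle) / max(np.mean(~idle), 1e-12)  # P(collision | PU busy)

        print(f"SU throughput  = {throughput:.3f} (ideal sensing: {p_idle:.3f})")
        print(f"collision prob = {collision:.3f} (cap {collision_cap})")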

  20. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  1. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
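
    A toy version of the comparison is easy to set up. In the sketch below, one data bin depends linearly on several systematic parameters with assumed sigmas and slopes plus MC statistical noise; the unisim estimate sums the squared one-at-a-time 1σ shifts, while the multisim estimate is the variance over runs with all parameters drawn at random.

        import numpy as np

        rng = np.random.default_rng(4)
        n_sys = 5
        sigmas = np.array([0.8, 0.5, 0.3, 0.2, 0.1])    # assumed systematic sigmas
        slopes = np.array([1.0, -0.7, 0.4, 0.9, -0.2])  # effect of each parameter on bin m (assumed)
        mc_stat = 0.05                                  # MC statistical error per run (assumed)

        def observable(delta):
            """Toy prediction in one data bin for systematic shifts `delta`, plus MC noise."""
            return slopes @ delta + rng.normal(0.0, mc_stat)

        nominal = observable(np.zeros(n_sys))

        # Unisim: one run per parameter, shifted by +1 sigma
        unisim_shifts = np.array([observable(sigmas * np.eye(n_sys)[k]) - nominal
                                  for k in range(n_sys)])
        var_unisim = np.sum(unisim_shifts**2)

        # Multisim: every run draws all parameters from their assumed normal distributions
        draws = rng.normal(0.0, sigmas, size=(500, n_sys))
        multisim = np.array([observable(d) for d in draws])
        var_multisim = np.var(multisim)

        print(f"true syst. variance = {np.sum((slopes * sigmas)**2):.3f}")
        print(f"unisim estimate     = {var_unisim:.3f}")
        print(f"multisim estimate   = {var_multisim:.3f}")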

  2. Interpretation of galaxy counts

    International Nuclear Information System (INIS)

    Tinsely, B.M.

    1980-01-01

    New models are presented for the interpretation of recent counts of galaxies to 24th magnitude, and predictions are shown to 28th magnitude for future comparison with data from the Space Telescope. The results supersede earlier, more schematic models by the author. Tyson and Jarvis found in their counts a ''local'' density enhancement at 17th magnitude, on comparison with the earlier models; the excess is no longer significant when a more realistic mixture of galaxy colors is used. Bruzual and Kron's conclusion that Kron's counts show evidence for evolution at faint magnitudes is confirmed, and it is predicted that some 23d magnitude galaxies have redshifts greater than unity. These may include spheroidal systems, elliptical galaxies, and the bulges of early-type spirals and S0's, seen during their primeval rapid star formation

  3. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, S; Hong, C; Kim, M; Chung, K; Kim, J; Han, Y; Ahn, S; Chung, S; Shin, E; Shin, J; Kim, H; Kim, D; Choi, D [Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2014-06-01

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference maker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at every 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines using in-house software. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. The Gantry angle dependency should be considered during MLC leaf position error analysis.

  4. Inherent Error in Asynchronous Digital Flight Controls.

    Science.gov (United States)

    1980-02-01

    operation will be eliminated. If T* is close to T, the inherent error (eA) is a small value. Then the deficiency of the basic model, which is de... [...] ...indicate the channel failure. To reduce this deficiency, the new model computes a tolerance value equal to the maximum steady-state sample covariance of the

  5. Quantifying the sources of variability in equine faecal egg counts: implications for improving the utility of the method.

    Science.gov (United States)

    Denwood, M J; Love, S; Innocent, G T; Matthews, L; McKendrick, I J; Hillary, N; Smith, A; Reid, S W J

    2012-08-13

    The faecal egg count (FEC) is the most widely used means of quantifying the nematode burden of horses, and is frequently used in clinical practice to inform treatment and prevention. The statistical process underlying the FEC is complex, comprising a Poisson counting error process for each sample, compounded with an underlying continuous distribution of means between samples. Being able to quantify the sources of variability contributing to this distribution of means is a necessary step towards providing estimates of statistical power for future FEC and FECRT studies, and may help to improve the usefulness of the FEC technique by identifying and minimising unwanted sources of variability. Obtaining such estimates require a hierarchical statistical model coupled with repeated FEC observations from a single animal over a short period of time. Here, we use this approach to provide the first comparative estimate of multiple sources of within-horse FEC variability. The results demonstrate that a substantial proportion of the observed variation in FEC between horses occurs as a result of variation in FEC within an animal, with the major sources being aggregation of eggs within faeces and variation in egg concentration between faecal piles. The McMaster procedure itself is associated with a comparatively small coefficient of variation, and is therefore highly repeatable when a sufficiently large number of eggs are observed to reduce the error associated with the counting process. We conclude that the variation between samples taken from the same animal is substantial, but can be reduced through the use of larger homogenised faecal samples. Estimates are provided for the coefficient of variation (cv) associated with each within animal source of variability in observed FEC, allowing the usefulness of individual FEC to be quantified, and providing a basis for future FEC and FECRT studies. Copyright © 2012 Elsevier B.V. All rights reserved.
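
    The compounding of sources described above can be imitated with a small simulation. The sketch below stacks an assumed between-pile and within-faeces (aggregation) variation on top of the Poisson counting step of a McMaster-type count (multiplication factor assumed to be 50) for a single horse, and compares the total within-horse coefficient of variation with the counting-only component; all rates and cv values are illustrative assumptions, not the estimates of the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        n_samples = 20_000
        true_epg = 300.0                     # a single horse's underlying mean eggs per gram (assumed)
        cv_pile, cv_aggregation = 0.4, 0.3   # assumed between-pile and within-faeces variability
        mf = 50                              # assumed McMaster multiplication factor (eggs counted x 50)

        # Hierarchy: pile-level mean -> local aggregation -> Poisson counting of eggs actually seen
        pile_mean = true_epg * rng.gamma(1 / cv_pile**2, cv_pile**2, n_samples)
        local_mean = pile_mean * rng.gamma(1 / cv_aggregation**2, cv_aggregation**2, n_samples)
        eggs_counted = rng.poisson(local_mean / mf)
        fec = eggs_counted * mf

        cv_total = fec.std() / fec.mean()
        cv_counting_only = 1.0 / np.sqrt(true_epg / mf)   # Poisson cv at this mean egg count
        print(f"total within-horse cv = {cv_total:.2f}")
        print(f"counting-only cv      = {cv_counting_only:.2f}")

    Increasing the amount of homogenised faeces examined (a larger effective divisor than 50) shrinks the counting-only component, which is the practical recommendation made above.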

  6. A note on estimating errors from the likelihood function

    International Nuclear Information System (INIS)

    Barlow, Roger

    2005-01-01

    The points at which the log likelihood falls by 1/2 from its maximum value are often used to give the 'errors' on a result, i.e. the 68% central confidence interval. The validity of this is examined for two simple cases: a lifetime measurement and a Poisson measurement. Results are compared with the exact Neyman construction and with the simple Bartlett approximation. It is shown that the accuracy of the log likelihood method is poor, and the Bartlett construction explains why it is flawed.
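
    For the Poisson case discussed above, the Δln L = 1/2 interval can be computed directly and compared with an exact central (Garwood-style) construction; a small sketch, with an arbitrary observed count of n = 5 chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

n = 5  # observed Poisson count (illustrative)

# Log-likelihood for Poisson mean mu (constant terms dropped)
lnL = lambda mu: n * np.log(mu) - mu
target = lnL(n) - 0.5          # where ln L has fallen by 1/2 from its maximum

lo = brentq(lambda mu: lnL(mu) - target, 1e-6, n)
hi = brentq(lambda mu: lnL(mu) - target, n, 10 * n + 10)
print("Delta lnL = 1/2 interval:", (round(lo, 3), round(hi, 3)))

# Exact (Garwood/Neyman) 68% central confidence interval for comparison
alpha = 0.32
exact_lo = 0.5 * chi2.ppf(alpha / 2, 2 * n)
exact_hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * n + 2)
print("exact central interval:", (round(exact_lo, 3), round(exact_hi, 3)))
```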

  7. 15 Mcps photon-counting X-ray computed tomography system using a ZnO-MPPC detector and its application to gadolinium imaging

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Eiichi, E-mail: dresato@iwate-med.ac.jp [Department of Physics, Iwate Medical University, 2-1-1 Nishitokuta, Yahaba, Iwate 028-3694 (Japan); Sugimura, Shigeaki [Tokyo Denpa Co. Ltd., 82-5 Ueno, Ichinohe, Iwate 028-5321 (Japan); Endo, Haruyuki [Iwate Industrial Research Institute 3, 3-35-2 Shinden, Iioka, Morioka, Iwate 020-0852 (Japan); Oda, Yasuyuki [Department of Physics, Iwate Medical University, 2-1-1 Nishitokuta, Yahaba, Iwate 028-3694 (Japan); Abudurexiti, Abulajiang [Faculty of Software and Information Science, Iwate Prefectural University, 152-52 Sugo, Takizawa, Iwate 020-0193 (Japan); Hagiwara, Osahiko; Osawa, Akihiro; Matsukiyo, Hiroshi; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya [3rd Department of Surgery, Toho University School of Medicine, 2-17-6 Ohashi, Meguro-ku, Tokyo 153-8515 (Japan); Sato, Shigehiro [Department of Microbiology, School of Medicine, Iwate Medical University, 19-1 Uchimaru, Morioka, Iwate 020-0023 (Japan); Ogawa, Akira [Department of Neurosurgery, School of Medicine, Iwate Medical University, 19-1 Uchimaru, Morioka, Iwate 020-0023 (Japan); Onagawa, Jun [Department of Electronics, Faculty of Engineering, Tohoku Gakuin University, 1-13-1 Chuo, Tagajo, Miyagi 985-8537 (Japan)

    2012-01-15

    The 15 Mcps photon-counting X-ray computed tomography (CT) system is a first-generation type and consists of an X-ray generator, a turntable, a translation stage, a two-stage controller, a detector consisting of a 2 mm-thick zinc-oxide (ZnO) single-crystal scintillator and an MPPC (multipixel photon counter) module, a counter card (CC), and a personal computer (PC). High-speed photon counting was carried out using the detector in the X-ray CT system. The maximum count rate was 15 Mcps (mega counts per second) at a tube voltage of 100 kV and a tube current of 1.95 mA. Tomography is accomplished by repeated translations and rotations of an object, and projection curves of the object are obtained by the translation. The pulses of the event signal from the module are counted by the CC in conjunction with the PC. The minimum exposure time for obtaining a tomogram was 15 min, and photon-counting CT was accomplished using gadolinium-based contrast media. Highlights: • We developed a first-generation 15 Mcps photon-counting X-ray computed tomography (CT) system. • High-speed photon counting was carried out using a zinc-oxide (ZnO) single-crystal scintillator and an MPPC (multipixel photon counter) module in the X-ray CT system. • Tomography is accomplished by repeated translations and rotations of an object. • The minimum exposure time for obtaining a tomogram was 15 min. • The photon-counting CT was accomplished using gadolinium-based contrast media.

  8. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters.

    Science.gov (United States)

    Song, Jin Woo; Park, Chan Gook

    2018-04-21

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. To prevent magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced compared with the ZUPT-only case, which in turn improves the position accuracy. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms.
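
    The second-stage update described above can be pictured with a minimal scalar Kalman filter in which the heading error is treated as a random-walk state observed through the course-angle error; the noise levels and the constant true error below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal scalar Kalman filter: heading error modeled as a random walk,
# observed (noisily) through the course-angle error from the first-stage filter.
q = 0.01 ** 2   # assumed process noise variance per step (rad^2)
r = 0.05 ** 2   # assumed measurement noise variance (rad^2)

x, p = 0.0, 1.0  # state estimate (heading error, rad) and its variance

def kf_step(x, p, z):
    # Predict: random-walk model
    p = p + q
    # Update with course-angle-error measurement z
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

rng = np.random.default_rng(2)
true_heading_error = 0.1     # rad, held constant for the illustration
for _ in range(50):
    z = true_heading_error + rng.normal(0.0, np.sqrt(r))
    x, p = kf_step(x, p, z)
print("estimated heading error (rad):", round(x, 4))
```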

  9. Triaxial Accelerometer Error Coefficients Identification with a Novel Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, which is widely utilized for optimization purposes. Triaxial accelerometer error coefficients are relatively unstable with environmental disturbances and aging of the instrument. Therefore, identifying triaxial accelerometer error coefficients accurately and at low cost is of great importance to improve the overall performance of a triaxial accelerometer-based strapdown inertial navigation system (SINS). In this study, a novel artificial fish swarm algorithm (NAFSA) is first introduced that eliminates the demerits of AFSA (failure to use the artificial fishes' previous experiences, an imbalance between exploration and exploitation, and high computational cost). In NAFSA, the functional behaviors and overall procedure of AFSA have been improved through several parameter variations. Second, a hybrid accelerometer error coefficients identification algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for triaxial accelerometer error coefficients identification. Furthermore, the NAFSA-identified coefficients are verified with a 24-position verification experiment and a triaxial accelerometer-based SINS navigation experiment. The performance of MCS-NAFSA is compared with that of the conventional calibration method and the optimal AFSA. Finally, both experiments demonstrate the high efficiency of MCS-NAFSA for triaxial accelerometer error coefficients identification.

  10. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring and consequences of medication errors, and the prevention and management of medication errors are explained clearly, with tables that are easy to understand.

  11. Implication of spot position error on plan quality and patient safety in pencil-beam-scanning proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G. [Division of Medical Physics, Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2014-08-15

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1 σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1 σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot
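
    The single-spot perturbation described above is easy to reproduce in one dimension: sum equally weighted Gaussian spots, shift one spot by the position error, and normalize the dose difference to the nominal maximum. The sketch below uses the 4.0 mm spot size, 1 σ spacing and 1.2 mm error quoted in the abstract, but it is only an illustrative toy model; the resulting few-per-cent figure is indicative, not a reproduction of the paper's result.

```python
import numpy as np

# 1-D illustration: uniform line dose from equally weighted Gaussian spots;
# one spot is shifted by a position error PE and the percent dose error computed.
sigma = 4.0          # spot size, 1 sigma in mm
ss = 1.0 * sigma     # spot spacing, here 1 sigma
pe = 1.2             # single-spot position error in mm

x = np.linspace(-40.0, 40.0, 2001)
centres = np.arange(-30.0, 30.0 + ss, ss)

def dose(spot_centres):
    return sum(np.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c in spot_centres)

nominal = dose(centres)
perturbed_centres = centres.copy()
perturbed_centres[len(centres) // 2] += pe   # shift one central spot
perturbed = dose(perturbed_centres)

pde = 100.0 * (perturbed - nominal) / nominal.max()
print("max percent dose error: %.2f%%" % np.abs(pde).max())
```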

  12. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  13. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  14. Error exponents for entanglement concentration

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Koashi, Masato; Matsumoto, Keiji; Morikoshi, Fumiaki; Winter, Andreas

    2003-01-01

    Consider entanglement concentration schemes that convert n identical copies of a pure state into a maximally entangled state of a desired size with success probability being close to one in the asymptotic limit. We give the distillable entanglement, the number of Bell pairs distilled per copy, as a function of an error exponent, which represents the rate of decrease in failure probability as n tends to infinity. The formula fills the gap between the least upper bound of distillable entanglement in probabilistic concentration, which is the well-known entropy of entanglement, and the maximum attained in deterministic concentration. The method of types in information theory enables the detailed analysis of the distillable entanglement in terms of the error rate. In addition to the probabilistic argument, we consider another type of entanglement concentration scheme, where the initial state is deterministically transformed into a (possibly mixed) final state whose fidelity to a maximally entangled state of a desired size converges to one in the asymptotic limit. We show that the same formula as in the probabilistic argument is valid for the argument on fidelity by replacing the success probability with the fidelity. Furthermore, we also discuss entanglement yield when optimal success probability or optimal fidelity converges to zero in the asymptotic limit (strong converse), and give the explicit formulae for those cases

  15. Recursive algorithms for phylogenetic tree counting.

    Science.gov (United States)

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
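
    For the "easy" baseline case mentioned above (all samples from one time point, no constraints), the number of fully ranked labelled binary trees follows from counting coalescence choices; a small sketch of that standard calculation (the constrained and sampled-ancestor algorithms of the paper are not reproduced here).

```python
from math import comb

def ranked_tree_count(n):
    """Number of ranked, labelled binary trees on n contemporaneous tips.

    Going backwards in time, when k lineages remain the next coalescence
    joins one of C(k, 2) pairs, so the total is the product of C(k, 2)
    for k = 2..n.
    """
    count = 1
    for k in range(2, n + 1):
        count *= comb(k, 2)
    return count

for n in (2, 3, 4, 5, 10):
    print(n, ranked_tree_count(n))
```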

  16. Photon-counting image sensors

    CERN Document Server

    Teranishi, Nobukazu; Theuwissen, Albert; Stoppa, David; Charbon, Edoardo

    2017-01-01

    The field of photon-counting image sensors is advancing rapidly with the development of various solid-state image sensor technologies including single photon avalanche detectors (SPADs) and deep-sub-electron read noise CMOS image sensor pixels. This foundational platform technology will enable opportunities for new imaging modalities and instrumentation for science and industry, as well as new consumer applications. Papers discussing various photon-counting image sensor technologies and selected new applications are presented in this all-invited Special Issue.

  17. Cross-validation of the Dot Counting Test in a large sample of credible and non-credible patients referred for neuropsychological testing.

    Science.gov (United States)

    McCaul, Courtney; Boone, Kyle B; Ermshar, Annette; Cottingham, Maria; Victor, Tara L; Ziegler, Elizabeth; Zeller, Michelle A; Wright, Matthew

    2018-01-18

    To cross-validate the Dot Counting Test in a large neuropsychological sample. Dot Counting Test scores were compared in credible (n = 142) and non-credible (n = 335) neuropsychology referrals. Non-credible patients scored significantly higher than credible patients on all Dot Counting Test scores. While the original E-score cut-off of ≥17 achieved excellent specificity (96.5%), it was associated with mediocre sensitivity (52.8%). However, the cut-off could be substantially lowered to ≥13.80, while still maintaining adequate specificity (≥90%), and raising sensitivity to 70.0%. Examination of non-credible subgroups revealed that Dot Counting Test sensitivity in feigned mild traumatic brain injury (mTBI) was 55.8%, whereas sensitivity was 90.6% in patients with non-credible cognitive dysfunction in the context of claimed psychosis, and 81.0% in patients with non-credible cognitive performance in depression or severe TBI. Thus, the Dot Counting Test may have a particular role in detection of non-credible cognitive symptoms in claimed psychiatric disorders. As an alternative to use of the E-score, failure on ≥1 cut-offs applied to individual Dot Counting Test scores (≥6.0″ for mean grouped dot counting time, ≥10.0″ for mean ungrouped dot counting time, and ≥4 errors) occurred in 11.3% of the credible sample, while nearly two-thirds (63.6%) of the non-credible sample failed one or more of these cut-offs. An E-score cut-off of 13.80, or failure on ≥1 individual score cut-offs, resulted in few false positive identifications in credible patients, and achieved high sensitivity (64.0-70.0%), and therefore appears appropriate for use in identifying neurocognitive performance invalidity.

  18. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
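    The TER/LER contrast described above can be made concrete with a toy compound-conditioning simulation: a Rescorla-Wagner-style shared-error update versus a per-cue (local) error update. The learning-rate and trial-count values are arbitrary illustrative choices, not parameters from the reviewed studies.

```python
import numpy as np

# Two cues (A and B) trained in compound with an outcome (lambda = 1).
alpha, beta, lam, n_trials = 0.3, 1.0, 1.0, 30

def train(update):
    v = np.zeros(2)                    # associative strengths of A and B
    for _ in range(n_trials):
        v = v + update(v)
    return v

# Total error reduction (Rescorla-Wagner style): error shared across the compound
ter = lambda v: alpha * beta * (lam - v.sum()) * np.ones(2)

# Local error reduction: each cue reduces only its own prediction error
ler = lambda v: alpha * beta * (lam - v)

print("TER final strengths:", train(ter).round(3))   # compound sum tends to lambda
print("LER final strengths:", train(ler).round(3))   # each cue tends to lambda
```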

  19. Influence of calculation error of total field anomaly in strongly magnetic environments

    Science.gov (United States)

    Yuan, Xiaoyu; Yao, Changli; Zheng, Yuanman; Li, Zelin

    2016-04-01

    An assumption made in many magnetic interpretation techniques is that ΔTact (the total field anomaly - the measurement given by total field magnetometers after the main geomagnetic field, T0, is removed) can be approximated mathematically by ΔTpro (the projection of the anomalous field vector in the direction of the earth's normal field). In order to meet the demand for high-precision processing of magnetic prospecting data, the approximation error E between ΔTact and ΔTpro is studied in this research. Generally speaking, the error E is extremely small when anomalies are not greater than about 0.2 T0. However, the error E may be large in highly magnetic environments, which leads to significant effects on subsequent quantitative inference. Therefore, we investigate the error E through numerical experiments on high-susceptibility bodies. A systematic error analysis was made using a 2-D elliptic cylinder model. The error analysis shows that the magnitude of ΔTact is usually larger than that of ΔTpro. This implies that a theoretical anomaly computed without accounting for the error E overestimates the anomaly associated with the body. It is demonstrated through numerical experiments that the error E is appreciable and should not be ignored. It is also shown that the curves of ΔTpro and the error E have a certain symmetry when the directions of magnetization and the geomagnetic field change. To be more specific, Emax (the maximum of the error E) appears above the center of the magnetic body when the magnetic parameters are fixed. Some other characteristics of the error E are also found; for instance, the curve of Emax with respect to latitude is symmetrical on both sides of the magnetic equator, and the extremum of Emax is always found in the mid-latitudes, and so on. It is also demonstrated that the error E has a great influence on magnetic processing, transformation and inversion results. It is concluded that when the bodies have high magnetic susceptibilities, the error E can
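
    The two quantities compared above are straightforward to evaluate at a single observation point; a minimal vector sketch, where the field values are made-up illustrative numbers for a strongly magnetic setting (anomaly larger than 0.2 T0).

```python
import numpy as np

# Illustration of the approximation error E = dT_act - dT_pro at one point.
T0 = np.array([0.0, 27000.0, 45000.0])      # main field vector, nT (illustrative)
Ta = np.array([5000.0, -12000.0, 20000.0])  # anomalous field vector, nT (strong source)

t0_hat = T0 / np.linalg.norm(T0)
dT_act = np.linalg.norm(T0 + Ta) - np.linalg.norm(T0)  # total-field magnetometer reading minus T0
dT_pro = Ta @ t0_hat                                    # usual projection approximation

print("dT_act = %.1f nT, dT_pro = %.1f nT, E = %.1f nT"
      % (dT_act, dT_pro, dT_act - dT_pro))
```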

  20. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and pediatric patients. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  1. Platelet counting using the Coulter electronic counter.

    Science.gov (United States)

    Eggleton, M J; Sharp, A A

    1963-03-01

    A method for counting platelets in dilutions of platelet-rich plasma using the Coulter electronic counter is described. The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed.

  2. Automated uranium analysis by delayed-neutron counting

    International Nuclear Information System (INIS)

    Kunzendorf, H.; Loevborg, L.; Christiansen, E.M.

    1980-10-01

    Automated uranium analysis by fission-induced delayed-neutron counting is described. A short description is given of the instrumentation, including the transfer system, process control, irradiation and counting sites, and computer operations. Characteristic parameters of the facility (sample preparation, background, and standards) are discussed. A sensitivity of 817 ± 22 counts per 10⁻⁶ g U is found using irradiation, delay, and counting times of 20 s, 5 s, and 10 s, respectively. Precision is generally less than 1% for normal geological samples. Critical level and detection limits for 7.5 g samples are 8 and 16 ppb, respectively. The importance of some physical and elemental interferences is outlined. Dead-time corrections of the measured count rates are necessary, and a polynomial expression is used for count rates up to 10⁵. The presence of rare earth elements is regarded as the most important elemental interference. A typical application is given and other areas of application are described. (author)
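
    As a rough illustration of how such a measurement is reduced to a uranium concentration, the sketch below applies a generic non-paralyzable dead-time correction and the quoted sensitivity; the dead time, background and measured counts are assumptions for the example (the facility itself used a fitted polynomial correction).

```python
# Illustrative reduction of a delayed-neutron measurement to uranium content.
sensitivity = 817.0      # counts per microgram U (20 s / 5 s / 10 s timing, from the abstract)
sample_mass_g = 7.5
tau = 2e-6               # assumed dead time per count, s
count_time_s = 10.0

measured_counts = 5.0e4   # assumed gross counts in the counting window
background_counts = 40.0  # assumed background counts

# Simple non-paralyzable dead-time correction applied to the count rate
m = measured_counts / count_time_s
true_rate = m / (1.0 - m * tau)
corrected_counts = true_rate * count_time_s

micrograms_U = (corrected_counts - background_counts) / sensitivity
ppm_U = micrograms_U / sample_mass_g            # µg/g = ppm
print("U content: %.2f ppm" % ppm_U)
```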

  3. Count rate effect in proportional counters

    International Nuclear Information System (INIS)

    Bednarek, B.

    1980-01-01

    A new concept is presented explaining changes in spectrometric parameters of proportional counters which occur due to varying count rate. The basic feature of this concept is that the gas gain of the counter remains constant in a wide range of count rate and that the decrease in the pulse amplitude and the deterioration of the energy resolution observed are the results of changes in the shape of original current pulses generated in the active volume of the counter. In order to confirm the validity of this statement, measurements of the gas amplification factor have been made in a wide count rate range. It is shown that above a certain critical value the gas gain depends on both the operating voltage and the count rate. (author)

  4. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  5. Comparative evaluation of platelet count and antimicrobial efficacy of injectable platelet-rich fibrin with other platelet concentrates: An in vitro study

    Directory of Open Access Journals (Sweden)

    Prerna Ashok Karde

    2017-01-01

    Full Text Available Background: Platelet concentrates are used in various medical procedures to promote soft- and hard-tissue regeneration. In recent times, their antimicrobial efficacy has also been explored. However, various platelet concentrates have evolved which differ in their centrifugation protocols. One such recently introduced platelet concentrate is injectable platelet-rich fibrin (i-PRF) concentrate. Hence, the aim was to evaluate the antimicrobial property and platelet count of i-PRF in comparison to other platelet concentrates, i.e., PRF, platelet-rich plasma (PRP), and control (whole blood). Materials and Methods: Blood samples were obtained from 10 chronic generalized marginal gingivitis patients. Platelet concentrates were prepared using standardized centrifugation protocols. Platelet count was evaluated by the manual counting method using a smear preparation of each sample. Subsequently, antimicrobial activity against oral bacteria was examined on blood agar using the disc diffusion method to quantify the inhibitory effects. Results: Statistical significance was analyzed by one-way analysis of variance (ANOVA), with P < 0.05 considered significant. i-PRF showed a statistically significant difference (P < 0.001) in platelet count when compared to control. It was also significant when compared to PRP (P < 0.01) and PRF (P < 0.001). Conclusion: i-PRF has the maximum antimicrobial efficacy and a higher platelet count in comparison to the other platelet concentrates, thereby indicating that it may have a better regenerative potential than the others.

  6. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model has been chosen by examining the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard errors of the residuals. The adequacy of the selected model is assessed using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, forecasts of the monthly maximum and minimum temperature patterns of India for the next 3 years are produced with the help of the selected model.
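
    A minimal sketch of fitting the model order quoted above with statsmodels, run on a synthetic monthly series standing in for the 1981-2015 station data (the seasonal amplitude and noise level are arbitrary assumptions).

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly maximum-temperature series standing in for the real data.
rng = np.random.default_rng(3)
months = np.arange(35 * 12)
temps = 32 + 6 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.8, months.size)

# SARIMA(1,0,0)x(0,1,1)12 on the log-transformed series, the order selected in the paper.
model = SARIMAX(np.log(temps), order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)
print(result.summary().tables[1])

# Forecast the next 3 years (36 months) and back-transform
forecast = np.exp(result.forecast(steps=36))
print("first forecast year:", np.round(forecast[:12], 1))
```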

  7. Resonance ionization spectroscopy: Counting noble gas atoms

    International Nuclear Information System (INIS)

    Hurst, G.S.; Payne, M.G.; Chen, C.H.; Willis, R.D.; Lehmann, B.E.; Kramer, S.D.

    1981-01-01

    The purpose of this paper is to describe new work on the counting of noble gas atoms, using lasers for the selective ionization and detectors for counting individual particles (electrons or positive ions). When positive ions are counted, various kinds of mass analyzers (magnetic, quadrupole, or time-of-flight) can be incorporated to provide A selectivity. We show that a variety of interesting and important applications can be made with atom-counting techniques which are both atomic number (Z) and mass number (A) selective. (orig./FKS)

  8. Counting and Surveying Homeless Youth: Recommendations from YouthCount 2.0!, a Community-Academic Partnership.

    Science.gov (United States)

    Narendorf, Sarah C; Santa Maria, Diane M; Ha, Yoonsook; Cooper, Jenna; Schieszler, Christine

    2016-12-01

    Communities across the United States are increasing efforts to find and count homeless youth. This paper presents findings and lessons learned from a community/academic partnership to count homeless youth and conduct an in-depth research survey focused on the health needs of this population. Over a 4-week recruitment period, 632 youth were counted and 420 surveyed. Methodological successes included an extended counting period, broader inclusion criteria to capture those in unstable housing, use of student volunteers in health training programs, recruiting from magnet events for high risk youth, and partnering with community agencies to disseminate findings. Strategies that did not facilitate recruitment included respondent-driven sampling, street canvassing beyond known hotspots, and having community agencies lead data collection. Surveying was successful in gathering data on reasons for homelessness, history in public systems of care, mental health history and needs, sexual risk behaviors, health status, and substance use. Youth were successfully surveyed across housing types including shelters or transitional housing (n = 205), those in unstable housing such as doubled up with friends or acquaintances (n = 75), and those who were literally on the streets or living in a place not meant for human habitation (n = 140). Most youth completed the self-report survey and provided detailed information about risk behaviors. Recommendations to combine research data collection with counting are presented.

  9. Improving the counting efficiency in time-correlated single photon counting experiments by dead-time optimization

    Energy Technology Data Exchange (ETDEWEB)

    Peronio, P.; Acconcia, G.; Rech, I.; Ghioni, M. [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2015-11-15

    Time-Correlated Single Photon Counting (TCSPC) has been long recognized as the most sensitive method for fluorescence lifetime measurements, but often requiring “long” data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue. Splitting the light into several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach based on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rate. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (∼80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.
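    The counting-loss problem described above can be illustrated with a simple non-paralyzable dead-time model; the 12.4 ns value comes from the abstract, while the ~100 ns comparison and the incident rates are assumptions for the example.

```python
import numpy as np

# Non-paralyzable dead-time model: detected rate = R / (1 + R * tau).
# Illustrates why shrinking the dead time raises the usable count rate.
def detected_rate(incident_rate, dead_time):
    return incident_rate / (1.0 + incident_rate * dead_time)

incident = np.array([1e6, 1e7, 5e7, 8e7])     # photons/s reaching the detector (assumed)
for tau, label in [(100e-9, "conventional ~100 ns"), (12.4e-9, "fast AQC 12.4 ns")]:
    rates = detected_rate(incident, tau) / 1e6
    print(label, "->", np.round(rates, 1), "Mcps detected")
```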

  10. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    Science.gov (United States)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.

  11. Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices

    Science.gov (United States)

    Ma, Bao-Feng; Jiang, Hong-Gang

    2018-06-01

    Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.

  12. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  13. CalCOFI Larvae Counts Positive Tows

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish larvae counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets],...

  14. Analysis of neutron reflectivity data: maximum entropy, Bayesian spectral analysis and speckle holography

    International Nuclear Information System (INIS)

    Sivia, D.S.; Hamilton, W.A.; Smith, G.S.

    1991-01-01

    The analysis of neutron reflectivity data to obtain nuclear scattering length density profiles is akin to the notorious phaseless Fourier problem, well known in many fields such as crystallography. Current methods of analysis culminate in the refinement of a few parameters of a functional model, and are often preceded by a long and laborious process of trial and error. We start by discussing the use of maximum entropy for obtaining 'free-form' solutions of the density profile, as an alternative to the trial and error phase when a functional model is not available. Next we consider a Bayesian spectral analysis approach, which is appropriate for optimising the parameters of a simple (but adequate) type of model when the number of parameters is not known. Finally, we suggest a novel experimental procedure, the analogue of astronomical speckle holography, designed to alleviate the ambiguity problems inherent in traditional reflectivity measurements. (orig.)

  15. Scintillation counting assembly for atmospheric samples

    International Nuclear Information System (INIS)

    Appriou, D.; Doury, A.

    1962-01-01

    The author reports the development of a scintillation-based counting assembly with the following characteristics: a photomultiplier with a wide photocathode, a thin plastic scintillator for counting beta + alpha (with the possibility of mounting an alpha scintillator), a relatively low intrinsic background with respect to the activities to be counted, and a weakly varying efficiency. The counting objective is discussed, and the equipment tests (counter, proportional amplifier and pre-amplifier, input unit) are presented. The author describes the operation of the apparatus, discusses the selection of scintillators, reports the study of the intrinsic background (electron background, total background, background reduction), and discusses the counts (influence of the external source, sensitivity to alpha radiation, counting homogeneity, minimum detectable activity) and efficiencies

  16. Whales from space: counting southern right whales by satellite.

    Science.gov (United States)

    Fretwell, Peter T; Staniland, Iain J; Forcada, Jaume

    2014-01-01

    We describe a method of identifying and counting whales using very high resolution satellite imagery, through the example of southern right whales breeding in part of the Golfo Nuevo, Península Valdés in Argentina. Southern right whales have been extensively hunted over the last 300 years and, although numbers have recovered from near extinction in the early 20th century, current populations are fragmented and are estimated at only a small fraction of the pre-hunting total. Recent extreme right whale calf mortality events at Península Valdés, which constitutes the largest single population, have raised fresh concern for the future of the species. The WorldView2 satellite has a maximum 50 cm resolution and a water-penetrating coastal band in the far-blue part of the spectrum that allows it to see deeper into the water column. Using an image covering 113 km², we identified 55 probable whales and 23 other features that are possibly whales, with a further 13 objects that are only detected by the coastal band. Comparison of a number of classification techniques to automatically detect whale-like objects showed that a simple thresholding technique applied to the panchromatic and coastal bands delivered the best results. This is the first successful study using satellite imagery to count whales; a pragmatic, transferable method using this rapidly advancing technology that has major implications for future surveys of cetacean populations.

  17. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)

  18. Track counting in radon dosimetry

    International Nuclear Information System (INIS)

    Fesenbeck, Ingo; Koehler, Bernd; Reichert, Klaus-Martin

    2013-01-01

    The newly developed, computer-controlled track counting system is capable of imaging and analyzing the entire area of nuclear track detectors. The high optical resolution allows a new analysis approach for the process of automated counting using digital image processing technologies. In this way, more highly exposed detectors can also be evaluated reliably by an automated process. (orig.)

  19. CalCOFI Egg Counts Positive Tows

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...

  20. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Science.gov (United States)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.

    2015-01-01

    Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed the available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
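
    The idea of transform-plus-threshold compression of count data can be conveyed with a toy example using a plain discrete wavelet transform and hard thresholding (PyWavelets); this is not the CCSDS bit-plane encoder flown on MMS/FPI, and the signal shape, threshold and noise are made-up assumptions.

```python
import numpy as np
import pywt

# Toy illustration: compress a Poisson count "spectrum" by discarding small
# wavelet coefficients, then check the reconstruction error.
rng = np.random.default_rng(4)
energy = np.linspace(0, 1, 256)
rate = 200 * np.exp(-((energy - 0.4) / 0.08) ** 2) + 5   # smooth underlying distribution
counts = rng.poisson(rate).astype(float)

coeffs = pywt.wavedec(counts, "db4", level=4)
threshold = 3.0
compressed = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]

kept = sum(int(np.count_nonzero(c)) for c in compressed)
total = sum(c.size for c in coeffs)
recon = pywt.waverec(compressed, "db4")[: counts.size]

rms_err = np.sqrt(np.mean((recon - counts) ** 2))
print(f"coefficients kept: {kept}/{total}, RMS reconstruction error: {rms_err:.2f} counts")
```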